• 0 Posts
  • 13 Comments
Joined 2 years ago
Cake day: August 9th, 2023



  • Looks delicious!! This is one of my favorite go-tos for a filling and hearty meal.

    Not a criticism, but in case that’s a bay leaf hiding in there, you probably want to pull it out before eating. Bay leaves do their best work simmering in the pot; they’re not meant to be eaten. They’re not toxic or anything (that’s a common myth), but you definitely don’t want to accidentally bite into one. Your mouth will regret it…

    Think I’ll make some curry tonight though, thanks for the inspiration!


  • I’m sure you know, but you’re probably going to get a lot of grief for this. I’m deeply suspicious of any new AI tool, especially one that tries to get between me and my news (looking at you, Feedly), and I’m sure I’m not the only one. So if you’re not already, I’d prepare yourself for a lot of strong emotions, and probably not in a good way.

    If you wanted to get ahead of that kind of thing, you might want to explain what kinds of safeties you’re building into it. For example, on your roadmap you say you want it to “Generate argument of for and against perspective then summarise the result of the 2 arguments.” This kind of thing in particular is quite risky. Any time you try to introduce value statements into an LLM summary, you’re in the danger zone. Even if you’re just trying to summarize the actual perspective of the piece, you’re basically begging the LLM to hallucinate. But asking it to summarize hypothetical opposing arguments is just asking for trouble.

    I could go on, but I don’t want to start a pile-on. I appreciate when folks try to build cool stuff; you’ve just waded into some choppy waters…





  • That’s a very cool concept. I’d definitely be willing to participate in a platform that has that kind of trust system baked in, as long as it respected my privacy and couldn’t broadcast how much time I spend on specific things, etc. Instance owners would also potentially get access to some incredibly personal and lucrative user data, so protections would have to be strict. But I guess there are a lot of ways to get at positive user engagement in a non-invasive way, and I think it could solve a lot of current and potential problems. I wish I were confident the majority of users would be into it, but I’m not so sure.



  • I think by default bots should not be allowed anywhere. But if that’s a bridge too far, then their use should have to be regularly justified and explained to communities. Maybe it should even be a rule that their full code has to be released on a regular basis, so users can review it themselves and be sure nothing fishy is going on. I’m specifically thinking of the Media Bias Fact Checker Bot (I know, I harp on it too much). It’s basically a spammer bot at this point, cluttering up our feeds even when it can’t figure out the source, and providing bad and inaccurate information when it can. And mods refuse to answer for it.


  • I just subbed, thanks. This is kind of my fundamental challenge with this platform, though. I don’t want to miss anything on the subjects I’m interested in, so I sub to every instance’s version of the same community. I’m probably doing it wrong, but if I subbed to just one small sub-community because I like the mods, or the lack of bots, I feel like I’d be missing a lot of content.


  • I agree, it’s already happening. The Media Bias Fact Checker bot is another example. Nobody I’ve interacted with wants it; it’s functionally useless and inaccurate, and it appears to be a cash grab (though we can’t know for sure, because the mods refuse to openly discuss it with users). We live in a capitalist society, so even platforms like Lemmy are subject to its pressures; on larger instances, it takes active pushback from users to keep profits from taking precedence over user satisfaction.