SO good. Not stringy, naturally sweet, especially when roasted like this it gets all nice and caramelized. It’s pretty versatile in lots of different dishes.
Once you manage to crack into it, that is. Need a sharp knife and patience.
Looks delicious!! This is one of my favorite go-tos for a filling and hearty meal.
Not a criticism, but in case that’s a bay leaf hiding in there, you probably want to pull it out before eating. Bay leaves do their best work in the pot; there’s no need to actually eat them. They’re not toxic or anything (that’s a common myth), but you definitely don’t want to accidentally bite into that thing. Your mouth will regret it…
Think I’ll make some curry tonight though, thanks for the inspiration!
I’m sure you know, but you’re probably going to get a lot of grief for this. I’m deeply suspicious of any new AI tool, especially one that tries to get in between me and my news (looking at you Feedly), and I’m sure I’m not the only one. So if you’re not already, I’d prepare yourself for a lot of strong emotions, and probably not in a good way.
If you wanted to get ahead of that kind of thing, you might want to explain what kinds of safeties you’re building into it. For example, on your roadmap you say you want it to “Generate argument of for and against perspective then summarise the result of the 2 arguments.” This kind of thing in particular is quite risky. Any time you try to introduce value statements into an LLM summary, you’re in the danger zone. Even if you’re just trying to summarize the actual perspective of the piece, you’re basically begging the LLM to hallucinate. But asking it to summarize hypothetical opposing arguments is just asking for trouble.
I could go on, but I don’t want to start a pile on. I appreciate when folks try to build cool stuff, you’ve just waded into some choppy waters…
Saganaki is just a Greek appetizer made with fried cheese, sometimes halloumi, but not necessarily. The way you describe it is one typical way it’s prepared.
It’s a cheese from Cyprus that fries without melting and has a neat squeaky texture when you chew it. It’s perfect sliced and sauteed in a skillet with some butter. Finish it with some lemon juice for the traditional Cypriot style. The dense texture also makes it ideal for tossing into salads. Fucking love halloumi.
Looks awesome! I’ve done similar salads and found that a pomegranate reduction is the perfect dressing for this type of combo. Think it’s time for another lunch…
That’s a very cool concept. I’d definitely be willing to participate in a platform that has that kind of trust system baked in, as long as it respected my privacy and couldn’t broadcast how much time I spend on specific things etc. Instance owners would also potentially get access to some incredibly personal and lucrative user data, so protections would have to be strict. But I guess there are a lot of ways to get at positive user engagement in a non-invasive way. I think it could solve a lot of current and potential problems. I wish I was confident the majority of users would be into it, but I’m not so sure.
For sure, it’s not an easy problem to address. But I’m not willing to give up on it just yet. Bad actors will always find a way to break the rules and go under the radar, but we should be making new rules and working to improve these platforms in good faith, with the assumption that most people want healthy communities that follow the rules.
I think by default bots should not be allowed anywhere. But if that’s a bridge too far, then their use should have to be regularly justified and explained to communities. Maybe it should even be a rule that their full code has to be released on a regular basis, so users can review it themselves and be sure nothing fishy is going on. I’m specifically thinking of the Media Bias Fact Checker Bot (I know, I harp on it too much). It’s basically a spammer bot at this point, cluttering up our feeds even when it can’t figure out the source, and providing bad and inaccurate information when it can. And mods refuse to answer for it.
I just subbed, thanks. This is kind of my fundamental challenge with this platform, though. I don’t want to miss anything on the subjects I’m interested in, so I sub to every instance’s version of the same community. I’m probably doing it wrong, but if I sub to just one small sub-community because I like the mods, or the lack of bots, I feel like I’d be missing a lot of content.
I agree, it’s already happening. The Media Bias Fact Checker bot is another example. Nobody I’ve interacted with wants it, it’s functionally useless and inaccurate, and it appears to be a cash grab (though we can’t know for sure, because the mods refuse to openly discuss it with users). We live in a capitalist society, so even platforms like Lemmy are subject to its pressures, and larger instances require active pushback from users to keep profits from taking precedence over user satisfaction.
Yes please, I’ll have one of those.
That’s one good-looking plate!