I was thinking about moderation in PieFed after reading @[email protected] mention he doesn’t want NSFW content because it creates more work to moderate. But if done right, moderation shouldn’t fall heavily on admins at all.
One of the biggest flaws of Reddit is the imbalance between users and moderators: it leads to endless reliance on automods and AI filters, and to the usual complaints about power-mods. Most federated platforms just copy that model instead of proven alternatives like Discourse’s trust level system.
On Discourse, moderation power gets distributed across active, trusted users. You don’t see the same tension between “users vs. mods,” and it scales much better without requiring admins to constantly police content. That sort of system feels like a much healthier direction for PieFed.
Implementing this system could involve establishing trust levels based on user engagement within each community. Users could earn these trust levels by spending time reading discussions. This could either be community-specific, letting users build trust separately in different communities, or instance-wide, granting broader trust recognition based on overall activity across the instance. If not executed carefully, though, this could lead to overmoderation similar to Stack Overflow, where genuine contributions get stifled, or it could encourage karma farming akin to Reddit, where users game the system with bots that repost popular content.
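To make that a bit more concrete, here is a minimal sketch of how per-community vs. instance-wide trust could be tracked, loosely modelled on Discourse's read-time style requirements. All of the names and thresholds below are invented for illustration; nothing like this exists in PieFed today.

```python
# Hypothetical per-community trust levels, loosely modelled on Discourse's TL0-TL3.
# Thresholds and field names are assumptions for illustration only.
from dataclasses import dataclass, field

TRUST_THRESHOLDS = [
    # (level, min_days_visited, min_posts_read, min_minutes_reading)
    (1, 2, 30, 10),
    (2, 15, 100, 60),
    (3, 50, 500, 600),
]

@dataclass
class CommunityEngagement:
    days_visited: int = 0
    posts_read: int = 0
    minutes_reading: int = 0

    def trust_level(self) -> int:
        level = 0
        for lvl, days, posts, minutes in TRUST_THRESHOLDS:
            if (self.days_visited >= days
                    and self.posts_read >= posts
                    and self.minutes_reading >= minutes):
                level = lvl
        return level

@dataclass
class UserTrust:
    # community name -> engagement in that community
    per_community: dict[str, CommunityEngagement] = field(default_factory=dict)

    def community_trust(self, community: str) -> int:
        """Community-specific: trust is earned separately in each community."""
        return self.per_community.get(community, CommunityEngagement()).trust_level()

    def instance_trust(self) -> int:
        """Instance-wide: pool activity across all communities (naively summed)."""
        total = CommunityEngagement()
        for e in self.per_community.values():
            total.days_visited += e.days_visited
            total.posts_read += e.posts_read
            total.minutes_reading += e.minutes_reading
        return total.trust_level()
```

The interesting design choice is which variant unlocks which rights: community-specific trust maps naturally to mod-like powers inside that community, while instance-wide trust fits broader things like relaxed rate limits.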
Worth checking out this related discussion:
Rethinking Moderation: A Call for Trust Level Systems in the Fediverse.
> Most federated platforms just copy that model instead of proven alternatives like Discourse’s trust level system.
Speaking for myself - I’m not opposed to taking some elements of this level-up system that gives users more rights as they show that they’re not a troll. To what extent would vary, though. Discourse seems to be a somewhat different type of forum to Lemmy or PieFed.
> One of the biggest flaws of Reddit is the imbalance between users and moderators—it leads to endless reliance on automods, AI filters, and the same complaints about power-hungry mods.
I cannot imagine any reddit-clone not, at some point, needing to rely on automoderation tools.
And when used correctly, they can be a great boon - e.g. users can set up automated keyword filters (allowing None, All, or even just Some of the content through), which is a decision made by oneself, not someone else.
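As a toy illustration of that None / All / Some idea (the mode names and behaviour below are my own assumption, not an existing PieFed feature):

```python
# Illustrative user-side keyword filter with three pass-through modes.
from enum import Enum

class FilterMode(Enum):
    NONE = "let none of the matching content through"
    SOME = "let it through, but collapsed"
    ALL = "let all of it through (filter effectively off)"

def apply_keyword_filter(posts, keywords, mode: FilterMode):
    """Return the posts a user sees, given their own filter settings."""
    if mode is FilterMode.ALL:
        return posts
    visible = []
    for post in posts:
        matched = any(kw.lower() in post["title"].lower() for kw in keywords)
        if not matched:
            visible.append(post)
        elif mode is FilterMode.SOME:
            visible.append({**post, "collapsed": True})  # shown, but folded away
        # FilterMode.NONE: drop matching posts entirely
    return visible
```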
Community mods likewise choose whatever they are comfortable with - e.g. if you dislike people who downvote literally every post in the community, then it helps to have tools to detect that and put a stop to it. The main thing there (it seems to me) is to stay on top of understanding the tool and configuring it appropriately, so that it isn’t doing something you don’t approve of - e.g. in the aforementioned example, downvoting two posts in the community probably shouldn’t trigger a ban (however short in duration), whereas downvoting twenty probably should (so then what about 10? 5?).
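Something like the following sketch is what I have in mind: the thresholds and names are hypothetical, but it shows how the "2 vs. 20 downvotes" question becomes a setting the mod controls rather than something the tool decides on its own.

```python
# Rough sketch of a configurable downvote-pattern rule for an automod tool.
# Thresholds and names are hypothetical, not an existing PieFed API.
from dataclasses import dataclass

@dataclass
class DownvoteRule:
    min_downvotes: int = 20          # below this, do nothing at all
    max_downvote_ratio: float = 0.9  # flag only if >=90% of the user's votes are downvotes
    action: str = "notify_mods"      # or "temp_ban" - the mod decides, not the tool

def evaluate(rule: DownvoteRule, downvotes: int, upvotes: int) -> str | None:
    """Return the configured action if the voting pattern trips the rule, else None."""
    total = downvotes + upvotes
    if downvotes < rule.min_downvotes or total == 0:
        return None                  # two downvotes should never trigger anything
    if downvotes / total >= rule.max_downvote_ratio:
        return rule.action
    return None

# 25 downvotes against 1 upvote trips the rule; 5 downvotes among 30 upvotes does not.
print(evaluate(DownvoteRule(), downvotes=25, upvotes=1))   # -> notify_mods
print(evaluate(DownvoteRule(), downvotes=5, upvotes=30))   # -> None
```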
Concerning NSFW specifically: I’m not sure, there might be more issues. I’ve had a look at lemmynsfw and a few others and had a short run-in with moderation. Most content there is copied by random people without the consent of the original creators. So it’s predominantly copyright issues, plus some ethical woes when amateur content gets taken and spread without the depicted people having any sort of control over it. If I were in charge of that, I’d remove >90% of the content, and I couldn’t federate content without proper age restriction anyway, given how the law works where I live.
But that doesn’t take away from the broader argument. I think an automatic trust level system, and maybe even a web of trust between users, could help with some things. It’s probably a bit tricky to get right without introducing silly hierarchies or incentivising karma farming and the like.
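For what it’s worth, a web of trust could be as simple as trust decaying with every hop through a chain of "vouches", which also caps what a bot farm can gain from vouching for itself. This is a purely hypothetical sketch, not anything that exists or is planned:

```python
# Hypothetical web-of-trust propagation: trust flows from users you vouch for,
# damped per hop and cut off after a couple of hops so it can't be farmed deep
# into a bot network.
DAMPING = 0.5    # each hop passes on only half of the trust
MAX_DEPTH = 2    # trust does not travel far through the graph

def propagated_trust(vouches: dict[str, set[str]], source: str) -> dict[str, float]:
    """vouches maps a user to the set of users they personally vouch for."""
    scores = {source: 1.0}
    frontier = {source}
    for depth in range(1, MAX_DEPTH + 1):
        next_frontier = set()
        for user in frontier:
            for vouched in vouches.get(user, set()):
                gained = DAMPING ** depth
                if gained > scores.get(vouched, 0.0):
                    scores[vouched] = gained
                    next_frontier.add(vouched)
        frontier = next_frontier
    return scores

# alice vouches for bob, bob for carol: carol inherits a little of alice's trust,
# while dave (three hops out) gets nothing.
print(propagated_trust({"alice": {"bob"}, "bob": {"carol"}, "carol": {"dave"}}, "alice"))
```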
Discourse’s Trust Levels are an interesting idea, but not a novel one. They were lifted almost entirely from Stack Overflow. At the time, Discourse and Stack Overflow had a common founder, Jeff Atwood.
There’s a reason Stack Overflow is rapidly fading into obscurity… its moderation team (built off of trust levels) destroyed the very foundation of what made Stack Overflow good.
I am also not saying that what we have now (first-mover moderation or top-down moderation granting) is better… merely that should you look into this, tread lightly.
With all the reports of countries manipulating online content, that’s like leaving your house in a crime-ridden neighbourhood with the door wide open. As far as I know, there is no way to prevent bots when dealing with a highly sophisticated actor like China (which Meta reported has manipulated online content).
One thing that can help is keeping stats about the reports a user has made. If, say, 90% of their reports are good and they have made over 50 reports, the person can become a mod. You could also have a chart scoring who the best reporters are. The numbers can be tweaked, and you could do some statistical analysis to find the optimal values.
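A rough sketch of what that could look like, reusing the 90% / 50 numbers from above (everything else, including the names, is made up for illustration):

```python
# Promote users whose reports are consistently upheld by mods.
# The 90% accuracy and 50-report thresholds come from the comment above;
# the rest is a hypothetical sketch.
from dataclasses import dataclass

MIN_REPORTS = 50      # need a meaningful sample before trusting the ratio
MIN_ACCURACY = 0.90   # share of reports that mods agreed with

@dataclass
class ReporterStats:
    reports_made: int = 0
    reports_upheld: int = 0   # reports a mod acted on

    @property
    def accuracy(self) -> float:
        return self.reports_upheld / self.reports_made if self.reports_made else 0.0

def eligible_for_mod(stats: ReporterStats) -> bool:
    return stats.reports_made > MIN_REPORTS and stats.accuracy >= MIN_ACCURACY

def leaderboard(all_stats: dict[str, ReporterStats]) -> list[tuple[str, float]]:
    """Best reporters first, ignoring accounts with too few reports to judge."""
    ranked = [(name, s.accuracy) for name, s in all_stats.items()
              if s.reports_made > MIN_REPORTS]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```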