So, some part of Freenet has (or had; this one hasn’t been there in a while) a “web of trust” system that lets users take part in a sort of graph-based collective filtering framework. It works something like this:
- You choose who you trust to produce valid content (not spam or attacks on the network, that sort of thing). Others do the same.
- You also choose how much you trust others’ trust values. Those combine with your own ratings to give a sort of overall trust score: if you don’t know Bob but your friend does, you can assume with some level of confidence that Bob isn’t going to be a problem. (A rough sketch of how that combining step might work is just below.)
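To make that combining step concrete, here’s a very rough Python sketch. It is not Freenet’s actual WoT algorithm, just one guess at the shape of it: the names, score ranges, and the weighted-average rule are all invented for illustration.

```python
# Very rough sketch of combining direct and inherited trust.
# Not Freenet's actual WoT algorithm; names, ranges, and the
# weighted-average rule are all invented for illustration.

def effective_trust(me: str, target: str,
                    ratings: dict[tuple[str, str], float],
                    judgment: dict[tuple[str, str], float]) -> float:
    """How much `me` should trust `target`, in [-1, 1].

    ratings[(a, b)]  -- a's own rating of b's content (-1 = bad, 1 = good)
    judgment[(a, b)] -- how much a trusts b's *ratings of others* (0..1)
    """
    # My own rating, if I have one, wins outright.
    if (me, target) in ratings:
        return ratings[(me, target)]

    # Otherwise, take a weighted average of what the people whose
    # judgment I trust have said about the target.
    weighted_sum = 0.0
    total_weight = 0.0
    for (rater, rated), score in ratings.items():
        if rated != target:
            continue
        weight = judgment.get((me, rater), 0.0)
        if weight > 0:
            weighted_sum += weight * score
            total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0

# Riikka has never seen the spammer, but Alice (whose judgment she trusts) has.
ratings = {("alice", "bob"): 0.9, ("alice", "spammer"): -1.0}
judgment = {("riikka", "alice"): 0.8}
print(effective_trust("riikka", "spammer", ratings, judgment))  # -1.0
```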
Seeing the (vastly overblown, considering the nature of federation) controversy involving Beehaw, and various comments about moderation tools and users wanting more or less exposure to various kinds of content, this one thought that a web-of-trust sort of thing, or some related concept, could be useful. Such a system could let groups of users with similar preferences implement their own filtering, shifting the effort away from moderators making blanket rules and from every user having to make every judgment themselves (including vulnerable users who need to be shielded from certain content or they will, at best, leave). Riikka needn’t browse m/HatefulPricks (just contriving a mag; maybe that doesn’t exist :P ) looking for people to block before they go after her, if someone whose judgment she’s flagged as trusted has already encountered those people; a small sketch of that follows.
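For that Riikka example, the idea boils down to something like the following; the names and data structures are hypothetical, and a real implementation would live somewhere in the server/federation layer.

```python
# Hypothetical sketch: Riikka's effective block list is the union of her own
# blocks and the block lists published by people whose judgment she trusts.

def effective_blocks(me: str,
                     published_blocks: dict[str, set[str]],
                     trusted_raters: set[str]) -> set[str]:
    """Accounts hidden from `me`'s view, without `me` ever having to see them."""
    blocked = set(published_blocks.get(me, set()))
    for rater in trusted_raters:
        blocked |= published_blocks.get(rater, set())
    return blocked

# Riikka never visits the mag; Alice already flagged the trolls there.
published_blocks = {"riikka": set(), "alice": {"troll1", "troll2"}}
print(effective_blocks("riikka", published_blocks, {"alice"}))
# {'troll1', 'troll2'}
```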
[Please pretend there’s another interesting idea here. Riikka forgot what else she wanted to say :( ]
Maybe it’s useless here, or maybe some components or related ideas could help. It seems at least worth trying to apply to filtering undesirable (or outright harmful, which is/was the intent of Freenet’s WoT) content. Even just a handful of guardians could help, or maybe some sort of service identity: a “No Bigots” user that some actual user(s) use only to mark bigots, another for cat haters, something like that. If one of them gets hacked or somehow goes bad, just untrust their trust list and bam, fixed. Everything’s still there, just hidden (a rough sketch of that is below).

Of course, for it to really reduce mod workload (particularly somewhere like Beehaw) there would have to be some work on several sides of the implementation: maybe some concept of account age or other validating factors, some default configuration (so brand-new users aren’t exposed to a bunch of garbage no one else sees) or “go subscribe to the ‘no garbage’ filter list!” as a recommended step in account creation, and of course people would still have to be around to spot and mark unwanted content.
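The “untrust their list and everything reappears” part might look roughly like this. Again, every name here (Post, the guardian flags, the “NoBigots” identity) is invented just to show the shape of the idea: filtering happens at view time, so nothing is ever deleted, and dropping a compromised guardian from your trust list instantly restores whatever it had hidden.

```python
# Hypothetical sketch of view-time filtering against trusted guardian lists.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str

def visible_posts(feed: list[Post],
                  flags_by_guardian: dict[str, set[str]],
                  trusted_guardians: set[str]) -> list[Post]:
    """Hide posts whose authors were flagged by a guardian I still trust."""
    hidden_authors = set()
    for guardian in trusted_guardians:
        hidden_authors |= flags_by_guardian.get(guardian, set())
    # Nothing is deleted; posts are just filtered out of this view.
    return [p for p in feed if p.author not in hidden_authors]

feed = [Post("bob", "hi"), Post("bigot9000", "awful stuff")]
flags = {"NoBigots": {"bigot9000"}}

print(visible_posts(feed, flags, {"NoBigots"}))  # bigot9000's post is hidden
print(visible_posts(feed, flags, set()))         # guardian untrusted: it reappears
```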
Just an idea (and a crapload of yapping… sorry (sort of)!). Thoughts?