I don’t think the people raising this as a concern are trying to solve the problem of bigots on the internet; they are just asking for you to change the advertising you provide to remove the bigots from a place of visibility.
I’m gay
And if the racist is here to cause problems rather than commiserate with fellow racists, they now know exactly which community to avoid, thus spreading moderation problems everywhere else. I don’t think anyone is asking you to moderate every instance to ensure it’s sticking to your TOS or your viewpoints, but it’s a very minor ask not to showcase the racists and transphobes and bigots on the ‘join this platform’ page.
Eh, all chatbots trained on internet data become horny in the same way that virtually all AI is racist. It’s merely a reflection of the data they’re trained on, but a lot of people think they can magically control it by bolting ‘safeguards’ on after training. The constant cat-and-mouse game of prompting GPT-3/4 into ignoring its safeguards is proof this doesn’t work.
You can mitigate this through additional weighted layers and more complex upstream training tweaks, but then you have to pay to retrain, which is by far the most expensive part of OpenAI’s model.
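To make the “additional weighted layers vs. retraining” distinction concrete: the cheap option is to freeze the expensive pretrained weights and train only a small added layer on top. Here’s a toy NumPy sketch of that idea; all of the data, names, and the tiny “base model” are hypothetical stand-ins, not anything resembling how OpenAI actually does it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is the expensive pretrained model: its weights are frozen,
# so we never pay to update them.
def frozen_base(x):
    W_base = np.array([[1.0, -0.5], [0.3, 0.8]])  # fixed, never retrained
    return np.tanh(x @ W_base)

# The cheap part: a small trainable head stacked on top of the frozen base.
w_head = np.zeros(2)

X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)  # toy target

lr = 0.5
for _ in range(300):
    feats = frozen_base(X)                      # frozen features
    p = 1.0 / (1.0 + np.exp(-feats @ w_head))   # sigmoid output
    grad = feats.T @ (p - y) / len(y)           # logistic-loss gradient
    w_head -= lr * grad                         # update ONLY the head

acc = np.mean((p > 0.5) == (y > 0.5))
print(f"head-only accuracy: {acc:.2f}")
```

Training only `w_head` is orders of magnitude cheaper than touching `W_base`; the trade-off is that the head can only reshape whatever behavior the frozen base already encodes, which is exactly why bolt-on safeguards are weaker than retraining.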
Fascinating, do you happen to have a link? No worries if it’s buried or you don’t remember, but I’d love to read it
We don’t have any tools to treat different instances differently except defederating at this time
Probably most people interacting in our space are doing so through this method. It’s something we have always paid attention to. In the long, long run we will likely have to do something about it.
Practically speaking, we will likely have less tolerance for users like this who misbehave; we may have fewer discussions with them and simply ban them and remove their content more quickly.
Okay you specifically have carte blanche to ping me to tell me how awesome I am. But please try to keep it to less than maybe 30 pings a month
? why are you pinging me
I’m lost here, what are you trying to say
AI is programmed by humans or trained on human data. Either we’re dealing in extremes, where it’s impossible not to have bias (which is important framing for measuring bias), or we’re talking about how to minimize bias, not eliminate it entirely.
The ways to control for algorithmic bias are typically additional human-developed layers that counteract the bias present when you ingest large datasets for training. But that’s extremely work-intensive. I’ve seen some interesting hypotheticals where algorithms designed specifically to identify bias are used to tune layers with custom weighting, attempting to pull bias back down to acceptable levels, but even then we’ll probably need to watch how this changes the language the model uses about the groups it’s biased against.
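One concrete version of “an algorithm that identifies bias tuning the weights” is adding a measured bias gap directly to the training objective as a penalty. This is a minimal sketch on synthetic data, assuming a simple demographic-parity gap as the bias measure; everything here (the data, the penalty weight, the setup) is a hypothetical illustration, not a production debiasing method:

```python
import numpy as np

def train(X, y, group, lam, steps=500, lr=0.3):
    """Logistic regression; lam weights a demographic-parity penalty."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n                        # logistic-loss gradient
        # Measured bias: difference in mean predicted score between groups.
        gap = p[group == 1].mean() - p[group == 0].mean()
        s = p * (1 - p)                                 # sigmoid derivative
        d_gap = (X[group == 1] * s[group == 1, None]).mean(axis=0) \
              - (X[group == 0] * s[group == 0, None]).mean(axis=0)
        w -= lr * (grad + lam * gap * d_gap)            # gradient of 0.5*lam*gap^2
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return abs(p[group == 1].mean() - p[group == 0].mean())

rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, size=n)                      # sensitive attribute
X = rng.normal(size=(n, 3))
X[:, 0] += group                                        # proxy feature for group
y = (X[:, 1] + 0.5 * group > 0).astype(float)           # labels carry the bias

gap_plain = train(X, y, group, lam=0.0)
gap_debiased = train(X, y, group, lam=5.0)
print(f"gap without penalty: {gap_plain:.3f}, with penalty: {gap_debiased:.3f}")
```

The penalty pulls the between-group gap down, but only by trading away some fit to the (biased) labels, which is the “watch how this changes behavior” caveat above: shrinking one measured gap says nothing about the biases your metric doesn’t capture.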
We sure are, I think it should be in the sidebar but if it’s not, https://opencollective.com/beehaw
Yup, if you’re denied or in limbo you won’t be able to log in
I greatly appreciate the feedback and I have taken a few steps back this week to ensure I’m not overextending myself. Thank you for looking out for me 🥰💜
I think we’re approving around half, but I’m not sure if that’s held up today. To share my thoughts on the matter: I’m extremely concerned about the possibility of this place moving in the direction of echo chambers, for a variety of reasons. I’ll probably make another philosophy post in the next week or so, but I’ve been very overextended between this, work, current healthcare issues (I’ve had 2 surgeries in the last two weeks 😩), stuff for pride month (I’m a leader at my work’s pride ERG, moderated a speaker today, speaking for a group next week, gotta help with SF pride, etc), and in general being busy in my social life as well, so I haven’t really had a ton of time to contribute all my thoughts or put them on paper as I’d like to.
We want them to fit in with our philosophy, so we’re looking for signs that they’ve paid attention to our rules, our ethos, and our ambiance. To be clear, we’re not making judgements, but if you leave it blank or only talk about federation, we can’t be certain that you will vibe with how we moderate and what we’re trying to do here.
What this doesn’t show is active users, just total. For quite some time we’ve been one of the most active large instances. I don’t remember the exact timing, but we’ve held the spot of 3rd most active for some time; we just now have the total user count to match.
To be fair, it’s an investment firm assessing the financial valuation of a publicly traded company; that’s like asking a corporation why it cares about profits. On principle I agree that not everything in humanity needs a financial value to have value to humans.
Ahhh yeah, claims and payment processing are typically handled by the government or by insurance companies, and that’s definitely a valid risk. They already do as much as they can to automate — they’re mostly concerned with whether they’re making profit or keeping costs down. That is absolutely a sector to be worried about.
I’m not here to proselytize about what we decide to block or not. I’m explaining what the person above is requesting - not a block, but a conscious decision about what shows up on the join-lemmy list.