I don’t disagree, but it’s probably not that easy. Universities in my country don’t have the resources anymore to do many orals, and depending on the subject exams don’t test the same skills as coursework.
It’s not just the internet. For example, students are handing in essays straight from ChatGPT. Uni scanners flag it and the students may fail. But neither side has good evidence: the uni’s detection is unreliable (and unlikely to improve on false positives, or on false negatives for that matter), and it’s hard for a student to prove they did not use an LLM. Job seekers send in LLM-generated letters. Consultants probably give LLM-based reports to clients. We’re doomed.
Still dead, and not on the iOS App Store anymore
Seems to be back up, just seen a new post there
Probably. On Reddit, some of it can be managed at community (subreddit) level by bots automatically deleting posts or comments from recently joined accounts. Maybe a tiered system of mod privileges could work, where a junior mod can delete spam/offensive posts but not ban people. Mind you, banning people is not really effective in a fediverse where you can easily create new user accounts on another instance.
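The account-age check those bots apply is simple in principle. A minimal Python sketch (the 7-day threshold and function names are my own illustration, not any real bot's code):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed threshold: accounts younger than this get their posts auto-removed.
MIN_ACCOUNT_AGE = timedelta(days=7)

def should_remove(account_created: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if a post should be auto-removed because the author's
    account is newer than the configured minimum age."""
    if now is None:
        now = datetime.now(timezone.utc)
    return (now - account_created) < MIN_ACCOUNT_AGE

# Example: relative to 2024-01-10, a 2-day-old account is filtered,
# a months-old account is not.
now = datetime(2024, 1, 10, tzinfo=timezone.utc)
new_account = datetime(2024, 1, 8, tzinfo=timezone.utc)
old_account = datetime(2023, 6, 1, tzinfo=timezone.utc)
```

The limitation mentioned above applies here too: on a federated network, a determined spammer just registers fresh accounts on another instance, so this only raises the cost of spamming rather than preventing it.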
Interesting idea. But after thinking about it for a few minutes, I don’t think federated reputation would work for moderation privileges. Instances have their own rules, and I would not trust a hexbear mod to behave in line with lemmy.world rules and values. The same is true for communities, really.
Voyager on iOS has keyword filters