Quoting from the linked post:
It looks like you’re part of one of our experiments. The logged-in mobile web experience is currently unavailable for a portion of users. To access the site you can log on via desktop, the mobile apps, or wait for the experiment to conclude.
This is separate from the API issue. This will actually BLOCK you from even viewing Reddit in a mobile browser without using the official app.
Archive.org link in case the post is removed.
In general, I find that lay people have a very weak understanding of how research functions. That is a broad statement, but everything from IRB processes to how science is reported in manuscripts, and everything in between, tends to be a quagmire to outsiders, and I say that while fully recognizing that parts of the process are mired in red tape, bureaucracy, and endless administration.
For example, there’s a long-standing idea that IRBs are the gatekeepers of research. In reality, any IRB worth its salt (and really, all of them are, for compliance reasons) should be viewed as a research stakeholder. They are there to make research happen and to let scientists do the best work they can with minimal harm to participants. Sometimes that involves compromises, or finding less harmful alternatives, and this is a good thing. No researcher should dislike their IRB for that; complaints about unnecessary process and paperwork are criticisms of process, not of what the IRB is there to do. An IRB taking a long time to turn something around is a different matter from the IRB exhaustively reviewing the proposed work and returning questions that need to be addressed.
Another common example: scientific studies are frequently criticized over their sample sizes. Yes, a lot of research would benefit from better sampling and larger samples, but focusing narrowly on sample size misses the other considerations that go into evaluating statistical power. For example, if one wants to know whether beheading people results in injuries incongruent with life, one doesn’t need a large sample to reach that conclusion, because the effect size is so large. Of course more numbers help, but past a point they only add to the cost of the research without measurably improving the quality of the statistical inferences made. We could instead save some of that money and repurpose it for repeating a study.
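To make the effect-size point concrete, here is a minimal sketch using statsmodels (a common Python statistics library); the effect sizes and the 0.05 / 0.80 alpha and power values are just the conventional defaults, chosen for illustration, not anything specific to a real study. It shows how the required sample size for a simple two-group comparison shrinks as the expected effect grows:

```python
# Sketch: required sample size per group for a two-sample t-test,
# at conventional alpha = 0.05 and power = 0.80 (illustrative values only).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.2, 0.5, 0.8, 2.0):  # Cohen's d: small, medium, large, enormous
    n_per_group = analysis.solve_power(effect_size=effect_size,
                                       alpha=0.05, power=0.80,
                                       alternative='two-sided')
    print(f"d = {effect_size:>4}: ~{n_per_group:.0f} participants per group")
```

A tiny expected effect needs hundreds of participants per group, while an enormous one is detectable with a handful, which is the sense in which "small sample" is not automatically a flaw.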
And on that note, study replication is sorely needed, particularly in applied research areas where we care a lot about behavioral outcomes. We don’t have enough of it, it isn’t funded well, and by and large the general public is right that we need more of it. It’s just not exciting to do. In the meantime, meta-analyses, which pool results from a large number of papers on a particular topic, usually give a helpful benchmark on the broad direction and general take-away conclusions for that topic.
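As a rough illustration of what that pooling looks like, here is a minimal fixed-effect, inverse-variance sketch; the per-study effect estimates and variances are made up for the example, and real meta-analyses also deal with heterogeneity, random-effects models, and publication bias:

```python
# Sketch: fixed-effect (inverse-variance) pooling of per-study effect estimates.
# The study effects and variances below are invented purely for illustration.
import math

studies = [  # (effect estimate, variance of the estimate)
    (0.30, 0.04),
    (0.45, 0.09),
    (0.10, 0.02),
    (0.25, 0.05),
]

weights = [1.0 / var for _, var in studies]          # inverse-variance weights
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))            # standard error of the pooled effect

print(f"pooled effect ~ {pooled:.3f} (SE ~ {pooled_se:.3f})")
```

The point is simply that studies with more precise estimates get more weight, so the pooled number is a steadier benchmark than any single paper.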
Back to this topic about IRBs: comparing the A/B UI/UX testing Reddit ran here to research run out of a university setting? That’s hyperbole. I don’t like businesses doing aggressive user-focused testing without informing the user, particularly with UI/UX changes I dislike (looking at you too, Twitch, with your constant layout changes), but at the end of the day this kind of testing generally never rises to the threshold of being a particularly meaningful blip. Insinuating otherwise vastly mischaracterizes how research is done in formal, structured settings.