RE: Censorship gone awry on Reddit: the aftermath of our r/science AMA

in #science · 7 years ago (edited)

Automod, etc., is nonsense. Whenever anybody tells you that A.I., some other buzzword, or a "smart" script censored your comment on a social media platform, be very skeptical. Very skeptical.

Very few of these systems are fully automated; the technology isn't there yet. (Deep learning, with or without convolutional layers or reservoirs, produces little in the way of models of the objects it would interact with, which is a problem for anything resembling a semantic web. A plain script or a typical bot produces even less.)

A group of people is usually behind it. There is some automation, very likely, but never to the extent alleged. Rather, they don't want to reveal that they disliked your comment, post, or content, because you might take action in response.

It's the same rationale as shadowbanning, as opposed to outright banning.

Organizations can plead "whoopsie" to excuse and deny genuinely hostile behavior toward the very people they provide a service to.

In general, alleging that censorship is part of some automatic procedure is a trick censors have used since the days of Stalin to reduce pushback, as described for example by Abdurakhman Avtorkhanov (The Technology of Power, New York: Praeger, 1959). Even mere delays are very effective at breaking the formation of a consensus, or at creating a different consensus instead. Timing matters; indeed, timing is the unstable element that determines phase changes in networks.


Thanks @tibra for your thoughtful reply. While I agree with many of your points regarding censorship generally, I'm not sure how much they apply to this situation.

It doesn't make much sense that the Reddit mods would be interested in censoring us; after all, the mods were the ones who invited us to do the AMA. That said, there are multiple mods, and while most were on board with our AMA, perhaps a few went rogue and deleted comments.

It seems to me that the automod feature is extraordinarily simple and most likely does not use deep learning or any other type of machine learning. My guess is that it's just a set of human-designed rules. One of those rules flags posts with links in them, perhaps combined with a few more criteria. This rule was likely added to combat spam, since links in Reddit threads are a decent way to influence SEO. However, it is a heavy-handed solution, because it catches both the best content (comments with sources) and the worst (link spam).
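To make the point concrete, here is a minimal sketch of what such a human-designed rule could look like. This is a hypothetical illustration, not Reddit's actual AutoModerator configuration; the link regex and the karma/age thresholds are assumptions.

```python
import re

# Hypothetical link-flagging rule: flag any comment containing a URL,
# with a couple of extra criteria to reduce false positives. The regex
# and thresholds are illustrative assumptions, not Reddit's real config.
LINK_PATTERN = re.compile(r"https?://\S+")

def should_flag(body: str, author_karma: int, account_age_days: int) -> bool:
    """Flag link-bearing comments unless the author looks established."""
    has_link = bool(LINK_PATTERN.search(body))
    looks_unestablished = account_age_days < 30 or author_karma < 100
    return has_link and looks_unestablished

if __name__ == "__main__":
    # A sourced comment from a new account is swept up just like spam:
    print(should_flag("Source: https://example.com/study", 5, 2))       # True
    print(should_flag("Source: https://example.com/study", 5000, 900))  # False
```

Note that the rule fires on any link at all, which is exactly why comments with sources and link spam get swept up together.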

Perhaps a better solution would be this: if a post is classified as spam, Reddit could add rel="nofollow" to the hyperlinks in that post, so spammers would see little benefit. Furthermore, once a flagged post received a certain score, the nofollow precaution could be removed. I'm not sure whether Reddit's platform gives r/science this flexibility. This comes back to my point about why it's important to separate the content layer out into a decentralized database (i.e. blockchain) and allow any frontend to be built on top.
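A hedged sketch of that scheme, assuming a score threshold at which trust is restored; the threshold value and the regex-based rewrite are illustrative only (a real renderer would rewrite links with a proper HTML parser):

```python
import re

NOFOLLOW_SCORE_THRESHOLD = 10  # assumed value at which trust is restored

def render_post_links(post_html: str, flagged_as_spam: bool, score: int) -> str:
    """Add rel="nofollow" to links in flagged posts until the community
    upvotes the post past the threshold."""
    if flagged_as_spam and score < NOFOLLOW_SCORE_THRESHOLD:
        # Naive regex rewrite, for illustration only.
        return re.sub(r"<a\s", '<a rel="nofollow" ', post_html)
    return post_html
```

With this design a spammer's links carry no SEO weight while the post is suspect, whereas a sourced comment that the community upvotes regains ordinary links automatically.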

Nowadays censorship is becoming a very real problem on the Internet. More and more platforms are taking an aggressive stance against shared content. Reddit has made some very controversial decisions in this regard, as it recently shut down all darknet-related subreddits. On the one hand, it's pretty easy to justify Reddit's actions when it comes to removing illegal content from its platform. Judging by its recent statement, the company takes a zero-tolerance position on drugs, firearms, sexual services, stolen goods, etc. Most people would be happy to accept this approach, since most of these topics should be avoided because of their illegal nature. On the other hand, we must admit that it is censorship in its purest form: regardless of the illegality of the above-mentioned topics, there is no reason to prevent people from thinking or talking about them. In fact, Reddit's decision will push more people toward other platforms that are not actively controlled. Whether or not it was a reasonable decision has yet to be determined.

"Perhaps a few went rogue and deleted comments." Very likely the case. Too often the case.

My favorite example: it appears that, a long time ago, a graduate student at MIT didn't like Jerry Pournelle's science fiction novels. The student was working part-time as a moderator on the ARPANET, so he found an excuse and banned Jerry from it. The other administrators apparently accepted the excuse, or even agreed with the ban while mentioning that they too didn't like the novels.

You never know what you've said publicly that offends somebody who possesses administrative power, even minor power. It's genuinely difficult to anticipate which phrase will offend some people.

(Some people with a lot of time on their hands are also not all there. I've been flagged by flat-earth folks with more SP trying to get my attention so they could then spam me, blocked by crazy people who first DM'd me because apparently I didn't reply fast enough for their taste, etc. Etc.)

Today such people have an excuse: some script, a bot, the rules. The moderation systems we have are far too opaque for anyone to distinguish honest mistakes from deliberate removals.

Discord has bots that delete links. Everybody else has gotten more sophisticated in their censoring, because it works beautifully to change consensus. And waaaaaay too many people want influence and know it works. Even in obscure subreddits.

In my opinion, Murakami (1Q84) observed correctly that most people want to feel that what they do is important, rather than consider simple reality.

Our brains aren't used to being in cynical mode all the time. When our species evolved there were no communication tools, no images, no artificial sounds; what we saw and heard was what there was. Much of our brain operates on naive realism. Therefore, even if users know a conversation is being censored or modified, most of them still feel, preconsciously and most of the time, that the conversation they observe is basically valid: that its meaning has not been substantially or irrecoverably changed, and that no information has been lost. And yet, as Heinlein frequently wrote, intentional omission is the most powerful form of lying.

This can shape opinion. Most individuals who moderate know this, either by reflection or by trial and error. Long ago, when I was a student, I was even tested on this in a game theory class. It's not a secret but a social technology, one that allows working more productively with certain media, and people use technology. We are tool users.

". . . why it's important to separate out the content layer into a decentralized database (i.e. blockchain) and allow any frontend to be built on top." Agreed.