Emilio Singh

Just One Word: Moderation

Societies are a game. A game with rules, and as long as we all play by the rules, we all get to have fun. Societies create things that individuals cannot. Things like social media, where we can share memes and generally annoy each other. But what happens when someone breaks the rules of the game?


Societies do not function without rules, and yet any individual can often gain something personally by breaking the common rules we all have to abide by. In real human societies, we get people to play by the rules through things like laws with appropriate punishments. But what happens when the society is virtual and the consequences of rule breaking are real?

This post is in response to an excellent article that exposes the trauma and horror endured by the people who do Facebook moderation. You know Facebook? Where your aunt shares minion memes that make you want to die? Well, as it turns out, that is just the tip of the iceberg when it comes to the toxicity Facebook's moderators have to wade through.


For the uninitiated, moderation is the task of examining content posted to the platform to ensure that anything violating the Terms of Service (ToS) is kept off. In a general sense, moderation (done by moderators) keeps the platform clean of stuff that does not belong there. In the case of Facebook, that means keeping the platform free of gore, violence, hate speech and even things like child pornography.


Before you read on, I would ask that you read the article I mentioned, for context. My purpose in writing this post is to talk about the failure of the Facebook moderation strategy, and then to propose a number of alternative ways to achieve moderation.

So how does Facebook do their moderation? Well, based on the article, not very well. The strategy they follow is to define a ToS (which is good), but the problem is that they treat moderation as a low-skill job. Consequently, Facebook outsources its moderation to companies whose only obligation is to meet volume targets. Therein lies the problem: because these companies are paid by Facebook to moderate by volume, they are incentivised to operate as cheaply as possible to maximise their profit.


Given that these moderators are expected to see some of the worst humanity has to offer, making minimum wage in a moderation pit is an unpleasant experience, to say the least. In Facebook's mind, the long-term hope is to replace human moderators with some sort of algorithm. I could, of course, explain why this is both unreasonable and unfeasible, but that is for another post.


Rather, the consequence of this belief is that Facebook views moderation as a burden and not as a responsibility that comes with creating a social media platform. Of course, the argument could be made that Facebook only provides the service; humans being awful is not their fault. However, by providing the service as they do, and reaping the very real rewards of people using it, moderation is one of the prices they have to pay.


The Facebook strategy is so problematic because it takes people who are untrained, unprepared and unsupported for the task of moderation and then forces them to work in horrible conditions. As a result, the human toll is exceptionally high: people burn out of Facebook moderation with mental and physical health issues. The question is, what can be done?

The first and most obvious strategy is to view moderation in the same way that we view emergency responders. That is, to treat moderation as a high-stress job where the people who do it have to be trained extensively and screened for psychological resilience. These people should work in larger teams with shorter hours so that they never get overloaded by the trauma of the job. They should be rotated through shifts that give them time to recover. They should be supported by mental health professionals in a good and safe working environment.


But this would be expensive. Incredibly expensive, in fact, relative to the current strategy. Facebook would have to pay moderators more, pay more to do moderation, and pay more in general to provide the same volume of moderation. And let me tell you now, Facebook's shareholders have already started to quake in their gold-plated boots.


Obviously, the "problem" with this strategy is that it would solve the problem but cost a lot of money. That is only a problem if human life and suffering are worth less than making money, but since Facebook is the kind of company that would make that trade, I think it is safe to shelve this idea.

So what would be better? Well, a second strategy would be to open moderation up to the entire community. Instead of restricting moderation to a specific group of people, everyone on Facebook would moderate some amount of content. On the surface, this seems like an interesting strategy. Splitting the entire burden of moderation across everyone on Facebook, say everyone over the age of 18, would mean that everyone now runs some risk of dealing with the toxicity, and maybe fewer people would be toxic, since they might not have the stomach for the toxicity they spew.


But of course, this solution is also problematic. Just randomly splitting the burden across everyone in the community means that people will be exposed to things they do not want to see, and this will drive them from the platform. No one wants to log on to Facebook to share a meme if it means they need to spend some time removing dog-drowning videos.

I think one of the better ways to do moderation without necessarily driving up its costs is to lean on communities, although it would require a different way of thinking about how social media is organised.


The benefit of a community is that communities are self-governing. Where Facebook gets it right is in giving communities (often a page or group) the power to moderate their own content internally, with moderators appointed from inside the community. The community takes responsibility for its own content, and generally this protects its members from content they would not want to engage with. Everyone in the community knows the guidelines for operating inside it, with some potential for leeway, and in general these things work fine.


The problem is that Facebook as a whole is not organised in a community structure. Rather, individuals exist on their own, and their content spaces can be filled by other individuals, or by groups acting as individuals. This works perfectly fine between individuals, but the model breaks down at any larger scale.


If people could only share content by belonging to community groups, then every group they are in could be internally moderated. For example, instead of being able to share content freely, people would have to get approval for the content they want to post from other people in the community they belong to. Someone is less likely to post something awful if they cannot countenance it being seen by someone they know.
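To make the idea concrete, here is a rough sketch of what such a pre-approval flow could look like. Everything in it is my own invention for illustration: the names, the two-approval threshold, and the escalation to an external moderator are assumptions, not anything Facebook actually implements.

```python
# Minimal sketch of a community pre-approval queue (hypothetical).
# Assumption: a post needs approvals from two other community members
# before it becomes visible, and any rejection escalates it to an
# external moderator instead.

from dataclasses import dataclass, field

APPROVALS_NEEDED = 2  # hypothetical threshold


@dataclass
class Post:
    author: str
    content: str
    approvals: set = field(default_factory=set)
    escalated: bool = False
    visible: bool = False


class Community:
    def __init__(self, members):
        self.members = set(members)
        self.pending = []

    def submit(self, author, content):
        # New content sits in the queue until peers have reviewed it.
        post = Post(author, content)
        self.pending.append(post)
        return post

    def review(self, reviewer, post, approve):
        if reviewer not in self.members or reviewer == post.author:
            return  # only other community members may review
        if approve:
            post.approvals.add(reviewer)
            if len(post.approvals) >= APPROVALS_NEEDED:
                post.visible = True  # enough peers have vouched for it
        else:
            post.escalated = True  # hand off to external moderation


# Usage: a post only appears once people the author knows have seen it.
group = Community({"amy", "ben", "cleo"})
p = group.submit("amy", "holiday photos")
group.review("ben", p, approve=True)
group.review("cleo", p, approve=True)
assert p.visible
```

The point of the sketch is simply that nothing becomes visible until people the author actually knows have vouched for it, which is the social pressure this strategy relies on.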


Of course, external moderators will still be needed for cases where this community structure fails, but the idea is to reduce that burden by relying on the existing social bonds that bind us together. That is, to get us all to play by the rules by making sure few of us can flout them.

