Big Tech Needs to Go Beyond Volunteer Content Moderation to Prevent Online Abuse

Online tech's lackluster track record on the issue of diversity could have major ramifications if racism continues to go unchecked. Companies need to put dollars behind moderating so diverse users can feel safe.

By Michael Gaskins

Racism, trolling, cyber-bullying, and misinformation on social media are nothing new. Real-time text conversation is vital to every live streaming platform, and moderators serve as the first line of defense in catching hate speech within those communities. Moderators also work hard to attract users, spark online discussion, and stimulate participation by curating and building content. Many of them volunteer their time to create, support, and steer public discourse for millions of people. The value of that free labor is staggering once you attach actual compensation to it: two recent reports by Northwestern University computer scientists estimated that the unpaid work these moderators performed in 2019 was worth roughly $3.4 million per year to Reddit, the American social news aggregation, content rating, and discussion website.

Reddit touts itself as a network of communities where people can dive into their interests, hobbies, and passions. Thousands of communities on Reddit are devoted to a wide range of topics, including humor, news, art, video games, and memes. Volunteers monitor these subreddits, which Reddit hosts and makes available to users for free, in what might seem like a fair exchange: the volunteers care about, and are invested in, maintaining vibrant online communities.

Reddit has a complicated relationship with its volunteer moderators. In July 2015, the website was effectively offline for millions of subscribers after moderators went on strike and disabled more than 2,000 community subreddits. The strike raised questions about the moderators’ standing and about digital platforms’ responsibilities toward their volunteers. During the “Reddit Blackout,” the company lost advertising revenue and had to negotiate over working conditions for moderators. Eventually, the company hired its first Chief Technology Officer, tasked in part with building new tools to improve the platform’s moderation software.

Once the Northwestern findings on the value of moderators’ unpaid 2019 labor became public, many Redditors voiced their opinions on volunteer moderation and pay.

Redditor TheSinningRobot says, “I think what a lot of people are missing here that is kind of a bigger deal is that if Reddit suddenly needed to fill that gap, at 170,000 hours of moderation, divided by a normal full time job, it would take close to 100 full time employees to bridge that gap.”
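For readers who want to sanity-check those figures, here is a minimal back-of-envelope sketch in Python using only the numbers already quoted in this article; the 2,000-hour full-time work year is my own assumption, not a figure from the Northwestern reports.

```python
# Rough check of the moderation-labor figures cited above.
# All inputs come from the article; the work-year length is an assumption.
annual_moderation_hours = 170_000     # hours of volunteer moderation cited for 2019
estimated_value_usd = 3_400_000       # Northwestern researchers' yearly estimate
full_time_hours_per_year = 2_000      # assumed ~40 hours/week over ~50 weeks

implied_hourly_rate = estimated_value_usd / annual_moderation_hours
full_time_equivalents = annual_moderation_hours / full_time_hours_per_year

print(f"Implied value per moderation hour: ${implied_hourly_rate:.2f}")     # -> $20.00
print(f"Full-time staff needed to replace it: {full_time_equivalents:.0f}")  # -> 85
```

Under those assumptions, the 170,000 hours work out to roughly 85 full-time positions, broadly in line with TheSinningRobot’s “close to 100,” and the $3.4 million valuation implies about $20 per hour of moderation work.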

11thDimensionalRandy counters, “the average moderator isn’t really producing much value at all, but there are plenty that are. The thing is, a lot of those probably do get paid in some form or another, but internet moderator culture is such a mix of petty tyrants within ingroups with clout-obsessed power users coexisting with people who just like having a community of people with similar interests that it’s hard to gauge what exactly is going on from the outside.”

Meanwhile, others point out that people can, and have, turned moderating into a profitable venture. Redditor tewmtoo says, “smart moderators have turned that into a very lucrative business for themselves in the past. I believe it still happens but can’t be sure.”

frogjg2003 argues the report might be a little misleading: “I’m a moderator for a sub I created that has absolutely no activity. I spend 0 hours a year on moderation. I’m included in that statistic. I shouldn’t be.”

Others doubt the moderators are working for free at all. Redditor saun-ders says, “What makes you think they’re unpaid? Where I’m from, our conservative party pays the moderators of local social media groups plenty well enough to ensure that certain political viewpoints get culled.” And goddamnmike voiced his frustration more bluntly: “Chump change for a six-billion-dollar company.”

On top of this, moderators must balance emotional, physical, and social labor while dealing with harmful content and managing interpersonal relationships with streamers and viewers. One example of that harmful content was exposed in a 2020 report by the Professional Footballers’ Association, which revealed that 43% of Premier League players had faced targeted and explicit racist abuse, with 29% of it coming in the form of emojis. Reviewing video is also becoming more challenging: moderators are no longer just screening clips that depict violence, indecency, or other harmful content; they now have to contend with deepfake videos as well. These videos digitally splice an individual’s face or body into footage, making the person appear to do or say things they never did.

Hate speech is on the rise online, and minority groups are the most likely to be affected. Social media executives say they place a high priority on finding and removing hateful content, and this is where moderators usually step in and do the job. Sadly, many others view this kind of monitoring as a form of censorship, punishment, and cancel culture. If intimidation and bigotry continue unchecked, diverse communities will find it increasingly difficult to keep their voice.

Governments are now taking drastic measures to deal with social media content flagging. Nigerians were blocked from accessing Twitter from June 2021 to January 2022 after their government announced an indefinite suspension of the platform in response to Twitter removing a tweet from President Buhari’s account for violating company policies. Nigerian authorities went as far as threatening to prosecute anyone who bypassed the ban.

Meanwhile, a widely shared petition calls for social media accounts to require a verified ID to prevent anonymity from being weaponized. Although this might seem like a good solution in theory, it could harm the very people it’s supposed to help: if opening a social account required government-issued identification, large groups of people could be shut out, and online diversity would suffer.

Despite increased public awareness, content moderation remains opaque, even within the tech companies themselves. At most companies, a combination of human moderators and artificial intelligence-based systems reviews content. Improving moderation will require collaboration among organizations from different socioeconomic sectors; Facebook and Twitter, for example, could address social issues together through mutually agreed-upon principles and guidelines.

Currently, some moderators are okay with volunteering and are looking at the bigger picture. Redditor EstroJen says, “My payment is knowing people have a good time. But could I get some of that 3.4mil?” XenogenderCensored adds, “being a moderator of social media gives you far more power in a democracy than having a right to vote. You have a huge role in seeing what stories people are permitted to see and shaping public opinion.”

In the end, online tech’s lackluster track record on the issue of diversity could have major ramifications if racism continues to go unchecked. Companies need to put dollars behind moderating so diverse users can feel safe. Unfortunately, for now, when social media platforms moderate content, their bottom line still appears to be the most crucial factor.


Photo Credits: Lenka Horavova / Shutterstock