Twitter Claims It is Cracking Down on Hate Speech. But is All Hate Speech Treated Equally?

23 July 2019

Country: Global

by: Giulia Dessi

Over the past few months, major social media platforms such as Facebook and Twitter have been under increasing pressure to monitor and remove hateful, potentially dangerous content from their platforms. Many have taken this criticism into account; Facebook recently announced that it would treat all white supremacy messages with the same vigilance as terrorism content, while Twitter amended its hate speech policies to include anything that dehumanizes a group of people, with specific attention to anti-religious hatred.

However, while both have taken positive steps in the right direction, how hate speech is assessed on each platform remains unclear, making it difficult to monitor whether or not policy changes are having an effect. It is nearly impossible to assess whether this effect is equal across regions and languages, meaning that hateful content could be more vigilantly policed in some countries than in others.

“Our staff is a global team to provide 24/7 coverage in multiple locations,” says Stephen Turner, the Head of Public Policy, Government and Philanthropy at Twitter’s Brussels office. He goes on to say that Twitter’s staff works alongside local civil society organizations to gather feedback and identify culturally specific issues to inform content moderation, ensuring that moderation tactics are tailored to each country’s context.

“This staff is specialized and has language and cultural skills/knowledge across the EU and many markets,” he continues.

But are language skills and a vaguely defined “cultural knowledge” enough to curb the impact of hate speech around the world? Media Diversity Institute’s data from our Get the Trolls Out project shows that while 21 percent of all English-language content reported to Twitter as hateful was removed, only 14 percent of similar content in French, Flemish, Greek and German was taken down.

While this could point towards a large discrepancy in how English-language content is moderated compared to content in other languages, it is impossible to know without transparent moderation policies that are applied equally across regions and languages.

“Our biggest problem with social media companies is that they don’t really let us know how they train their people and what exactly their internal rules of moderation are,” says Tamas Berecz, Senior Researcher at the International Network Against Cyber Hate (INACH). Without consistent, detailed and nuanced policies, moderators run the risk of both deleting harmless speech mistaken for hate speech and leaving harmful speech online. What is worse, and even more challenging for moderators and social media companies alike, is that the technicalities of hate speech are constantly evolving, meaning that moderators need to be both consistent and up to date with the latest jargon and coded language of the most dangerous kinds of offenders in order to be effective.

Another challenge is the varying definitions of what constitutes hate speech, or hateful content. While Twitter’s new policy rightfully acknowledges the myriad ways that anti-religious content can be dehumanizing, freedom of expression advocates have criticized the policy, claiming that a broad policy on hate speech opens the door to censorship. Of course, this is also the argument of many of the worst purveyors of hate speech, making it murky territory for those who want to defend both freedom of expression and social media users’ rights not to be discriminated against or pummeled with hateful content against their will.

Naturally, this leaves social media companies in a difficult position. Making social media websites like Twitter and Facebook into “free speech free-for-alls” has had consequences, and these platforms now find themselves on the front lines of what it means to democratize the media and to make a platform that is accessible to all.

While Twitter has made a conscious effort to be part of initiatives like the European Commission’s “Code of Conduct on Countering Illegal Hate Speech Online,” the EC’s data shows that the company is still reluctant to remove hateful content. While the combined average removal rate for the participating social media companies was 72 percent of all reported content, Twitter lagged far behind, removing only 43.5 percent of content reported as hateful. Is it lagging behind in content moderation, or are there more loopholes for hateful content to get a free pass? Without consistent and transparent data about moderation policies and protocols, we may never know: a disturbing thought for an issue that clearly needs all hands on deck.

As media monitors with an eye on hate speech, we want to help platforms like Twitter become fairer and more welcoming, but it is impossible to assess their progress without greater transparency about how their moderators are trained, what targets are set, what types of content are taken down, and which accounts are suspended. While our preliminary research points towards broad inconsistencies in moderation policies across different languages and regions, it is difficult to accurately inform effective policy decisions or report on how tech companies are tackling dangerous speech without detailed, accurate, and transparent data.

Until this data is available, it will be impossible to assess whether Twitter’s new policies are effectively curbing hate speech, or whether they are all talk and no action.