Does the UK’s draft Online Safety Bill threaten free speech and diversity?

Because content moderation requires many thousands of human moderators, the threat of fines or criminal prosecution may lead to online platforms further entrenching their use of algorithm-driven automated content moderation systems, which lack transparency about how they operate.

By Luc Steinberg

Content moderation is difficult, particularly at scale: Facebook now claims over 2.2 billion users, and 500 hours of video are uploaded to YouTube every minute. Preventing all of the worst content from being shared is virtually impossible, let alone determining which content falls into grey areas. The UK’s draft Online Safety Bill, published on 12 May, seeks to make the UK the “safest place in the world to be online” by eliminating child sexual abuse material, terrorist content and content deemed ‘legal but harmful’. But in its current form, the Bill may come at the cost of freedom of expression.

The Bill establishes a ‘duty of care’ for online platforms, under which the UK’s communications regulator, Ofcom, will be granted the power to block access to sites and to fine companies that fail to protect users from harmful content up to £18m, or 10% of annual global turnover. Additionally, managers of intermediaries like Facebook and Google could receive criminal sentences for failing to remove content deemed to be harmful. Some even feel that certain proposals in the legislation would give the Culture Secretary powers that could undermine Ofcom’s independence as a regulator.

Some academics, digital rights groups and others have found further faults in the draft Bill, including its potential to weaken or break end-to-end encryption and its limited approach to promoting media literacy. However, one complaint has echoed consistently: to avoid fines or criminal liability for platform managers, intermediaries will likely moderate online speech inconsistently and excessively, chilling freedom of expression.

How does this affect media diversity?

One problematic aspect is that ‘legal but harmful’ is defined as content that could have a “significant adverse physical or psychological impact” on users. This definition is overly broad and a serious cause for concern for freedom of expression, especially given that private companies, almost all of them based in Silicon Valley, will be compelled to censor content the government finds to be in breach of the standard.

We already see instances where rules against ‘harmful’ content are applied disproportionately and arbitrarily to marginalised groups online.

Members of the LGBTQ+ community, for instance, have already appealed to the government in a signed open letter asking it to reform the Bill’s duty of care principle, which they say would give Internet companies the power to delete posts that cause ‘harm’. The signatories fear that, without a clear definition of harm, social media operators will unduly censor LGBTQ+ content.

This concern is not unfounded. Not only is anti-LGBTQ+ harassment and hate speech often left unmoderated, but content from the community is also often haphazardly censored without recourse. In one such case, Salty, a donation-based newsletter for women, trans and nonbinary people, had its Instagram ads rejected on the grounds that it was an escort service. In another, TikTok repeatedly removed content that depicted two men kissing or holding hands.

Because content moderation at scale would otherwise require many thousands of human moderators, the threat of fines or criminal prosecution may also lead online platforms to further entrench their use of algorithm-driven automated content moderation systems, which lack transparency about how they operate. Researchers have demonstrated time and again that automated content moderation is applied inconsistently or arbitrarily, most often censoring ethnic and religious minorities, LGBTQ+ people and other marginalised groups.

The algorithms that social media platforms employ tend to be poor at detecting hate speech or determining the context of users’ posts. Even humans can easily misunderstand irony, satire, critique, provocative comedy and culturally specific language, let alone algorithms. The algorithms used to moderate content online also regularly reflect the biases of the people who created them. One study found that AI models for processing hate speech were one and a half times more likely to label tweets by African Americans as offensive than tweets by others.
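To make that figure concrete, here is a minimal, hypothetical sketch of how such a disparity can be measured: count how often a moderation model flags posts from each group and compare the rates. Everything in it (the function name, the toy data, the keyword filter) is an illustrative assumption, not the study’s method or any platform’s actual system.

```python
# Hypothetical sketch: comparing how often a moderation model flags posts
# from different groups. The data and the "classifier" are placeholders.
from collections import defaultdict

def flag_rate_by_group(posts, is_flagged):
    """posts: iterable of (text, group) pairs; is_flagged: function text -> bool."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for text, group in posts:
        total[group] += 1
        if is_flagged(text):
            flagged[group] += 1
    # Share of posts flagged per group.
    return {group: flagged[group] / total[group] for group in total}

# Toy usage: a crude keyword filter standing in for a trained model.
posts = [
    ("example post one", "group_a"),
    ("example post two", "group_b"),
]
rates = flag_rate_by_group(posts, lambda text: "banned_word" in text)
print(rates)

# If group_a's rate were 0.45 and group_b's 0.30, group_a's posts would be
# flagged roughly 1.5 times as often -- the kind of disparity the study reports.
```

A disparity in these rates does not by itself prove intent, but it is exactly the kind of measurable, systematic skew that critics fear the Bill would entrench by pushing platforms further towards automated moderation.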

The Internet is an important space for people to express themselves freely, and it needs to be protected from measures that could shrink that space. It has opened up forms of expression and reach that were once mainly the domain of journalists and politicians. Everyone has the right to feel safe online, but shrinking the public sphere will only cause harm, especially to vulnerable groups. The Internet needs to remain a place for free expression, diversity of opinion, deliberation and discussion, for the good of democratic values and society.