Meta’s decision to scale back its fact-checking efforts in the U.S. marks a significant shift in its approach to content moderation and its role in combating misinformation. This move raises concerns about the potential erosion of media diversity and accountability, especially considering the company’s broader strategy around free expression.
In his statement, Mark Zuckerberg emphasised Meta’s renewed focus on “free expression” across its platforms, saying, “I want to make sure that we handle responsibly… but the problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that’s millions of people.” While the goal of reducing mistakes in content moderation is understandable, the decision to remove professional fact-checkers in favour of a community-driven model raises serious questions about the integrity of information circulating online.
The risk of community-driven fact-checking
Zuckerberg argued that “the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US,” and that Meta would now replace them with “community notes” that rely on user consensus. However, history has shown that community-driven models are not always effective. Facebook’s failure to detect coordinated disinformation campaigns surrounding the Brexit referendum in the UK is a prime example of how user-driven oversight can fall short in combating harmful content.
Misinformation is more likely to flourish without professional fact-checking, leading to real-world consequences. Zuckerberg acknowledged the challenge of balancing free expression with content moderation, noting that “we’re going to dramatically reduce the amount of censorship on our platforms,” which he hopes will lead to a more open exchange of ideas. Yet, this approach could inadvertently open the door for the proliferation of harmful content, including misinformation that fuels societal divisions and marginalises vulnerable groups.
Real-world consequences of unchecked misinformation
The risks are tangible. Misinformation can incite violence, as demonstrated by the recent unrest in Southport in the UK, where false claims spreading through social media helped fuel rioting on the streets. The war in Ukraine and the conflict in Gaza, both marked by the spread of propaganda, further highlight the dangers of unchecked disinformation in global conflicts. In these instances, the absence of rigorous fact-checking has real-world consequences, distorting public perception and exacerbating tensions.
Implications for media diversity
Meta’s decision is also problematic from a media diversity standpoint. Marginalised communities, already vulnerable to misrepresentation, could suffer further if platforms do not prioritise accurate and inclusive content. As Zuckerberg himself noted, Meta is aiming to “get back to our roots” of free expression, but this could come at the expense of those who are most at risk of being misrepresented or marginalised in the media. Without structured fact-checking, the space for equitable narratives shrinks, and harmful stereotypes are more likely to thrive unchecked.
Zuckerberg also indicated that Meta would relax restrictions on sensitive topics like immigration and gender identity, aiming to align with “mainstream discourse.” While this might appeal to users advocating for less regulation, it risks amplifying polarising narratives and misinformation. He acknowledged, “What started as a movement to be more inclusive has increasingly been used to shut down opinions… it’s gone too far.” This shift threatens to undermine the platform’s role in fostering inclusive discussions, opening the door for harmful content to thrive in the absence of professional oversight.
Contradicting global regulation efforts
The decision to abandon professional fact-checking comes at a time when regulatory frameworks around the world are increasingly pushing tech companies to take more responsibility for the content they host. In the European Union and the UK, legislation such as the Digital Services Act and the Online Safety Act mandates stricter content moderation practices. By retreating from fact-checking, Meta’s actions seem at odds with these global efforts to ensure greater accountability from tech platforms.
Zuckerberg has stated that he intends to collaborate with President-elect Donald Trump to fight against global pressures for “censorship”. He asserts these pressures are growing across regions: in Europe, with its “ever-increasing number of laws, institutionalising censorship”; in Latin America, where secret courts can “order companies to quietly remove content”; and in China, which, as he says, “has censored our apps from even working in the country.”
He further claims that the only solution is for Meta to “push back on this global trend with the support of the US government, and that’s why it’s been so difficult over the past four years when even the US government has pushed for censorship.”
Meta’s decision to scale back its fact-checking infrastructure and place more trust in user communities threatens the accuracy and diversity of the information circulating on its platforms. Freedom of speech is crucial, but at what cost? As Zuckerberg says, it is essential to “reduce mistakes, simplify our systems,” and focus on “giving people a voice.” However, this cannot come at the cost of accountability, accuracy, and the protection of marginalised voices. Without professional oversight, misinformation will continue to spread, exacerbating social divides and undermining the integrity of media representation.