“Community Notes” – Opportunity or Risk? 

By Dieter Brockmeyer 

Dealing with fake news and misinformation on the internet remains a major challenge. Social media platforms like X (formerly Twitter) have opted to rely on “Community Notes” to address the issue. 

In Europe, the reaction was clear and largely negative when Meta, the parent company of Facebook, announced in January that it would begin phasing out traditional fact-checking in the U.S. in favour of a system similar to X’s “Community Notes,” using internal mechanisms instead of independent fact-checkers. 

Many interpreted this move as aligning with the new U.S. administration under President Trump and saw it as a direct threat to democracy and a potential accelerator for the unchecked spread of fake news. But is that necessarily the case? 

To answer that, we first need to look at what Community Notes are. The concept relies on the collective intelligence of platform users: the more users who weigh in on a flagged post, the more robust the resulting score and the greater the likelihood that the verdict attached to the post reflects the truth. The theory of collective intelligence is well-documented and scientifically validated. In principle, there are already signs that it can work on social media, even in Europe. Consider, for instance, an AI-generated Facebook post depicting a fantastical image labelled as the Eiffel Tower. Posts of this kind are primarily designed to generate clicks, and the comments are often filled with users calling them fake. Naturally, there are always some who don’t question the image’s authenticity and express excitement, but they tend to be in the minority.

Still, such community reactions rarely have a real impact. While users may comment, they rarely read each other’s responses, so the voices calling out misinformation are easily overlooked. The situation would be different if the feedback were synthesised into a factual summary – essentially an official badge indicating a post’s truthfulness.
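To make that idea concrete, here is a minimal sketch of how scattered user feedback could be condensed into such a badge. It is an illustration only, not X’s actual Community Notes algorithm: the thresholds, the notion of “viewpoint clusters” and the function names are assumptions made for the example.

    from collections import defaultdict

    # Hypothetical input: each rating is a (viewpoint_cluster, verdict) pair,
    # where verdict is either "misleading" or "accurate". Grouping raters into
    # coarse clusters lets agreement across different camps count for more than
    # agreement inside a single echo chamber.
    def badge_for_post(ratings, min_raters=10, min_agreement=0.8, min_clusters=2):
        if len(ratings) < min_raters:
            return None  # too little feedback to say anything

        counts = defaultdict(int)
        clusters = defaultdict(set)
        for cluster, verdict in ratings:
            counts[verdict] += 1
            clusters[verdict].add(cluster)

        top = max(counts, key=counts.get)
        agreement = counts[top] / len(ratings)

        # Show a badge only with a clear majority backed by more than one cluster.
        if agreement >= min_agreement and len(clusters[top]) >= min_clusters:
            return top
        return None

    # Twelve raters from three clusters, ten of whom call the post misleading:
    sample = [("A", "misleading")] * 6 + [("B", "misleading")] * 4 + [("C", "accurate")] * 2
    print(badge_for_post(sample))  # -> "misleading"

The cluster requirement in the sketch echoes the stated design goal of X’s system: a note should only surface when contributors who usually disagree nonetheless agree on it.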

That such a system can work is supported by several serious studies. One of the first, conducted by the Gies College of Business at the University of Illinois, was published as a white paper under the title “Can Crowdchecking Curb Misinformation? Evidence from Community Notes”. It found that crowdchecking is a viable approach to reducing misinformation on social media platforms. Yet the concept remains controversial, largely due to its origins. X (formerly Twitter) introduced Community Notes during a period of upheaval shortly after Elon Musk’s takeover, at a time when scepticism around the company’s direction was already high. The abolition of traditional fact-checking only added fuel to the fire. Meta’s subsequent announcement that it would adopt the same model was seen by many as a concession to the new U.S. administration.

So, does that mean Community Notes are harmless and even a viable tool for verifying content on social media? That largely depends on how the system is implemented and managed. The potential for manipulation exists here, too. Traditional fact-checking has its issues as well: for instance, there have been numerous reports of Facebook posts featuring classic artworks being blocked because the algorithm mistakenly flagged them as pornographic. Algorithmic training can reduce such errors, but rarely eliminates them. Manual fact-checking also faces challenges: it’s resource-intensive and can only be done selectively, sometimes superficially. This, too, results in a high error rate. And just like automated systems, manual checks can be subject to influence from interest groups, whether driven by corporate agendas or governmental goals. 

Theoretically, a system like Community Notes, grounded in the wisdom of crowds, should be less susceptible to such interference. Unfortunately, that is only true to an extent. Much depends on who is permitted to participate in the rating process: mandatory registration, for example, could be used to filter who contributes and thus to steer the outcome. Another vulnerability lies in the evaluation algorithm itself, which could be biased or flawed, whether intentionally or not. In either case, the results would be questionable.
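A small, deliberately artificial example illustrates how fragile the result becomes once the pool of raters is skewed, for instance by restrictive registration rules. The figures and the naive scoring function are assumptions made for the illustration, not measurements from any platform.

    def share_calling_misleading(verdicts):
        # Naive score: fraction of raters who call the post misleading.
        return sum(1 for v in verdicts if v == "misleading") / len(verdicts)

    balanced_pool = ["misleading"] * 8 + ["accurate"] * 4
    gated_pool = ["misleading"] * 2 + ["accurate"] * 10  # same post, filtered raters

    print(round(share_calling_misleading(balanced_pool), 2))  # 0.67 -> likely flagged
    print(round(share_calling_misleading(gated_pool), 2))     # 0.17 -> sails through

Nothing about the post changes between the two runs; only the composition of the crowd does, and with it the verdict.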

A simpler solution might be to analyse the comments that are already being left on posts. But that isn’t foolproof either. A conspiracy-laden post that’s only shared within its own echo chamber will receive no pushback from that bubble. It’s only when such content reaches beyond its core audience that it becomes subject to scrutiny. 

Ultimately, regulation will be unavoidable. Because social media platforms operate globally, this can’t be done without international coordination – something that has historically proven to be extremely difficult, and is unlikely to get any easier given current geopolitical tensions. At the very least, we need clear rules about who can contribute to Community Notes and how the underlying algorithm evaluates input. Ideally, these discussions would take place within a global framework – perhaps under the auspices of the United Nations. 

Right now, though, the prospects of establishing such a system are dim. Leaving the matter to the platform operators themselves would hardly be in the public interest, and a coordinated solution among them is a rather unlikely scenario anyway, at least for now and the foreseeable future. The growing influence of China and rising tensions with the United States are pushing the interests of platform operators further apart, despite their shared commercial orientation.

The most likely scenario at present is that every actor, whether nation-state or corporate conglomerate, tries to assert its own interests against the others. The outcome is entirely uncertain at this point, and the prospects for an open society look bleak.