Fact check: Are X's community notes fueling misinformation?
August 5, 2025
On July 9, the US government sanctioned United Nations Human Rights Council special rapporteur Francesca Albanese for what the US Secretary of State Marco Rubio said was a "campaign of political and economic warfare against the United States."
Albanese has consistently denounced Israel's actions in Gaza since its offensive against the Palestinian militant group Hamas began in October 2023, as well as the Trump administration's efforts to suppress dissenting voices critical of Israel.
The announcement was rejected by the UN, which called for a reversal of the sanctions, and it also prompted a debate online, where Albanese's name began to trend on X (formerly Twitter).
Posts poured in both defending and criticizing her work, accompanied in several cases by "Community Notes," X's signature tool to fight misinformation. The notes, which are essentially brief clarifications or extra context attached to posts, can be submitted by anyone.
X claims it uses what it calls a "bridging algorithm" to prevent bias, lending more weight to upvotes from users with historically different viewpoints and thus theoretically reducing the chance that a single group can dominate the narrative.
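To illustrate the idea, here is a deliberately simplified sketch of bridging-based scoring. This is not X's actual algorithm (which is open source and based on matrix factorization over the full rating history); it is a toy model assuming raters can be bucketed into two inferred leanings, written only to show why one-sided upvoting alone cannot push a note through.

```python
# Toy sketch of a "bridging" score. Illustrative only: X's real
# Community Notes algorithm uses matrix factorization, not this.

def bridging_score(ratings):
    """ratings: list of (rater_leaning, vote) pairs, where
    rater_leaning is -1 or +1 (inferred from past rating behavior)
    and vote is 1 (helpful) or 0 (not helpful)."""
    left = [vote for leaning, vote in ratings if leaning < 0]
    right = [vote for leaning, vote in ratings if leaning > 0]
    if not left or not right:
        return 0.0  # no cross-viewpoint signal, so no score
    # The note scores highly only when BOTH groups find it helpful,
    # so a single bloc upvoting in lockstep cannot dominate.
    return min(sum(left) / len(left), sum(right) / len(right))

# One-sided support scores zero; cross-viewpoint support scores high.
partisan = [(-1, 1)] * 10 + [(1, 0)] * 10
bridging = [(-1, 1)] * 10 + [(1, 1)] * 8 + [(1, 0)] * 2
assert bridging_score(partisan) == 0.0
assert bridging_score(bridging) == 0.8
```

The key design point is the `min`: a note's score is capped by whichever viewpoint group likes it least, which is the "bridging" property the platform describes.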
But that doesn't make the notes immune to error. In the case of Albanese, for instance, one community note claimed that "Francesca Albanese is not a lawyer," amplifying arguments by her critics about her qualifications and "ethical conduct."
While Albanese did admit in an interview with Vanity Fair that she didn't take the bar exam, which would have qualified her as a practising attorney, she did study law. Her official profile on the website of the UN Office of the High Commissioner for Human Rights (OHCHR) describes her as an "international lawyer" who has authored publications on international law.
What this example shows is that while community notes can be a valuable tool to reduce the spread of disinformation, they are not always accurate and often fail to paint the whole picture.
Notes are meant to be a system where users collaboratively add context and verify facts. Research from Cornell University has shown that notes on inaccurate posts on X help to reduce reposts and increase the likelihood that the original author deletes the post.
However, according to an analysis of X data by NBC News, the number of community notes being published is declining, and DW Fact check spotted several examples of the tool misleading users instead of helping them spot falsehoods.
Misleading community notes slipping through
In July 2025, a post by Sky News quoting the United Kingdom's Metropolitan Police chief went viral, accumulating over 4.7 million views. The post linked to a Sky News article based on an interview with the police chief, which highlighted structural inequality, noting it was "shameful" that black boys in London were statistically more likely to die young than white boys.
A community note was then added to the post. However, it reframed the story, stating:
"The headline lacks the essential context that despite making up only 13% of London's total population, Black Londoners account for 45% of London's knife murder victims, 61% of knife murder perpetrators, and 53% of knife crime perpetrators."
While factually correct, the note introduced unrelated crime statistics from 2022 — subtly shifting the focus from systemic inequality to framing black boys as perpetrators of crime. Instead of clarifying the issue, the note distorted the original message, misleading users who hadn't actually clicked on the link in the post.
Community notes and elections
Another problem was spotted by experts during the 2024 US presidential election.
Researchers Alexios Mantzarlis and Alex Mahadevan from the Florida-based Poynter Institute analyzed community notes posted on Election Day. Their goal was to assess whether community notes were helping counter election misinformation or not.
Their findings raised concerns. Out of all fact-checkable posts analyzed, only 29% carried a community note rated as "helpful." In X's system, a note is rated "helpful" when it is upvoted by a diverse group of contributors and prioritized for public display.
But of these "helpful" notes, only 67% actually addressed content that was fact-checkable. In other words, nearly a third of the notes that appeared as helpful were attached to posts that didn't contain factual claims at all.
The researchers saw this as a problem of low precision and recall: too few misleading posts were getting corrected, and even when notes appeared, many weren't targeting actual misinformation.
As Poynter noted, "This is not the kind of precision and recall figures that typically get a product shipped at a Big Tech platform."
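Taking the article's percentages at face value, a back-of-the-envelope calculation shows how few misleading posts ended up with a genuinely useful note. The combination below assumes, for simplicity, that the 67% figure applies uniformly to the helpful notes on fact-checkable posts, which the Poynter analysis does not state explicitly:

```python
# Rough arithmetic from the Poynter figures quoted above.
# Assumption: the 67% precision applies uniformly across helpful notes.

helpful_rate = 0.29  # fact-checkable posts that carried a "helpful" note
precision = 0.67     # "helpful" notes that addressed a factual claim

# Per 100 fact-checkable posts, roughly how many got a helpful note
# that actually targeted a factual claim:
effective = 100 * helpful_rate * precision
print(round(effective))  # prints 19
```

Under those assumptions, fewer than one in five fact-checkable posts received a note that was both rated helpful and on target.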
Meanwhile, Germany's Alexander von Humboldt Institut für Internet und Gesellschaft, a research institute based in Berlin, analyzed nearly 9,000 community notes in the run-up to the country's federal elections in February 2025 and found that "community notes follow political patterns."
The institute said, "Users who write notes are not free of political views. Their assessments and comments may therefore be influenced by their own interests or ideological biases."
Poynter's Mahadevan explained in an interview with DW's fact-checking team how people may be gaming the system: when someone new joins Community Notes, X assumes they're unbiased because they haven't rated many notes yet.
"Bad actors and troll farms have figured out you can flood the system with new accounts to upvote certain viewpoints and get those notes published," says Mahadevan.
Misinformation spreads faster than community notes
Another potential problem with notes is speed. A study by the Digital Democracy Institute of the Americas (DDIA), which analyzed over 1.76 million notes across 55 languages from January 2021 to March 2025, found that while publication speed has improved — from over 100 days in 2022 to an average of 14 days in 2025 — that's still too slow to help stop the spread of disinformation. Falsehoods spread within hours, not days.
For example, a November 2023 Bloomberg analysis during the Israel-Hamas war found that relevant community notes took over seven hours to appear, with some taking as long as 70 hours. In breaking news situations, where users are looking for quick and reliable information online, this might be too long.
Are community notes becoming a numbers game?
Community notes can help clarify misinformation, even if they sometimes mislead. But their overall output is declining, and that is where X's strategy comes into question.
According to Grok, X's AI chatbot, more than 1 million users were contributing to Community Notes as of May 2025, despite the decline in published notes.
But Mahadevan calls this a façade, pointing out two opposing trends: contributor numbers are rising, but the number of published notes is falling. He describes X's approach as a kind of "Ponzi scheme": constantly adding new users to keep the system active while its actual fact-checking effectiveness declines.
Need for more safeguards
With platforms like Meta and TikTok now copying X's community notes model, experts warn that the risks are growing. At their core, community notes depend on ordinary users having the skills to fact-check complex claims.
Mahadevan puts it bluntly: "We live in a very media-illiterate society, and people have a tough time determining what's a trustworthy source."
Therefore, without sufficient safeguards, such systems risk amplifying misinformation instead of stopping it.
Thomas Sparrow contributed to this fact check.
Edited by: Matt Ford