The End of Fact-Checking Increases the Dangers of Social Media
“Community notes haven’t been very effective as a form of fact-checking on social media,” says Bhaskar Chakravorti, dean of global business at The Fletcher School. “One problem is that the community interventions often happen too slowly and miss the window when a problematic post is most viral.”

More disinformation and “toxic material” is likely on platforms, the Fletcher School professor argues. Meta’s recent announcement that it will discontinue the use of third-party fact-checkers on platforms like Facebook and Instagram in the United States has sparked fears of a new era of disinformation on social media. Meta is switching to a “community notes” model like that used on the social platform X, which could lead to “an increase in toxic material,” says Chakravorti. Here, he discusses how the elimination of fact-checking may affect our social media feeds.
There’s absolutely no question that it’s been effective in labelling and stopping some egregious disinformation, but it is far from perfect because there are so many ways in which problematic content is created and...

Meta’s recent decision to end its third-party fact-checking program has reignited fears about misinformation’s growing influence. The move reflects a broader trend among social media giants: stepping away from content moderation in favor of engagement-driven algorithms. As these companies strip away safeguards, journalism faces an existential crisis in which facts struggle to compete with viral falsehoods. The implications are sweeping worldwide, fueling polarization and emboldening authoritarian rule. Social media platforms have become the world’s most powerful distributors of news, shaping public opinion at an unprecedented scale.
But rather than prioritizing accuracy, these platforms are engineered to maximize engagement. Sensationalism, controversy, and misinformation outperform factual reporting, eroding trust in journalism and deepening societal divisions.

Meta’s fact-checking initiative was once seen as a modest attempt to combat misinformation. Under the program, independent fact-checkers flagged misleading content, reducing its reach and adding context to prevent the spread of falsehoods. But earlier this year, Meta scrapped the initiative, citing concerns over “political bias” in fact-checking organizations and “too much censorship.” CEO Mark Zuckerberg framed the decision as a step toward “more speech, fewer mistakes,”... In fact, PolitiFact and FactCheck.org, both of which previously collaborated with Meta, have dismissed claims of liberal bias.
They emphasize that they never had the power to remove content or censor posts, reiterating that Meta has always retained ultimate control over content moderation decisions. Community notes, modeled after X’s (formerly Twitter’s) user-driven fact-checking system, allow users to append additional context to posts they find misleading. Proponents claim this method democratizes fact-checking, but critics argue it is an ineffective and unreliable replacement. A study by Alexios Mantzarlis and Alex Mahadevan on the 2024 U.S. election concluded that “At the broadest level, it’s hard to call the program a success,” noting that only 29% of fact-checkable tweets in their sample carried a helpful note.
Meta, the company that owns Facebook, Instagram, and WhatsApp, recently announced that it will stop using third-party fact-checkers on all of its platforms. Mark Zuckerberg, Meta’s chief executive, believes the fact-checking systems put in place in 2016 have made too many mistakes and have infringed upon free speech. However, the change has heightened fears about disinformation, which has been rampant on social media in recent years. Under considerable public pressure, Meta had begun using outside organizations such as The Associated Press, ABC News, and Snopes, along with other global organizations, to comb through potentially false or misleading posts. These organizations could then rule on whether posts needed to be annotated or removed.
Nearly 100 organizations working in more than 60 languages globally were part of Meta’s fact-checking program. From 2016 to 2024, Meta spent billions of dollars to fix content moderation issues, but Zuckerberg reports that he grew frustrated as an increasing number of people voiced complaints about the fact-checking program, often saying...

Meta CEO Mark Zuckerberg’s announcement that the social media company’s fact-checking programs will end raises concerns about how easily misinformation will spread through its platforms. Zuckerberg has stood firm, contending that shifting to a crowdsourced fact-checking method will improve user experiences on Facebook, Instagram, and Threads. Virginia Tech communications experts Megan Duncan and Cayce Myers and digital literacy expert Julia Feerrar offer insights on how effective crowdsourced fact-checking could be, what would motivate Meta to make these changes, and... “The move by Zuckerberg, which copies X’s Community Notes approach to checking misinformation, is a move toward a populist social media approach to truth.
It adopts the view that what average people agree on is the truth, privileging that over the view of experts who have closely studied the issue. There are many dedicated journalists who care about fairly presenting the facts, and because of Zuckerberg's decision some journalists may lose their jobs,” Duncan said. “Research I’ve done found that the type of crowdsourcing done for social media Community Notes is vulnerable to political bias. Under study conditions that closely resembled the function of social media community notes, when participants were presented with a choice whether or not to contribute, only those with the most extreme opinions chose to... This means that in a Community Notes program where politically ambivalent audiences aren’t required to participate, the results of crowdsourced credibility labels are politically skewed,” she said. “The process is also much slower than it takes to spread misinformation, allowing an idea that is contradicted by all evidence to influence public opinion,” Duncan said.
Meta’s decision to reduce content controls amplifies the most damaging effects of its platforms, as users are left without the necessary tools to combat harmful misinformation. The purchase of Twitter by Elon Musk, a close ally of future U.S. president Donald Trump, transformed the platform, which Musk rebranded as X, into a lawless jungle in the name of supposed freedom of expression. A study conducted by the School of Science and Technology at City St George’s, University of London, covering nine countries, found that in just two years X has become a hub of political abuse and misuse, where adversaries, dissenters, and... Meta’s platforms (Facebook, Instagram, and Threads) are following suit as the company ends its third-party fact-checking program and eases content moderation.
“The consequences of these decisions will be an increase in harassment, hate speech, and other harmful behaviors across platforms with billions of users,” warns Alexios Mantzarlis, director of Security, Trust, and Protection in the... Few organizations defend Meta’s decision. Mantzarlis, who was involved in the International Fact-Checking Network, emphasizes that the shift taken by Mark Zuckerberg’s company is twofold: not only has Meta stopped verifying data to identify falsehoods and shifted control to users,... This was confirmed by Joel Kaplan, Meta’s new director of global affairs: “We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent... It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.” Mantzarlis is highly critical of the move: “In addition to ending the fact-checking program, Zuckerberg has also announced a more lax approach to content moderation, so Meta will no longer proactively seek out potentially...
Mark Zuckerberg recently announced that Meta, the parent company of Facebook, Instagram, and other services, will no longer fact-check social media content on its platforms. Instead, Meta will use a crowd-sourced system like X’s Community Notes, where approved users can add notes to posts to provide context or correct mis- and disinformation. At the same time as the announcement, disinformation about wildfires in Los Angeles ran rampant on social media. It isn’t just natural disasters and public health crises that are subject to online disinformation; elections are also targeted. What happens on widely used social media platforms matters, and it’s worth asking what effect Meta’s decision to cast aside its modest fact-checking efforts will have, both immediately and in the longer term. According to Sander van der Linden, a social psychologist at the University of Cambridge who advised Facebook on its fact-checking program in 2022, “Studies provide very consistent evidence that fact-checking does at least partially...
A 2019 meta-analysis that compared results of fact-checking across 30 individual studies found that while it has a positive overall influence on political beliefs, fact-checking’s effectiveness depends on a person’s preexisting ideology, beliefs, and... Overall, the science of fact-checking is complicated, but it suggests that fact-checking can be effective in combating the spread of misinformation. What about community notes? Are they effective? A 2024 study analyzed X’s Community Notes program to see whether it lowered engagement with misinformation on the social media platform. While researchers did observe an increase in the number of fact-checks through Community Notes, they didn’t find any evidence that Community Notes significantly reduced engagement with misleading posts.
The study concluded that Community Notes “might be too slow to effectively reduce engagement with misinformation in the early (and most viral) stage of diffusion.” Another study of X’s Community Notes found that they... While it is clear that mis- and disinformation interventions on social media platforms need improvement, abandoning fact-checking for Community Notes doesn’t seem to be the answer. Political actors like President Trump and X owner Elon Musk have attacked social media fact-checking programs, claiming that these programs are biased and suppress free speech. Coming just ahead of President Trump’s inauguration, Zuckerberg’s announcement has been described as an obvious capitulation to long-time political pressure from Trump and his supporters. After Zuckerberg’s announcement, then-President-elect Donald Trump heralded the decision and said “Meta, Facebook, I think they’ve come a long way.” For his part, Trump agrees that his criticism of social media platforms was “probably”...

Meta’s Fact-Checking Demise: A Looming Threat to Truth and Democracy
The social media landscape is bracing for a potential surge in misinformation following Meta’s recent decision to discontinue its fact-checking program in the United States. This move has sparked significant concern among academics and experts, who fear the repercussions could extend far beyond American borders, impacting democratic processes and online discourse worldwide. The decision, announced by Meta, the parent company of Facebook and Instagram, signals a shift away from third-party fact verification, raising alarms about the potential proliferation of false and misleading information across its platforms. While Meta maintains that the change is currently limited to the US market, many believe it is a harbinger of similar moves in other countries, including Canada. Daniel Downes, a professor of communication studies at the University of New Brunswick Saint John, anticipates the eventual adoption of this policy in Canada, citing the rapid pace of policy changes within the tech... He expresses particular concern about the implications for Canadian federal elections, suggesting that the absence of fact-checking mechanisms could exacerbate the spread of both misinformation and disinformation, thereby undermining public discourse and potentially influencing...
Downes foresees a more polarized and less responsible political climate fueled by unchecked false narratives. This could erode trust in democratic institutions and processes, further complicating informed decision-making by voters. Meta’s proposed alternative to professional fact-checking involves a system of "community notes," a user-generated approach intended to provide open-source verification. However, critics argue this method is inadequate and potentially susceptible to manipulation. Erin Steuter, a professor of sociology at Mount Allison University, points to the experience of X (formerly Twitter), which implemented a similar system, arguing that it has failed to cultivate knowledge or foster productive... Instead, she observes, it has become a breeding ground for unproductive arguments and dismissals of opposing viewpoints.
This raises concerns about the effectiveness of user-generated fact-checking and its potential to contribute to further polarization and echo chambers online. While Meta CEO Mark Zuckerberg cites free speech concerns as the rationale behind the decision, Steuter suggests a more complex interplay of factors, including the growing influence of the far-right in the tech industry... This raises questions about the motivations behind the move and the extent to which political considerations may be driving these decisions. The close relationship between tech companies and political actors may be influencing platform policies, shaping the information landscape and impacting public discourse.

Fact-checking has long been regarded as a foundational pillar of responsible journalism and online discourse. Traditionally, news agencies, independent watchdogs, and social media platforms have partnered with or employed fact-checkers to verify claims, combat misinformation, and maintain a sense of objective truth.
In recent years, however, rising volumes of digital content, the accelerating spread of falsehoods, and global shifts in how people consume and interpret information have placed unprecedented pressure on these traditional systems. Major social media platforms such as Meta (Facebook), Twitter, and YouTube are moving away from the fact-checking measures they once championed, instead adopting or experimenting with models where user interaction, algorithmic moderation, and... This article offers a detailed examination of the declining prominence of traditional fact-checking. We delve into how misinformation proliferates more quickly than ever, explore the diverse motivations behind platform policy changes, and assess the socio-political ramifications of transferring fact-verification responsibilities onto end-users. By illustrating the opportunities, risks, and ethical dilemmas posed by shifting notions of truth, this piece invites readers to question whether we are truly witnessing the death of fact-checking, or rather its transformation into a...

Keyphrases: Decline of Fact-Checking, Digital Truth Management, User-Driven Content Evaluation, Algorithmic Moderation, Misinformation
For several decades, fact-checking was championed as an essential mechanism to uphold journalistic integrity and public trust. Media organizations and emergent digital platforms established fact-checking partnerships to combat the rising tide of misinformation, especially in contexts such as political campaigns and crisis reporting. Governments, activists, and private companies alike recognized that falsehoods disseminated at scale could distort public perception, stoke division, and undermine democratic processes. Yet, the past few years have seen a gradual but significant shift. As data analytics improved, platforms gained clearer insights into the sheer scope of user-generated content—and the near impossibility of verifying every claim in real time. At the same time, increasingly polarized public discourse eroded trust in the very institutions tasked with distinguishing fact from fiction.