What Public Discourse Gets Wrong About Misinformation Online
A new study from the Computational Social Science Lab shows that while online misinformation exists, it isn’t as pervasive as pundits and the press suggest. This article was originally published by the Annenberg School for Communication at the University of Pennsylvania.

In 2006, Facebook launched its News Feed feature, sparking seemingly endless contentious public discourse on the power of the “social media algorithm” in shaping what people see online. Nearly two decades and many recommendation algorithm tweaks later, this discourse continues, now laser-focused on whether social media recommendation algorithms are primarily responsible for exposure to online misinformation and extremist content.

Researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania, led by Stevens University Professor Duncan Watts, study Americans’ news consumption. In a new article in Nature, Watts, along with David Rothschild of Microsoft Research (Wharton Ph.D. ’11 and PI in the CSSLab), Ceren Budak of the University of Michigan, Brendan Nyhan of Dartmouth College, and Annenberg alumna Emily Thorson (Ph.D. ’13) of Syracuse University, review years of behavioral science research on exposure to false and radical content online. They find that exposure to harmful and false information on social media is minimal for all but the most extreme users. A broad claim like “it is well known that social media amplifies misinformation and other harmful content,” recently published in The New York Times, might catch people’s attention, but it isn’t supported by empirical evidence.

Misunderstanding the Harms of Online Misinformation (Nature, volume 630, pages 45–53, 2024)

The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure, and that social media is a primary cause of broader social problems such as polarization.

In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.
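The distinction the abstract draws between low average exposure and consumption concentrated in the tails of the distribution is easy to gloss over. A minimal simulation sketch, using entirely synthetic numbers rather than anything measured in the paper (the log-normal shape and its sigma are assumptions for illustration), shows how both can be true at once:

```python
# Synthetic illustration: with heavy-tailed consumption, a tiny fringe can
# account for most exposure even though the typical user sees almost nothing.
import numpy as np

rng = np.random.default_rng(0)

# Assumption for illustration: per-user consumption of untrustworthy content
# follows a log-normal (heavy-tailed) distribution.
exposure = rng.lognormal(mean=0.0, sigma=3.0, size=1_000_000)

top1_cutoff = np.quantile(exposure, 0.99)
top1_share = exposure[exposure >= top1_cutoff].sum() / exposure.sum()

print(f"Median items per user: {np.median(exposure):.2f}")  # ~1: most users see almost none
print(f"Mean items per user:   {exposure.mean():.2f}")      # pulled far upward by the tail
print(f"Top 1% share of total: {top1_share:.0%}")           # roughly three-quarters
```

Under an assumption like this, interventions aimed at the median user touch very little of total consumption, which is why the authors direct accountability at the tails.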
What Public Discourse Gets Wrong about Social Media Misinformation

The proliferation of misinformation on social media platforms has become a pressing societal concern, sparking heated debates and prompting calls for greater regulation. However, current public discourse often oversimplifies the issue, focusing narrowly on platform accountability while neglecting the complex interplay of factors that contribute to the spread of false or misleading information. This article examines the limitations of prevailing narratives and offers a more comprehensive understanding of the problem.

A dominant narrative in the public sphere frames social media platforms as the primary culprits in the misinformation crisis, portraying them as irresponsible actors prioritizing profit over the well-being of their users.
While platforms undoubtedly bear some responsibility for the content hosted on their services, this narrative overlooks the crucial role of individual users in creating and disseminating misinformation. Focusing solely on platform accountability risks neglecting the underlying societal factors that make individuals susceptible to false information.

Another common misconception is the belief that misinformation spreads primarily through coordinated disinformation campaigns orchestrated by malicious actors. While such campaigns certainly exist and can have a significant impact, research suggests that much of the misinformation circulating online originates from ordinary users inadvertently sharing false or misleading content. This highlights the importance of media literacy and critical thinking skills in combating the spread of misinformation.

Furthermore, public discourse often fails to adequately address the diversity of motivations behind the creation and dissemination of misinformation.
While some individuals may intentionally spread false information for political or financial gain, others may do so out of genuine belief or a desire to belong to a particular online community. Understanding these diverse motivations is crucial for developing effective interventions.

The Myth of Algorithmic Amplification: Debunking Misconceptions about Social Media’s Role in Spreading Misinformation

The narrative surrounding social media’s impact on society is often dominated by claims of algorithmic manipulation and the rampant spread of misinformation. Since the introduction of Facebook’s News Feed in 2006, public discourse has focused on the power of these algorithms to shape our online experiences, culminating in recent concerns about their role in disseminating harmful content. This narrative, often fueled by alarming statistics and anecdotal evidence, paints a picture of a digital landscape overrun by extremist ideologies and manipulative algorithms.
However, a closer examination of the existing research reveals a different story, one in which the influence of algorithms is often overstated and the role of individual preferences is paramount.

A new study published in Nature, led by researchers at the University of Pennsylvania’s Computational Social Science Lab, challenges the prevailing narrative, arguing that exposure to problematic content online is far less widespread than commonly assumed. Their review of existing behavioral science research indicates that such exposure is largely confined to a small subset of users actively seeking out this type of content. While acknowledging the potential impact of even small amounts of misinformation, the researchers caution against drawing sweeping conclusions from decontextualized statistics. They argue that the focus should shift from blaming algorithms to understanding the underlying demand for this content.

The study highlights the misleading nature of often-cited statistics regarding the reach of misinformation.
While seemingly large numbers, such as the 126 million U.S. Facebook users exposed to Russian troll content before the 2016 election, can be alarming, they often lack crucial context. In this particular case, the Russian content represented a minuscule fraction (0.004%) of the total content consumed by American users. The researchers emphasize that while misinformation can have a significant impact, accurately representing its prevalence is crucial to avoid exaggerating its role in shaping public opinion.
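A back-of-the-envelope calculation helps reconcile the two figures quoted above. The 126 million reach and the 0.004% share are the reported numbers; the feed volume per user is a made-up assumption for illustration only:

```python
# Context for the reported figures: a huge absolute reach can coexist with a
# vanishingly small share of what any individual user actually saw.

reached_users = 126_000_000      # users exposed at least once (reported figure)
troll_share = 0.004 / 100        # troll content as share of all feed content (reported figure)

items_per_user_per_day = 300     # hypothetical average feed consumption, for illustration
days = 365                       # one year of scrolling

total_items = items_per_user_per_day * days   # ~109,500 items per user-year
troll_items = total_items * troll_share       # ~4.4 of them from trolls

print(f"Users reached at least once:   {reached_users:,}")
print(f"Feed items per user per year:  {total_items:,}")
print(f"Troll items per user per year: {troll_items:.1f}")
```

On these assumptions, a user who scrolled all year encountered a handful of troll items among roughly a hundred thousand others, which is exactly the context the researchers say the headline number omits.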
Contrary to popular belief, the study suggests that recommendation algorithms often steer users towards more moderate content rather than pushing them towards extremist viewpoints. The researchers found that exposure to problematic content is heavily concentrated among individuals with pre-existing extreme views, indicating that algorithms largely reflect user demand rather than create it. The authors argue that algorithms are designed to prioritize user engagement and platform stability, and thus tend to favor mainstream content over fringe ideologies.

Misinformed about Misinformation: On the Polarizing Discourse on Misinformation and Its Consequences for the Field

Researchers at the College of Information (University of Maryland), the Citizen Lab (Munk School of Public Affairs, University of Toronto), and the School of International Service (American University) take a complementary view of the field itself. The field of misinformation research is facing several challenges, from attacks on academic freedom to polarizing discourse about the nature and extent of the problem for elections and digital well-being. They see this as an inflection point, however, and an opportunity to chart a more informed and contextual research practice. To foster credible research and informed public policy, they argue that research on misinformation should be locally focused, self-reflexive, and interdisciplinary, addressing critical questions about what counts as misinformation and why. By concentrating on when and how misinformation affects society, instead of whether, the field can provide more precise insights and contribute to productive discussions.

For almost a decade, the study of misinformation has been a priority for policy circles, political elites, academic institutions, non-profit organizations, and the media. Substantial resources have been dedicated to identifying its effects, how and why it spreads, and how to mitigate its harm. Yet, despite these efforts, it can sometimes feel as if the field is no closer to answering basic questions about misinformation’s real-world impacts, such as its effects on elections or its links to extremism.
Other researchers emphasize practical countermeasures. First, social media organizations need to provide corrections to misinformation and point out when information may be wrong or misleading. Second, media literacy education is essential (Chen et al., 2022; Fendt et al., 2023): media literacy programs should promote critical thinking skills and provide concrete strategies and techniques individuals can deploy for fact-checking and verifying information.
Public discourse gets several other things wrong about social media misinformation. The spread of false content on social platforms affects public health, political discourse, and societal trust, yet popular narratives often oversimplify the problem and misdirect potential solutions. A deeper understanding of the mechanisms driving misinformation is crucial for developing effective countermeasures.

One common misconception is the focus on individual “bad actors” as the primary source of misinformation.
While malicious actors undoubtedly contribute to the problem, emphasizing individual culpability overlooks the systemic issues at play. The algorithms that govern social media platforms are designed to maximize engagement, which can inadvertently amplify sensationalized and emotionally charged content regardless of its veracity; a toy illustration of this mechanism appears at the end of this section. Furthermore, the networked structure of social media facilitates the rapid dissemination of information, making it challenging to contain the spread of false narratives. Addressing misinformation requires a shift from blaming individuals to understanding and reforming the underlying architecture of these platforms.

Another oversimplification is the belief that simply providing accurate information will counter the effects of misinformation. The “deficit model” of communication, which assumes that people lack knowledge and will readily accept corrective information, fails to account for the complex psychological and social factors influencing belief formation.
People often cling to existing beliefs, even in the face of contradictory evidence, especially when those beliefs are tied to their social identity or political affiliations. Moreover, the sheer volume of information available online creates an “infodemic,” making it difficult for individuals to discern credible sources from misleading ones. Effective countermeasures require acknowledging the cognitive biases and social dynamics that shape belief, and developing strategies that address these underlying factors.

The role of social media companies in combating misinformation is also often misrepresented. While these companies have a responsibility to address the issue, calls for censorship and content moderation raise complex questions about free speech and the potential for bias. Striking a balance between protecting users from harmful content and respecting freedom of expression is a challenging task, requiring careful consideration of ethical and legal implications.
Furthermore, solely relying on platform-based solutions ignores the broader societal context in which misinformation thrives. Addressing the root causes of misinformation requires collaborative efforts involving not only social media companies but also policymakers, educators, researchers, and civil society organizations.
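As referenced above, here is a toy sketch of the engagement-maximizing ranking mechanism. Everything in it is hypothetical: the Item fields, the predicted_engagement weights, and the sample headlines are invented for illustration, and real ranking systems are vastly more complex (indeed, the Nature review above argues their net amplifying effect is often overstated).

```python
# Toy sketch: if a feed ranks purely by predicted engagement, and engagement
# correlates with emotional arousal, sensational items surface first whether
# or not they are accurate. All values here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    arousal: float   # 0..1, how emotionally charged the item is (hypothetical)
    accurate: bool   # ground truth, invisible to the ranker

def predicted_engagement(item: Item) -> float:
    # Hypothetical engagement model: arousal drives clicks and shares.
    return 0.2 + 0.8 * item.arousal

feed = [
    Item("Measured policy analysis", arousal=0.2, accurate=True),
    Item("Shocking claim, thinly sourced", arousal=0.9, accurate=False),
    Item("Routine local news update", arousal=0.3, accurate=True),
]

# Veracity never enters the objective, so the inaccurate item ranks first.
for item in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(item):.2f}  {item.headline}  accurate={item.accurate}")
```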