Misunderstanding the Harms of Online Misinformation
The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems such as polarization. In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.

Budak, C., Nyhan, B., Rothschild, D. M., Thorson, E. & Watts, D. J. Misunderstanding the harms of online misinformation. Nature 630, 45–53 (2024).
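To make the abstract's central statistical point concrete, the sketch below simulates a heavy-tailed exposure distribution. It is purely illustrative: the population size, the 1% fringe share and both distributions are hypothetical assumptions, not figures from the paper. It shows how a low average exposure across a population can coexist with consumption that is heavily concentrated in the tail.

```python
import numpy as np

# Illustrative simulation (hypothetical numbers, not data from the paper):
# most users see almost no false content, while a small fringe with strong
# motivations to seek it out draws from a much heavier-tailed distribution.
rng = np.random.default_rng(seed=0)

n = 1_000_000                                # hypothetical population size
fringe = rng.random(n) < 0.01                # assume a 1% high-consumption fringe

exposure = np.where(
    fringe,
    rng.pareto(a=1.5, size=n) * 50.0,        # heavy-tailed fringe consumption
    rng.exponential(scale=0.1, size=n),      # near-zero typical consumption
)

mean_exposure = exposure.mean()
top_1_percent = np.sort(exposure)[-n // 100:]
tail_share = top_1_percent.sum() / exposure.sum()

print(f"mean exposure per user: {mean_exposure:.2f} items")
print(f"share of all exposure from the top 1% of users: {tail_share:.1%}")
```

Under these assumed parameters, the top 1% of users account for the overwhelming majority of total exposure even though the population-wide mean stays small, which is the pattern of concentration in the tails that the authors describe.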
In 2006, Facebook launched its News Feed feature, sparking seemingly endless contentious public discourse on the power of the “social media algorithm” in shaping what people see online. Nearly two decades and many recommendation-algorithm tweaks later, this discourse continues, now laser-focused on whether social media recommendation algorithms are primarily responsible for exposure to online misinformation and extremist content. Researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania, led by Duncan Watts, Stevens University Professor in Computer and Information Science (CIS), study Americans’ news consumption. In a new article in Nature, Watts, along with David Rothschild of Microsoft Research (Wharton Ph.D. ‘11 and PI in the CSSLab), Ceren Budak of the University of Michigan, Brendan Nyhan of Dartmouth College and Annenberg alumna Emily Thorson (Ph.D. ’13) of Syracuse University, review years of behavioral science research on exposure to false and radical content online and find that exposure to harmful and false information on social media is minimal to all but the most extreme users.
“The research shows that only a small fraction of people are exposed to false and radical content online,” says Rothschild, “and that it’s personal preferences, not algorithms, that lead people to this content. The people who are exposed to false and radical content are those who seek it out.”
Subodh Mishra is Global Head of Communications at ISS STOXX. This post is based on an ISS ESG memorandum by Avleen Kaur, Corporate Ratings Research Sector Head for Technology, Media, and Telecommunications at ISS ESG. In an era of rapidly evolving digital technologies, information integrity has become a growing concern. Current threats include “misinformation”, inaccurate information shared without the intent to cause harm, and “disinformation”, inaccurate information deliberately disseminated with the purpose of deceiving audiences and doing harm. According to the World Economic Forum’s Global Risks Report 2025, survey respondents identified misinformation and disinformation as leading global risks.
Moreover, misinformation and disinformation can interact with, and be exacerbated by, other technological and societal factors, such as the rise of AI-generated content. This post examines some contemporary online risks, including problems highlighted by ISS ESG Screening & Controversies data. Additional data from the ISS ESG Corporate Rating offer insight into how companies in the Interactive Media and Online Communications industry are responding to such risks. The post also reviews evolving regulation that is shaping the digital landscape and the response to misinformation, disinformation, and related threats. With an estimated two-thirds of the global population online, most of them also social media users, the number of people such content can reach has expanded significantly.