What Public Discourse Gets Wrong About Social Media Misinformation

Bonisiwe Shabane

The following article was originally published by the Annenberg School for Communication. In 2006, Facebook launched its News Feed feature, sparking seemingly endless contentious public discourse on the power of the “social media algorithm” in shaping what people see online. Nearly two decades and many recommendation-algorithm tweaks later, this discourse continues, now laser-focused on whether social media recommendation algorithms are primarily responsible for exposure to online misinformation and extremist content. Researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania, led by Stevens University Professor Duncan Watts, study Americans’ news consumption. In a new article in Nature, Watts, along with David Rothschild of Microsoft Research, Ceren Budak of the University of Michigan, Brendan Nyhan of Dartmouth College, and Emily Thorson of Syracuse University, review years of behavioural science research on online misinformation. A broad claim like “it is well known that social media amplifies misinformation and other harmful content,” recently published in The New York Times, might catch people’s attention, but it isn’t supported by the empirical evidence.

What Public Discourse Gets Wrong about Social Media Misinformation

The spread of misinformation on social media platforms has become a significant concern in recent years, impacting public health, political discourse, and societal trust. However, popular narratives surrounding this issue often oversimplify the problem and misdirect potential solutions. A deeper understanding of the mechanisms driving misinformation is crucial for developing effective strategies to combat it. This article examines the complexities of social media misinformation, highlighting key misconceptions and offering a more nuanced perspective. One common misconception is the focus on individual "bad actors" as the primary source of misinformation.

While malicious actors undoubtedly contribute to the problem, emphasizing individual culpability overlooks the systemic issues at play. The algorithms that govern social media platforms are designed to maximize engagement, often inadvertently amplifying sensationalized and emotionally charged content, regardless of its veracity. Furthermore, the networked structure of social media facilitates the rapid dissemination of information, making it challenging to contain the spread of false narratives. Addressing misinformation requires a shift from blaming individuals to understanding and reforming the underlying architecture of these platforms. Another oversimplification is the belief that simply providing accurate information will counter the effects of misinformation. The "deficit model" of communication, which assumes that people lack knowledge and will readily accept corrective information, fails to account for the complex psychological and social factors influencing belief formation.
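To make the engagement-maximization point concrete, here is a deliberately minimal sketch in Python. Everything in it is invented for illustration (the `Post` fields and the `predicted_engagement` stand-in model); no real platform's ranker is this simple, but the core failure mode is the same: veracity never enters the scoring function.

```python
# Toy ranking sketch (illustrative only, not any platform's actual algorithm).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    emotional_charge: float  # 0..1, hypothetical proxy for how provocative a post is
    is_accurate: bool        # known to us for the demo, invisible to the ranker

def predicted_engagement(post: Post) -> float:
    # Assumed stand-in model: emotionally charged content attracts more
    # clicks and shares, so it scores higher regardless of accuracy.
    return 1.0 + 4.0 * post.emotional_charge

feed = [
    Post("Calm, accurate explainer", emotional_charge=0.1, is_accurate=True),
    Post("Outrage-bait rumour", emotional_charge=0.9, is_accurate=False),
]

# Sorting by engagement alone puts the false but provocative post on top.
for post in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"score={predicted_engagement(post):.1f} accurate={post.is_accurate} {post.text}")
```

The point of the sketch is structural: as long as the objective rewards engagement and nothing in it penalizes falsehood, accurate and inaccurate content compete on provocation alone.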

People often cling to existing beliefs, even in the face of contradictory evidence, especially when those beliefs are tied to their social identity or political affiliations. Moreover, the sheer volume of information available online creates an "infodemic," making it difficult for individuals to discern credible sources from misleading ones. Effective countermeasures require acknowledging the cognitive biases and social dynamics that shape belief and developing strategies that address these underlying factors. The role of social media companies in combating misinformation is also often misrepresented. While these companies have a responsibility to address the issue, calls for censorship and content moderation raise complex questions about free speech and the potential for bias. Striking a balance between protecting users from harmful content and respecting freedom of expression is a challenging task, requiring careful consideration of ethical and legal implications.

Furthermore, solely relying on platform-based solutions ignores the broader societal context in which misinformation thrives. Addressing the root causes of misinformation requires collaborative efforts involving not only social media companies but also policymakers, educators, researchers, and civil society organizations.

The proliferation of misinformation on social media platforms has become a pressing societal concern, sparking heated debates and prompting calls for greater regulation. However, current public discourse often oversimplifies the issue, focusing narrowly on platform accountability while neglecting the complex interplay of factors that contribute to the spread of false or misleading information. This article delves into the nuances of social media misinformation, examining the limitations of prevailing narratives and offering a more comprehensive understanding of the problem.

A dominant narrative in the public sphere frames social media platforms as the primary culprits in the misinformation crisis, portraying them as irresponsible actors prioritizing profit over the well-being of their users. While platforms undoubtedly bear some responsibility for the content hosted on their services, this narrative overlooks the crucial role of individual users in creating and disseminating misinformation. Focusing solely on platform accountability risks neglecting the underlying societal factors that make individuals susceptible to false information. Another common misconception is the belief that misinformation spreads primarily through coordinated disinformation campaigns orchestrated by malicious actors. While such campaigns certainly exist and can have a significant impact, research suggests that much of the misinformation circulating online originates from ordinary users inadvertently sharing false or misleading content. This highlights the importance of media literacy and critical thinking skills in combating the spread of misinformation.

Furthermore, public discourse often fails to adequately address the diversity of motivations behind the creation and dissemination of misinformation. While some individuals may intentionally spread false information for political or financial gain, others may do so out of genuine belief or a desire to belong to a particular online community. Understanding these diverse motivations is crucial for developing effective interventions. First, the findings suggest that social media organizations need to provide corrections to misinformation and clearly flag content that may be wrong or misleading. Second, they highlight the importance of media literacy education (Chen et al., 2022; Fendt et al., 2023). Media literacy programs should promote critical thinking skills and provide concrete strategies and techniques individuals can deploy to fact-check and verify information.

Ceren Budak, Brendan Nyhan, David M. Rothschild, Emily Thorson, Duncan J. Watts

The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions: that average exposure to problematic content is high, that algorithms are largely responsible for this exposure and that social media is a primary cause of broader social problems...

In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such content. In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.

The link between social media and misinformation is undeniable. Misinformation, particularly the kind that evokes emotion, spreads like wildfire on social media and has serious consequences, like undermining democratic processes, discrediting science, and promulgating hateful discourse that may incite physical violence.
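The review's "narrow fringe" pattern is easy to build intuition for with a small simulation. The sketch below is purely illustrative (the Pareto shape parameter is an arbitrary assumption, not an estimate from the paper): it draws heavy-tailed consumption levels and measures how much of total exposure the top 1% of simulated users accounts for.

```python
# Illustrative simulation of heavy-tailed exposure; parameters are assumptions.
import random

random.seed(0)
n_users = 100_000
# Pareto-distributed consumption: most users see very little problematic
# content, while a small fringe consumes a great deal.
exposure = sorted((random.paretovariate(1.2) for _ in range(n_users)), reverse=True)

top_share = sum(exposure[: n_users // 100]) / sum(exposure)
print(f"Top 1% of simulated users account for {top_share:.0%} of total exposure")
```

In a distribution shaped like this, average exposure stays low even when total exposure looks alarming, which is why the authors argue interventions should target the tails rather than the median user.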

If left unchecked, misinformation propagated through social media has the potential to incite social disorder, as seen in ethnic clashes worldwide. This is why social media platforms have been under growing pressure to combat misinformation and have been developing models such as fact-checking services and community notes to check its spread. This article explores the pros and cons of these models and evaluates their broader implications for online information integrity. Meta’s uptake of a crowdsourced model signals social media’s shift toward decentralized content moderation, giving users more influence over what gets flagged and why. However, the model’s reliance on agreement among diverse raters can be time-consuming. A study by Wirtschafter and Majumder (2023) shows that only about 12.5 per cent of all submitted notes are seen by the public, meaning most misleading content goes unchecked.

Further, many notes on divisive issues like politics and elections may never see the light of day, since reaching a consensus on such topics is hard. This means that many misleading posts may not be publicly flagged at all, thereby hindering risk-mitigation efforts. This casts doubt on the model’s ability to check the virality of posts that can have adverse societal impacts, especially on vulnerable communities. The fact-checking model, on the other hand, suffers from a lack of transparency, which has damaged user trust and led to allegations of bias. Since both models have their advantages and disadvantages, the future of misinformation control will require a hybrid approach. Accuracy and polarization on social media are problems too big for any single tool or model to handle effectively.
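For intuition about why consensus-gated notes stall on divisive topics, here is a highly simplified stand-in for a "diverse agreement" rule. X's actual Community Notes scoring uses a matrix-factorization ("bridging") model; the group labels, thresholds, and function names below are all invented for illustration.

```python
# Toy "diverse agreement" gate (illustrative; not the real Community Notes model).
from collections import defaultdict

def note_is_shown(ratings, min_per_group=3, min_helpful_rate=0.7):
    """ratings: list of (viewpoint_group, rated_helpful) pairs."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    # Require enough ratings from at least two distinct viewpoint groups...
    qualified = {g: v for g, v in by_group.items() if len(v) >= min_per_group}
    if len(qualified) < 2:
        return False
    # ...and a high helpfulness rate within every qualified group.
    return all(sum(v) / len(v) >= min_helpful_rate for v in qualified.values())

# On a polarizing post, each side rates along partisan lines, cross-group
# agreement never materializes, and the note is never displayed.
divisive = [("group_a", True)] * 5 + [("group_b", False)] * 5
print(note_is_shown(divisive))  # False: no cross-group consensus
```

Under any rule of this shape, exactly the posts where flagging matters most, contested political claims, are the ones least likely to clear the gate, which is consistent with the low publication rate reported above.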

Thus, platforms can combine expert validation with crowdsourced input to allow for accuracy, transparency, and scalability. Meta’s shift to a crowdsourced model of fact-checking is likely to have broader implications for public discourse, since social media platforms hold immense power in terms of how their policies affect politics, the economy,... This change comes against the background of sweeping cost-cutting in the tech industry, political changes in the USA and abroad, and increasing attempts to make Big Tech platforms more accountable in jurisdictions like the... These co-occurring contestations are likely to shape the direction that misinformation-countering tactics take. Until then, the crowdsourcing model is still in development, and its efficacy is yet to be seen, especially regarding polarizing topics.

In an era defined by rapid communication and live coverage of global affairs, users encounter misinformation continuously, and it has emerged as a huge challenge.

Misinformation is false or inaccurate information that is believed to be true and shared without any intention to deceive. Disinformation, on the other hand, refers to false information that is intended to mislead, often as part of a propaganda effort. It affects all aspects of life and can have a profound impact on geopolitics, international relations, wars, and more. When modern media announces “breaking news,” it captures attention and keeps viewers engaged. In the rush for television rating points, information may be circulated without proper fact-checking. This urgency can result in the spread of unverified claims and the elevation of irrelevant details, while truly important issues are overlooked.

Such practices can distort public understanding and impact strategic political decisions.

This study uses data from the Survey Series on People and their Communities (SSPC) to explore how Canadians are navigating the complexities of today’s information environment. Specifically, it examines the characteristics of those who reported having high levels of concern about misinformation online and how this concern may relate to perceptions of media trustworthiness, confidence in institutions, hopefulness about national... The information landscape has changed dramatically in the past twenty years, with news and information readily available at our fingertips.

Research has shown that many Canadians now rely on online platforms as their main source of information. A recent study found that close to 6 in 10 Canadians got their news and information from the Internet (33%) or social media (24%), with the remainder relying on more traditional sources such as... With the increased convenience and volume of online information in our current digital era come greater opportunities for the spread of misinformation, which refers to news or information that is verifiably false or inaccurate. Indeed, awareness of and concern about misinformation are growing, but its impacts on Canadian society are still being explored. In 2023, 59% of Canadians reported being very or extremely concerned about misinformation online, and 43% of Canadians found it harder to distinguish between true and false news or information compared to three years... Using data from the 2023-24 Survey Series on People and their Communities, this study provides new insights about Canadians who express greater concern over misinformation online, which can be helpful for understanding the broader...

The first section of this article examines how having high levels of concern over misinformation differs across population groups. As some studies suggest that misinformation can reduce trust in the media, erode public confidence in institutions, and potentially undermine social cohesion and other indicators of well-being, the second section considers how concern over... In 2023, nearly 6 in 10 Canadians (59%) reported that they were very or extremely concerned about the presence of misinformation online. Another 27% reported that they were somewhat concerned, while 14% of Canadians said that they were not very or not at all concerned about online misinformation.
