Combating Foreign Disinformation On Social Media

Bonisiwe Shabane

The Disinformation Pandemic: A Deep Dive into the Challenges and Collaborative Solutions

Social media, once hailed as a revolutionary tool for connection and information sharing, has increasingly become a breeding ground for disinformation: the deliberate spread of false or misleading information. This "infodemic" poses a significant threat to democratic processes, societal cohesion, and trust in institutions. From undermining elections to fueling social unrest and eroding public health, the consequences of disinformation are far-reaching and demand immediate attention. The motivations behind disinformation campaigns are diverse. Some actors spread conspiracy theories and divisive narratives for ideological reasons or personal amusement.

Political actors might engage in disinformation to sway public opinion in their favor, while foreign adversaries may seek to destabilize other nations or advance their geopolitical agendas. Financially motivated actors spread scams and clickbait for profit, whereas competitors might aim to tarnish the reputations of rivals. Understanding these varied motivations is crucial for developing effective countermeasures. The rapid growth of disinformation is driven by several factors. Social media algorithms often prioritize sensational and emotionally charged content, inadvertently amplifying false information. Studies have shown that fake news spreads significantly faster and wider than factual information on these platforms.
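The amplification mechanism described above can be made concrete with a toy sketch. Everything here is an illustrative assumption, not any platform's actual algorithm: the posts, the engagement numbers, and the scoring weights are invented simply to show how ranking purely by engagement signals tends to push the most sensational item to the top of a feed.

```python
# Toy feed-ranking sketch: ranking purely by engagement signals
# tends to surface the most emotionally charged posts.
# All posts, numbers, and weights here are invented for illustration.

posts = [
    {"text": "City council publishes annual budget report", "likes": 40,  "shares": 3},
    {"text": "SHOCKING: secret plot behind new policy!",     "likes": 900, "shares": 450},
    {"text": "Local library extends weekend hours",          "likes": 25,  "shares": 1},
]

def engagement_score(post):
    # Shares are weighted more heavily than likes because they
    # propagate content to new audiences (an assumed weighting).
    return post["likes"] + 5 * post["shares"]

ranked = sorted(posts, key=engagement_score, reverse=True)
for post in ranked:
    print(engagement_score(post), post["text"])
```

Because the scoring function never looks at accuracy, only at reactions, the emotionally charged post dominates the ranking regardless of its truth value, which is the dynamic the studies cited above describe.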

Moreover, the emergence of generative AI has made it easier than ever to create highly convincing deepfakes, synthetic images, and fabricated text, blurring the lines between reality and fiction. The proliferation of AI-powered bots further exacerbates the problem, flooding social media with automated disinformation campaigns that reach vast audiences. Combating this infodemic requires a concerted and collaborative effort. Social media platforms, governments, organizations, and individuals all have a crucial role to play in prioritizing truth and mitigating the spread of disinformation.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest. To learn more about RAND, visit www.rand.org.

Research Integrity

Our mission to help improve policy and decisionmaking through research and analysis is enabled through our core values of quality and objectivity and our unwavering commitment to the highest level of integrity... To help ensure our research and analysis are rigorous, objective, and nonpartisan, we subject our research publications to a robust and exacting quality-assurance process; avoid both the appearance and reality of financial and other... For more information, visit www.rand.org/about/principles.

RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.

The dissemination of purposely deceitful or misleading content to target audiences for political aims or economic purposes constitutes a threat to democratic societies and institutions, and is increasingly being recognized as a major security... Disinformation can also be part of hybrid threat activities. This research paper examines findings on the effects of disinformation and addresses the question of how effective counterstrategies against digital disinformation are, with the aim of assessing the impact of responses such as the... The paper's objective is to synthesize the main scientific findings on disinformation effects and on the effectiveness of debunking, inoculation, and forewarning strategies against digital disinformation. A mixed methodology is used, combining qualitative...

Social media have democratized communication but have also led to the explosion of the so-called "fake news" phenomenon. This problem has visible implications for global security, both political (e.g., the QAnon case) and health-related (anti-COVID-vaccination and no-vax fake news). Models that detect the problem in real time and on large amounts of data are needed. Digital methods and text classification procedures can do this through predictive approaches that identify a suspect message or author. This paper aims to apply a supervised model to the study of fake news on the Twittersphere, to highlight its potential and its preliminary limitations. The case study is the infodemic generated on social media during the first phase of the COVID-19 emergency.

The application of the supervised model involved the use of a training and a testing dataset. The paper also walks through the preliminary steps of building the training dataset, taking a critical look at the challenges of working with supervised algorithms. Two points emerge. The first is that it is important to block the sources of bad information before the information itself. The second is that algorithms can themselves be sources of bias, so social media companies need to be very careful about relying on automated classification.
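The paper's actual pipeline is not reproduced here, but the supervised train-then-classify workflow it describes can be sketched minimally. The following uses a bag-of-words Naive Bayes classifier with add-one smoothing on a tiny invented labeled dataset; the model choice, features, and every example text are assumptions made purely for illustration.

```python
import math
from collections import Counter

# Tiny invented training set: label 1 = suspect/fake, 0 = reliable.
train = [
    ("miracle cure hidden by doctors share before deleted", 1),
    ("secret lab leak proof they dont want you to see", 1),
    ("shocking truth about vaccines exposed", 1),
    ("health ministry publishes updated vaccination schedule", 0),
    ("study in peer reviewed journal reports trial results", 0),
    ("official guidance on booster eligibility released today", 0),
]

def tokenize(text):
    return text.lower().split()

# Fit a multinomial Naive Bayes model: per-class word counts and priors.
word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(tokenize(text))

vocab = set(word_counts[0]) | set(word_counts[1])

def predict(text):
    scores = {}
    for label in (0, 1):
        score = math.log(class_counts[label] / len(train))  # log prior
        total = sum(word_counts[label].values())
        for w in tokenize(text):
            if w not in vocab:
                continue  # ignore out-of-vocabulary words
            # add-one smoothed log likelihood
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("shocking secret cure they dont want you to see"))   # prints 1
print(predict("ministry publishes peer reviewed trial results"))   # prints 0
```

The sketch also makes the paper's two cautionary points tangible: the classifier's judgments are entirely a product of its training data, so biased or unrepresentative labels translate directly into biased predictions.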

In an era where social media platforms have become battlegrounds for information integrity, a new study sheds light on the mechanics of disinformation spread and offers innovative solutions to counteract it. Conducted by a team of researchers from Brandeis University, George Mason University, Massachusetts Institute of Technology, and Carnegie Mellon University, the study examined the dynamics of "disinformation wars," which refers to the intentional spread of fake... This method has proved alarmingly effective in misleading the public. "Our findings reveal a disturbing trend of entities engaging in disinformation wars, using the anonymity and reach of social media to influence political and social narratives," said Maryam Saeedi, Assistant Professor of Economics at... "This manipulation is not only a direct threat to democratic processes but also to the general public's ability to discern truth from fiction." The research emphasizes the urgency of addressing the spread of fake news, particularly highlighting the sophisticated tactics employed by some regimes and organizations to manipulate public opinion.

The study introduces a preemptive strategy known as "ex-ante content moderation," which assigns a disinformation score to accounts based on their likelihood of spreading false information. This approach aims to proactively identify and mitigate the impact of disinformation before it reaches a wide audience.

Humanities and Social Sciences Communications, volume 12, Article number: 803 (2025)

In response to disinformation projected by authoritarian regimes and antagonistic actors, states and institutions implement various countermeasures to fortify the information space, including warnings and educational efforts. Paradoxically, such information, meant to counter disinformation, can render people excessively vigilant and skeptical of reliable information.
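The study's actual scoring model is not given here, but the ex-ante idea can be sketched under simple assumptions: each account's score is a smoothed fraction of its past shares that were later debunked, so the score reflects likelihood of spreading false content before any new post goes viral. The account data, the smoothing rule, and the threshold are all invented for illustration.

```python
# Sketch of "ex-ante content moderation": score accounts by their
# history of sharing content later debunked by fact-checkers.
# Account data, the scoring rule, and the threshold are illustrative
# assumptions, not the study's actual model.

def disinformation_score(shares_total, shares_debunked, prior=0.5, prior_weight=10):
    """Smoothed estimate of an account's probability of sharing false
    content; the smoothing keeps brand-new accounts near the prior
    until they build a track record."""
    return (shares_debunked + prior * prior_weight) / (shares_total + prior_weight)

accounts = {
    "veteran_spreader": (200, 150),  # long history, mostly debunked shares
    "reliable_poster":  (300, 3),    # long history, almost never debunked
    "brand_new":        (0, 0),      # no history: falls back to the prior
}

THRESHOLD = 0.6  # assumed moderation threshold

for name, (total, debunked) in accounts.items():
    score = disinformation_score(total, debunked)
    print(f"{name}: score={score:.2f} flagged={score >= THRESHOLD}")
```

The smoothing term is the interesting design choice: it prevents an account with one unlucky share from being flagged, while still letting a persistent spreader's score converge to its true debunk rate as history accumulates.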

Can this be avoided? This study assesses the idea that framing unreliable information as a foreign threat can enhance the effectiveness of media literacy training while reducing the risk of excessive skepticism toward trustworthy sources. The authors tested two media literacy training videos in a preregistered randomized controlled experiment with a between-subjects design, using a nationally representative sample of the Swedish population (N = 1054). One video focused on source criticism without referencing external threats, while the other focused on the problem of disinformation from foreign actors, specifically Russia. Unlike findings from the U.S., the results show that both experimental groups improved their ability to identify unreliable information without becoming more distrustful of credible information from domestic media, public service outlets, and government... The group exposed to the external threat narrative demonstrated the highest level of discernment and greatest trust in credible information.
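The study's exact measures are not reproduced here, but "discernment" in experiments of this kind is commonly computed as the rate of correctly flagging unreliable items minus the rate of wrongly rejecting reliable ones, which also makes the "excessive skepticism" failure mode visible as a high false-alarm rate. A minimal sketch with invented participant responses:

```python
# Invented participant responses for one person:
# each item is (truly_unreliable, judged_unreliable).
responses = [
    (True, True), (True, True), (True, False), (True, True),        # unreliable items
    (False, False), (False, False), (False, True), (False, False),  # reliable items
]

def discernment(responses):
    """Hit rate on unreliable items minus false-alarm rate on reliable
    items: 1.0 = perfect discrimination, 0.0 = none. Excessive
    skepticism shows up as a high false-alarm rate, dragging the
    score down even when every unreliable item is caught."""
    unreliable = [judged for truly, judged in responses if truly]
    reliable = [judged for truly, judged in responses if not truly]
    hit_rate = sum(unreliable) / len(unreliable)
    false_alarm_rate = sum(reliable) / len(reliable)
    return hit_rate - false_alarm_rate

print(discernment(responses))  # prints 0.5 (hits 0.75 - false alarms 0.25)
```

Under this measure, a training that simply makes people distrust everything raises both rates and leaves discernment unchanged, which is exactly the unintended outcome the study's design tries to detect.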

These findings offer reason for cautious optimism about the potential of media literacy training. The article ends by problematizing the findings and suggesting avenues for future research.

Governments and other organizations have launched various countermeasures to address the problem of disinformation disseminated by foreign adversaries. Paradoxically, research has shown that warnings and educational measures intended to fortify societies against disinformation can erode trust both in democracy (Ross et al. 2022; Jungherr and Rauchfleisch, 2024) and in reliable information and trustworthy media (van der Meer et al. 2023; Clayton et al. 2020; Hameleers, 2023). This inadvertently serves the objectives of authoritarian states such as Russia, which disseminate disinformation in part to foster a general sense of suspicion, confusion, and cynicism that undermines societal stability long-term (Bennett and Livingston,... This naturally leads to the question of how we can inoculate against disinformation without producing unintended consequences. In this study, we advance research on disinformation countermeasures by examining a novel link between media literacy, threat perceptions, and social cohesion. We propose that media literacy education highlighting the foreign nature of certain disinformation campaigns may help citizens discern and resist unreliable information without losing trust in reliable media and governmental communication. We believe that awareness of an external threat, in tandem with a heightened national identity salience that strengthens social cohesion, is likely to move people from a default position of third-person bias ("it's other...

This could increase motivation to learn and successfully apply what one is taught in media literacy training. To assess this idea, we examined the effects of two educational videos in a preregistered experiment in Sweden. The study emerged in response to calls for additional research on how to inform about disinformation without causing negative side effects (Clayton et al. 2020; Jungherr and Rauchfleisch, 2024). Findings from previous experiments on excessive vigilance toward reliable media are inconclusive, with relatively small effect sizes and inconsistent results across test groups. Researching the balance between gullibility and skepticism, Hameleers (2023) tested the effects of disinformation interventions in the Netherlands and the U.S., finding statistically significant effects of reduced trust in reliable media only among U.S. participants. Furthermore, Humprecht et al. (2020, p. 507) highlight the need for more research on how resilience to disinformation can be sustained in countries where vulnerability to online disinformation has thus far been a lesser problem. Research on misleading information in general has primarily focused on the U.S. (Tucker et al. 2018; Humprecht et al. 2020), which, compared to Northern Europe, is a low-trust society (Ortiz-Ospina et al. 2016). Aiming to explain the varying occurrence of disinformation, Humprecht et al. (2020) found that the U.S.—characterized by low media trust, populist rhetoric, polarization, and a politicized and fragmented media environment—displayed low resilience to online disinformation, whereas countries in Western Europe and Canada—characterized by high media... Being a high-trust society (Ortiz-Ospina et al. 2016), Sweden falls into the 'high resilience' category. It is also a key target of Russian influence operations (Ramsay and Robertshaw, 2019, p. 80). This makes Sweden a suitable case for advancing research on the potential negative side effects of informing about unreliable information.

Social media has become a double-edged sword. On one side, it has revolutionised communication, enabling people to connect, share ideas, and mobilise for social change at an unprecedented scale.

On the other side, social media has become a breeding ground for disinformation, where false, misleading or derogatory information is spread deliberately to deceive people or to plant false narratives. The consequences of disinformation are far-reaching – undermining democratic processes, polarising societies and eroding trust in institutions. There are numerous motivations behind social media disinformation. Some actors love to push out conspiracy theories, hate speech or divisive narratives. Partisan actors peddle narratives that favour their own political party. Foreign adversaries such as Russia, China, Iran and North Korea promote narratives that serve their own geopolitical or nationalistic agendas.
