Combating Disinformation Needs Human-Led, AI-Enabled Disruption

Bonisiwe Shabane

To disrupt disinformation, a dual-front strategy is needed: curbing the supply of AI-enabled falsehoods while transforming the psychological and cultural structures that sustain demand for them. With more newsrooms incorporating artificial intelligence into their daily operations, a hybrid approach combining human oversight and AI automation has emerged as a promising tool to combat the rising tide of disinformation. The use of AI in journalism is far from straightforward: while it has made routine tasks more efficient, it has also exposed the complex challenges newsrooms face amid rapid technological change. At a time when AI-generated content is reshaping public sentiment and trust within the digital media landscape, its dual impact cannot be ignored – AI enhances efficiency and creative possibilities, but also raises significant... The solution appears to lie within the problem itself, but it depends on rigorous ethical frameworks and oversight, requiring coordinated action from researchers, policymakers, industry, and media stakeholders.
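The hybrid human–AI workflow described above can be sketched as a simple triage rule: the model acts autonomously only when it is confident, and routes borderline items to human reviewers. This is a minimal illustration, not any newsroom's actual system; the threshold, function name, and labels are invented for the sketch.

```python
# Hypothetical sketch of a human-in-the-loop triage rule. The threshold
# and labels are assumptions, not drawn from the source.

AUTO_THRESHOLD = 0.90  # assumed confidence cutoff for autonomous action

def triage(item_id: str, falsehood_score: float) -> str:
    """Route an item based on the model's confidence.

    falsehood_score is the model's estimated probability (0..1)
    that the item is false.
    """
    if falsehood_score >= AUTO_THRESHOLD:
        return "auto-flag"      # high confidence it is false: act automatically
    if falsehood_score <= 1 - AUTO_THRESHOLD:
        return "auto-clear"     # high confidence it is genuine
    return "human-review"       # uncertain: a person decides

# Example routing decisions
assert triage("post-1", 0.97) == "auto-flag"
assert triage("post-2", 0.55) == "human-review"
```

The design point is that automation handles the easy volume while scarce human attention is reserved for the ambiguous middle, which is where classifier errors concentrate.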

Edited by: Ludmilla Huntsman, Cognitive Security Alliance, United States
Reviewed by: J. D. Opdyke, DataMineit, LLC, United States; Hugh Lawson-Tancred, Birkbeck University of London, United Kingdom
*Correspondence: Alexander Romanishyn, a.romanishyn@ise-group.org

Received 2025 Jan 31; Accepted 2025 Jun 30; Collection date 2025.

In the rapidly evolving digital age, the proliferation of disinformation and misinformation poses significant challenges to societal trust and information integrity. Recognizing the urgency of addressing this issue, this systematic review explores the role of artificial intelligence (AI) in combating the spread of false information. The study provides a comprehensive analysis of how AI technologies were utilized from 2014 to 2024 to detect, analyze, and mitigate the impact of misinformation across various platforms. The research drew on an exhaustive search of prominent databases including ProQuest, IEEE Xplore, Web of Science, and Scopus. Articles published within the specified timeframe were meticulously screened, resulting in the identification of 8103 studies.

Through elimination of duplicates and screening based on title, abstract, and full-text review, this vast pool was distilled to 76 studies that met the eligibility criteria. Key findings from the review emphasize both the advances and the challenges in AI applications for combating misinformation. They highlight AI’s capacity to enhance information verification through sophisticated algorithms and natural language processing, and they show that integrating human oversight with continual algorithm refinement is pivotal in augmenting AI’s effectiveness in discerning and countering misinformation. By fostering collaboration across sectors and leveraging the insights gleaned from this study, researchers can propel the development of ethical and effective AI solutions.

The data presented in this study are available on request from the corresponding author.

Despite being used to create deepfakes, AI can also be used to combat misinformation and disinformation. Image: Gilles Lambert/Unsplash

The proliferation of artificial intelligence (AI) in the digital age has ushered in both remarkable innovations and unique challenges, particularly in the realm of information integrity. AI technologies, with their capability to generate convincing fake texts, images, audio and videos (often referred to as 'deepfakes'), present significant difficulties in distinguishing authentic content from synthetic creations. This capability lets wrongdoers automate and expand disinformation campaigns, greatly increasing their reach and impact. However, AI is not a villain in this story. It also plays a crucial role in combating disinformation and misinformation. Advanced AI-driven systems can analyse patterns, language use and context to aid in content moderation, fact-checking and the detection of false information.
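As a purely illustrative sketch of what "analysing patterns and language use" can mean at its simplest, the toy scorer below counts surface cues often associated with sensationalist content. Real detection systems rely on trained models over far richer features; the cue list, weights, and function name here are assumptions invented for illustration.

```python
# Toy heuristic scorer for disinformation-like surface cues. The cues and
# weights are invented for illustration; production systems use trained models.
import re

CLICKBAIT_CUES = (
    "you won't believe", "shocking", "the truth about",
    "they don't want you to know",
)

def suspicion_score(text: str) -> float:
    """Return a 0..1 heuristic score; higher means more sensationalist cues."""
    lowered = text.lower()
    score = 0.0
    # Exclamation-mark density, capped so one signal cannot dominate
    score += 0.3 * min(text.count("!"), 3) / 3
    # Shouting: words written entirely in capitals
    caps = [w for w in re.findall(r"[A-Za-z]+", text) if w.isupper() and len(w) > 2]
    score += 0.3 * min(len(caps), 3) / 3
    # Presence of any stock clickbait phrase
    score += 0.4 * any(cue in lowered for cue in CLICKBAIT_CUES)
    return round(min(score, 1.0), 2)

# A neutral headline scores low; a sensational one scores high.
print(suspicion_score("Parliament passed the budget today."))
print(suspicion_score("SHOCKING!!! The truth about vaccines THEY don't want you to know!"))
```

Such hand-written cues are brittle and easy to evade, which is exactly why the passage stresses learned models that also weigh context, not just wording.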

Understanding the nuances between misinformation (the unintentional spread of falsehoods) and disinformation (the deliberate spread) – which is crucial for effective countermeasures – could also be facilitated by AI analysis of content.

Perfect storm: the development of AI has exacerbated the problem of the creation and spread of disinformation. Image: Gefo / Adobe Stock

With the 2024 NATO Washington summit now concluded, the UK must address the significant threat posed by AI and disinformation to global security. Recent changes in governments across NATO countries – with the potential for more in the near future – have unfolded as several countries commit to increased levels of defence spending. UK Prime Minister Keir Starmer went even further by declaring his government’s intention to reach a 2.5% commitment and announcing plans to publish a 2025 strategic defence review.

The shift in support of defence spending is driven in part by concerns that a potential second Trump presidency will lead to a US retreat from NATO countries that fail to meet the 2%... Additionally, the current threat of kinetic warfare on the European continent – unprecedented since the Second World War – has compelled many European countries to enhance their military capabilities and bolster their defence institutions... This includes investing in modern, more technology-enabled capability, boosting weapons production, and launching extensive armed forces recruitment programmes. Even as governments work to support large-scale procurement programmes, threats in the information domain must be neither overlooked nor underestimated. This necessitates continuous efforts to counter the information wars that authoritarian countries wage in the online space. In the absence of appropriate data and AI governance standards, the development of AI has exacerbated the creation and spread of disinformation, requiring NATO countries to seek ways to combat and...

In a new report, Freedom House documents the ways governments are now using the tech to amplify censorship. Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year. Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online... In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.” The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and... The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence.

“Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital... Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the first installment of a series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

This is Part 1 of a two-part series. The age of information has brought with it the age of disinformation. Powered by the speed and data volume of the internet, disinformation has emerged as an insidious instrument of geopolitical power competition and domestic political warfare. It is used by both state and non-state actors to shape global public opinion, sow chaos, and chip away at trust. Artificial intelligence (AI), specifically machine learning (ML), is poised to amplify disinformation campaigns—influence operations that involve covert efforts to intentionally spread false or misleading information.

In this series, we examine how these technologies could be used to spread disinformation. Part 1 considers disinformation campaigns and the set of stages or building blocks used by human operators. In many ways they resemble a digital marketing campaign, one with malicious intent to disrupt and deceive. We offer a framework, RICHDATA, to describe the stages of disinformation campaigns and commonly used techniques. Part 2 of the series examines how AI/ML technologies may shape future disinformation campaigns. We break disinformation campaigns into multiple stages.

Through reconnaissance, operators surveil the environment and understand the audience that they are trying to manipulate. They require infrastructure—messengers, believable personas, social media accounts, and groups—to carry their narratives. A ceaseless flow of content, from posts and long-reads to photos, memes, and videos, is a must to ensure their messages seed, root, and grow. Once deployed into the stream of the internet, these units of disinformation are amplified by bots, platform algorithms, and social-engineering techniques to spread the campaign’s narratives. But blasting disinformation is not always enough: broad impact comes from sustained engagement with unwitting users through trolling—the disinformation equivalent of hand-to-hand combat. In its final stage, a disinformation operation is actualized by changing the minds of unwitting targets or even mobilizing them to action to sow chaos.
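The stages described above can be sketched as an ordered pipeline. Since the passage does not spell out the full RICHDATA acronym, this sketch lists only the stages the text names; it is a reading aid, not the framework's official definition.

```python
from enum import Enum, auto

class CampaignStage(Enum):
    """Stages of a disinformation campaign, as described in the passage above.

    Partial sketch: only the stages the text names are listed, in the
    order the passage presents them.
    """
    RECONNAISSANCE = auto()   # surveil the environment, profile the audience
    INFRASTRUCTURE = auto()   # personas, accounts, and groups to carry narratives
    CONTENT = auto()          # posts, long-reads, photos, memes, videos
    AMPLIFICATION = auto()    # bots, platform algorithms, social engineering
    TROLLING = auto()         # sustained engagement with unwitting users
    ACTUALIZATION = auto()    # change minds or mobilize targets to action

# Iterating the Enum yields the stages in pipeline order
pipeline = [stage.name for stage in CampaignStage]
print(pipeline)
```

Modeling the stages as an ordered type makes the "kill chain" framing concrete: each stage depends on the outputs of the previous one, so defenders can disrupt a campaign by breaking any single link.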

Regardless of origin, disinformation campaigns that grow an organic following can become endemic to a society and indistinguishable from its authentic discourse. They can undermine a society’s ability to discern fact from fiction, creating a lasting trust deficit.

JAMES P. RUBIN was Senior Adviser to U.S. Secretaries of State Antony Blinken and Madeleine Albright and served as Special Envoy and Coordinator of the State Department’s Global Engagement Center during the Biden administration. He is a co-host, with Christiane Amanpour, of the podcast The Ex Files.

DARJAN VUJICA was Director of Analytics at the U.S. State Department’s Global Engagement Center from 2019 to 2021 and Emerging Technology Coordinator at the U.S. Embassy in New Delhi from 2024 to 2025. In June, the secure Signal account of a European foreign minister pinged with a text message. The sender claimed to be U.S. Secretary of State Marco Rubio with an urgent request.

A short time later, two other foreign ministers, a U.S. governor, and a member of Congress received the same message, this time accompanied by a sophisticated voice memo impersonating Rubio. Although the communication appeared to be authentic, its tone matching what would be expected from a senior official, it was actually a malicious forgery—a deepfake, engineered with artificial intelligence by unknown actors. Had the lie not been caught, the stunt had the potential to sow discord, compromise American diplomacy, or extract sensitive intelligence from Washington’s foreign partners. This was not the last disquieting example of AI enabling malign actors to conduct information warfare—the manipulation and distribution of information to gain an advantage over an adversary. In August, researchers at Vanderbilt University revealed that a Chinese tech firm, GoLaxy, had used AI to build data profiles of at least 117 sitting U.S. lawmakers and over 2,000 American public figures. The data could be used to construct plausible AI-generated personas that mimic those figures and craft messaging campaigns that appeal to the psychological traits of their followers. GoLaxy’s goal, demonstrated in parallel campaigns in Hong Kong and Taiwan, was to build the capability to deliver millions of different, customized lies to millions of individuals at once. Disinformation is not a new problem, but the introduction of AI has made it significantly easier for malicious actors to develop increasingly effective influence operations, and to do so cheaply and at scale. In response, the U.S. government should be expanding and refining its tools for identifying and shutting down these campaigns.
