Combating AI-Generated Disinformation with Artificial Intelligence
The Rise of Misinformation and Its Impact on Organizations

The digital age, while connecting the world in unprecedented ways, has also unleashed a torrent of misinformation, often amplified by the very algorithms designed to promote sharing on social media platforms. Events such as Brexit, the 2016 US elections, and the COVID-19 pandemic have highlighted the potent influence of manipulated narratives and "fake news" on public opinion. This phenomenon extends beyond the political sphere, touching discussions on everything from environmental concerns to technological advancements. The rapid spread of false information online, often fueled by sensationalism and confirmation bias, poses a significant threat to organizations, potentially damaging their reputation, internal culture, and productivity. False narratives can erode trust among employees, leading to decreased collaboration, increased conflict, and ultimately a decline in overall performance.
The modern information landscape demands proactive strategies to combat misinformation and safeguard organizational health.

The Erosion of Trust and the Need for Intervention

The proliferation of misinformation creates an environment of distrust, mirroring the dynamics of a financial bank run. Individuals, influenced by false narratives, may react swiftly and negatively toward organizations, damaging brand loyalty and employee morale. This erosion of trust, driven by the "divide-and-conquer" strategy inherent in much of the misinformation spread online, can severely disrupt internal operations. Surveys reveal widespread concern about fake news in the workplace, with a noticeable increase in associated negative behaviors such as criticism, dismissal of ideas, and even outright lying.
This atmosphere of suspicion hinders open communication and collaboration, critical elements for organizational success. Trust is the bedrock of a productive and innovative work environment. When employees trust their employers and each other, they are more motivated, engaged, and likely to contribute effectively. Conversely, a lack of trust stifles creativity, impedes decision-making, and ultimately, undermines the organization’s ability to thrive. The human tendency towards sensationalism, coupled with the speed at which information travels online, makes us particularly vulnerable to misinformation. Studies have shown that false news spreads significantly faster than true news on social media platforms, exploiting our cognitive biases and preference for emotionally charged content.
However, the same technology that facilitates the spread of misinformation also offers tools to combat it. Artificial intelligence, particularly in the form of large language models (LLMs), presents a powerful defense against fake news. AI is not swayed by emotion in the way humans are, offering a more systematic approach to information analysis, although models can still inherit biases from their training data. LLMs can draw on vast datasets, cross-referencing claims against verified facts and historical data to identify inconsistencies and potential falsehoods. This ability to rapidly process and analyze information makes AI a valuable ally in the fight against misinformation.

Edited by: Ludmilla Huntsman, Cognitive Security Alliance, United States
Reviewed by: J. D. Opdyke, DataMineit, LLC, United States; Hugh Lawson-Tancred, Birkbeck University of London, United Kingdom
*Correspondence: Alexander Romanishyn, a.romanishyn@ise-group.org
Received: 31 January 2025; Accepted: 30 June 2025; Collection date: 2025.
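The cross-referencing described above, in which a claim is checked against a store of verified facts, can be illustrated with a deliberately simple sketch. The fact store, the token-overlap similarity measure, and the review threshold below are all hypothetical stand-ins for the large-scale retrieval and reasoning an LLM-based pipeline would actually perform.

```python
"""Toy sketch: flag claims with little support in a verified-fact store.

Illustration only; a real system would use retrieval over large corpora
and an LLM to judge entailment, not bag-of-words overlap.
"""
import re

# Hypothetical "verified facts" corpus (in practice: a retrieval index).
VERIFIED_FACTS = [
    "the covid-19 vaccine does not alter human dna",
    "the 2020 us election results were certified by all fifty states",
]

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_overlap(claim: str) -> float:
    """Jaccard similarity between the claim and its closest verified fact."""
    c = tokens(claim)
    return max(len(c & tokens(f)) / len(c | tokens(f)) for f in VERIFIED_FACTS)

def needs_review(claim: str, threshold: float = 0.2) -> bool:
    """Flag claims that no verified fact supports, for human review."""
    return best_overlap(claim) < threshold

print(needs_review("aliens built the pyramids last year"))  # no support -> True
```

A production pipeline would replace the overlap score with semantic retrieval and an entailment check, but the shape is the same: retrieve candidate facts, compare, and escalate weakly supported claims to a human reviewer.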
Published online by Cambridge University Press: 25 November 2021

Artificial intelligence (AI) systems play an overarching role in the disinformation phenomenon our world is currently facing. Such systems exacerbate the problem not only by increasing the opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to targeted audiences and at scale… This situation raises multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online. Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information.
Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits of very large online platforms' recommender systems and content moderation. While the Commission's proposal focuses on regulating content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web, which is based on advertising revenues, and that adapting this model would reduce… We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem. This study aims to identify the right approach to tackling the disinformation problem online with due consideration for ethical values, fundamental rights and freedoms, and democracy.
While moderating content as such, and using AI systems to that end, may be particularly problematic with regard to freedom of expression and information, we recommend countering the malicious use of technologies online to manipulate individuals. Since addressing the main cause of the effective manipulation of individuals online is paramount, the business model of the web should be on the radar screen of public regulation more than content moderation. Furthermore, we support a vibrant, independent, and pluralistic media landscape with investigative journalists following ethical rules.

Manipulation of truth is a recurring phenomenon throughout history. Damnatio memoriae, the attempted erasure of people from history, is an example of purposive distortion of reality that was already practiced in Ancient… Nevertheless, owing to rapid advances in information and communication technologies (ICT) and their increasing pervasiveness, disingenuous information can now be produced easily and in a realistic format, and its dissemination to… The consequences are serious, with far-reaching implications.
For instance, the media ecosystem has been leveraged to influence citizens' opinions and voting decisions in the 2016 US presidential election and the 2016 UK referendum on leaving the European Union (EU)… In Myanmar, Facebook has been a useful instrument for those seeking to spread hate against Rohingya Muslims (Human Rights Council, 2018, para. 74). In India, rumors on WhatsApp resulted in several murders (Dixit…). In France, a virulent online campaign on social media against a professor ended with him being murdered (Bindner and Gluck, 2020). Conspiracy theories are currently prospering. And presently, in the context of COVID-19, we are facing what the World Health Organization (WHO) has called an infodemic, with multiple adverse… As commonly understood, disinformation is false, inaccurate, or misleading information shared with the intent to deceive the recipient, as opposed to misinformation, which refers to false, inaccurate, or misleading information that… Whereas new digital technology and social media have amplified the creation and spread of both mis- and disinformation, only disinformation has been considered by the EU institutions as a threat that must be tackled…
The disinformation problem is particular in the sense that, firstly, the shared information is intentionally deceptive in order to manipulate people and, secondly, to achieve his or her goal, its author benefits from the modern… For these reasons, our analysis stays on the beaten path, hence the title of this article referring solely to the disinformation problem. It is also worth specifying that unlike "fake news," a term that has been used by politicians and their supporters to dismiss coverage they find disagreeable, the disinformation problem encompasses various kinds of fabricated information…

In the rapidly evolving digital age, the proliferation of disinformation and misinformation poses significant challenges to societal trust and information integrity. Recognizing the urgency of addressing this issue, this systematic review explores the role of artificial intelligence (AI) in combating the spread of false information. The study provides a comprehensive analysis of how AI technologies were used from 2014 to 2024 to detect, analyze, and mitigate the impact of misinformation across various platforms.
This research involved an exhaustive search across prominent databases, including ProQuest, IEEE Xplore, Web of Science, and Scopus. Articles published within the specified timeframe were meticulously screened, resulting in the identification of 8,103 studies. Through the elimination of duplicates and screening based on title, abstract, and full-text review, this vast pool was distilled to 76 studies that met the eligibility criteria. Key findings from the review highlight both the advances and the challenges in applying AI to combat misinformation: AI can enhance information verification through sophisticated algorithms and natural language processing, and integrating human oversight with continual algorithm refinement is pivotal to augmenting AI's effectiveness in discerning and countering misinformation.
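The duplicate-elimination step of a screening pipeline like the one described can be sketched roughly as follows. The record titles and the title-normalization rule here are illustrative assumptions, not the protocol the review actually used.

```python
"""Toy sketch of the duplicate-elimination step in a systematic review.

Records exported from multiple databases often describe the same article
with small formatting differences; normalizing titles catches these.
"""
import re

def normalize(title: str) -> str:
    """Case-fold and collapse punctuation/whitespace so near-identical
    titles from different databases compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first record seen for each normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical hits: two databases returning the same article.
hits = [
    {"title": "AI and Misinformation Detection", "source": "Scopus"},
    {"title": "AI and misinformation detection.", "source": "Web of Science"},
    {"title": "Deepfake Detection Benchmarks", "source": "IEEE Xplore"},
]
print(len(deduplicate(hits)))  # 2
```

Real screening tools additionally match on DOI and author lists, but title normalization alone removes a large share of cross-database duplicates before the manual title/abstract review begins.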
By fostering collaboration across sectors and leveraging the insights gleaned from this study, researchers can propel the development of ethical and effective AI solutions. The data presented in this study are available on request from the corresponding author.
JAMES P. RUBIN was Senior Adviser to U.S. Secretaries of State Antony Blinken and Madeleine Albright and served as Special Envoy and Coordinator of the State Department's Global Engagement Center during the Biden administration. He is a co-host, with Christiane Amanpour, of the podcast The Ex Files.

DARJAN VUJICA was Director of Analytics at the U.S. State Department's Global Engagement Center from 2019 to 2021 and Emerging Technology Coordinator at the U.S. Embassy in New Delhi from 2024 to 2025.

In June, the secure Signal account of a European foreign minister pinged with a text message. The sender claimed to be U.S. Secretary of State Marco Rubio with an urgent request. A short time later, two other foreign ministers, a U.S. governor, and a member of Congress received the same message, this time accompanied by a sophisticated voice memo impersonating Rubio.
Although the communication appeared to be authentic, its tone matching what would be expected from a senior official, it was actually a malicious forgery—a deepfake, engineered with artificial intelligence by unknown actors. Had the lie not been caught, the stunt had the potential to sow discord, compromise American diplomacy, or extract sensitive intelligence from Washington’s foreign partners. This was not the last disquieting example of AI enabling malign actors to conduct information warfare—the manipulation and distribution of information to gain an advantage over an adversary. In August, researchers at Vanderbilt University revealed that a Chinese tech firm, GoLaxy, had used AI to build data profiles of at least 117 sitting U.S. lawmakers and over 2,000 American public figures. The data could be used to construct plausible AI-generated personas that mimic those figures and craft messaging campaigns that appeal to the psychological traits of their followers.
GoLaxy’s goal, demonstrated in parallel campaigns in Hong Kong and Taiwan, was to build the capability to deliver millions of different, customized lies to millions of individuals at once. Disinformation is not a new problem, but the introduction of AI has made it significantly easier for malicious actors to develop increasingly effective influence operations and to do so cheaply and at scale. In response, the U.S. government should be expanding and refining its tools for identifying and shutting down these campaigns. Instead, the Trump administration has been disarming, scaling back U.S. defenses against foreign disinformation and leaving the country woefully unprepared to handle AI-powered attacks.
Unless the U.S. government reinvests in the institutions and expertise needed to counter information warfare, digital influence campaigns will progressively undermine public trust in democratic institutions, processes, and leadership—threatening to deliver American democracy a death by a thousand cuts.