Combating AI-Driven Disinformation: Expert Strategies for Identifying ...
The Rise of AI-Generated Fake News: A Threat to Democratic Discourse

The advent of artificial intelligence, particularly Large Language Models (LLMs), has revolutionized the creation and dissemination of information. While offering immense potential benefits, this technological advancement has also amplified the spread of misinformation, posing a significant threat to democratic processes, especially during election cycles. With the 2024 elections approaching in the United States and other major democracies, concerns about the proliferation of AI-generated fake news have reached a fever pitch. The ability of these advanced algorithms to generate human-quality text, coupled with tools like Sora that can produce realistic video footage, makes distinguishing genuine news from fabricated content increasingly difficult.

The Mechanics of AI-Driven Disinformation
As Walid Saad, an engineering and machine learning expert at Virginia Tech, explains, the creation of fake news websites predates the AI revolution. However, AI, and LLMs in particular, has drastically simplified the process of generating seemingly credible articles and stories by automating the sifting of vast datasets and the crafting of convincing narratives. This AI-assisted refinement makes fake news sites more insidious and persuasive. The continued operation of these websites is fueled by the engagement they receive: as long as misinformation is shared widely on social media platforms, the individuals behind these operations will continue their deceptive practices.

Combating AI-Powered Fake News: A Multifaceted Approach
Edited by: Ludmilla Huntsman, Cognitive Security Alliance, United States
Reviewed by: J. D. Opdyke, DataMineit, LLC, United States; Hugh Lawson-Tancred, Birkbeck University of London, United Kingdom
*Correspondence: Alexander Romanishyn, a.romanishyn@ise-group.org
Received: 31 January 2025; Accepted: 30 June 2025; Collection date: 2025.

In the rapidly evolving digital age, the proliferation of disinformation and misinformation poses significant challenges to societal trust and information integrity. Recognizing the urgency of addressing this issue, this systematic review explores the role of artificial intelligence (AI) in combating the spread of false information. The study provides a comprehensive analysis of how AI technologies were used from 2014 to 2024 to detect, analyze, and mitigate the impact of misinformation across various platforms. The research drew on an exhaustive search of prominent databases, including ProQuest, IEEE Xplore, Web of Science, and Scopus. Articles published within that timeframe were screened, resulting in the identification of 8,103 studies.
After elimination of duplicates and screening based on title, abstract, and full-text review, this pool was narrowed to 76 studies that met the eligibility criteria. Key findings from the review emphasize both the advances and the challenges in AI applications for combating misinformation. They highlight AI's capacity to enhance information verification through sophisticated algorithms and natural language processing, and they further emphasize that the integration of human oversight and continual algorithm refinement is pivotal in augmenting AI's effectiveness at discerning and countering misinformation. By fostering collaboration across sectors and leveraging the insights gleaned from this study, researchers can propel the development of ethical and effective AI solutions.
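As an illustration of the deduplication step in such a screening pipeline, the hedged sketch below shows one way exported database records might be collapsed before title and abstract screening. The field names and sample records are assumptions for demonstration, not data from the review itself.

```python
# Illustrative sketch: deduplicating exported database records by DOI and
# normalized title before title/abstract screening. Field names are assumed.
import re


def normalize_title(title: str) -> str:
    """Lowercase a title and strip punctuation and extra whitespace for matching."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()


def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each record, matching on DOI or normalized title."""
    seen_keys = set()
    unique = []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec.get("title", ""))
        if key and key not in seen_keys:
            seen_keys.add(key)
            unique.append(rec)
    return unique


if __name__ == "__main__":
    sample = [
        {"title": "AI and Misinformation Detection", "doi": "10.1000/example.1"},
        {"title": "AI and misinformation detection!", "doi": "10.1000/example.1"},
        {"title": "Deepfake Detection Methods", "doi": ""},
    ]
    print(len(deduplicate(sample)))  # -> 2
```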
The data presented in this study are available on request from the corresponding author.
Researchers Leverage AI to Combat the Rising Tide of Disinformation

In an era defined by the rapid dissemination of information online, the proliferation of disinformation poses a significant threat to democratic processes, public health, and societal cohesion. Recognizing the urgency of this challenge, researchers are increasingly turning to artificial intelligence (AI) as a powerful tool in the fight against misleading and fabricated content. These advanced technologies offer the potential to automatically detect, analyze, and even dismantle disinformation campaigns at a scale previously unimaginable. From identifying manipulated media to tracking the spread of false narratives, AI is emerging as a crucial ally in the battle for truth and accuracy online. One of the key applications of AI in disinformation detection lies in its ability to analyze textual data.
Natural language processing (NLP) algorithms can sift through vast quantities of text, identifying linguistic patterns and stylistic cues that often indicate fabricated or misleading content. For example, AI can detect the use of emotionally charged language, logical fallacies, and inconsistencies within a narrative. By analyzing the sentiment, tone, and context of online posts, AI can flag potentially problematic content for further review by human fact-checkers. This collaborative approach leverages the speed and scalability of AI while retaining the critical thinking and nuanced judgment of human experts. Further bolstering this approach, AI can also be used to analyze the source and propagation patterns of disinformation, helping to identify malicious actors and understand how false narratives spread across online networks. Beyond text analysis, AI is proving invaluable in the detection of manipulated media, such as deepfakes and other forms of synthetic content.
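As a rough illustration of the kind of linguistic-cue flagging described above, the sketch below scores posts for emotionally charged wording and routes high-scoring items to human review. The term list and threshold are toy assumptions for demonstration; production systems typically rely on trained classifiers rather than hand-picked keywords.

```python
# Illustrative sketch: flagging posts whose language suggests they deserve
# human fact-checker review. The word list and threshold are toy examples,
# not a production lexicon.
CHARGED_TERMS = {
    "shocking", "outrageous", "exposed", "destroyed", "banned",
    "they don't want you to know", "wake up", "cover-up",
}


def charged_language_score(text: str) -> float:
    """Return a crude density of emotionally charged cues per word."""
    lowered = text.lower()
    hits = sum(1 for term in CHARGED_TERMS if term in lowered)
    exclamations = min(lowered.count("!"), 3)
    words = max(len(lowered.split()), 1)
    return (hits + exclamations) / words


def flag_for_review(text: str, threshold: float = 0.08) -> bool:
    """Route a post to human review when its score exceeds the threshold."""
    return charged_language_score(text) > threshold


posts = [
    "SHOCKING cover-up EXPOSED!!! Wake up before it's too late!",
    "The city council approved the new transit budget on Tuesday.",
]
for post in posts:
    print(flag_for_review(post), "-", post[:45])
```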
These sophisticated manipulations, which can create realistic but entirely fabricated videos and images, pose a particularly potent threat in the disinformation landscape. AI algorithms are being trained to recognize subtle inconsistencies and artifacts within manipulated media, such as unnatural blinking patterns, distorted facial features, or inconsistencies in lighting and shadows. These algorithms can analyze the digital fingerprints of images and videos, helping to determine their authenticity and provenance. As deepfake technology becomes increasingly sophisticated, the development of robust AI-powered detection tools is becoming ever more critical. Furthermore, AI is playing a vital role in understanding the complex dynamics of disinformation campaigns. By analyzing the spread of false narratives across social media platforms and online forums, researchers can gain valuable insights into the strategies and tactics employed by disinformation actors.
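Deepfake detectors themselves are typically trained neural networks, but a simpler example of analyzing an image's "digital fingerprints" is error level analysis (ELA), a classic forensic heuristic: re-saving a JPEG and inspecting where the compression residual differs can hint at regions edited after the original save. The sketch below assumes Pillow is installed and uses a hypothetical filename; it demonstrates the idea rather than the detectors described above.

```python
# Illustrative sketch: error level analysis (ELA), one classic heuristic for
# spotting regions of an image that were edited after its last JPEG save.
from PIL import Image, ImageChops
import io


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-compress the image and return the amplified pixel-wise difference."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    # Amplify the residual so edited regions stand out visually.
    return diff.point(lambda value: min(255, value * 15))


if __name__ == "__main__":
    # "suspect_photo.jpg" is a hypothetical input file for illustration.
    ela_map = error_level_analysis("suspect_photo.jpg")
    ela_map.save("suspect_photo_ela.png")
```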
AI can track the propagation of specific pieces of content, identify key influencers and amplifiers within a network, and map the interconnectedness of different disinformation campaigns. This information can be used to develop targeted interventions aimed at disrupting the spread of disinformation and mitigating its impact.
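As a minimal sketch of this kind of propagation analysis, the example below builds a toy resharing network and ranks accounts by centrality to surface likely amplifiers. The edge list is invented for illustration; a real analysis would ingest platform share or repost records.

```python
# Illustrative sketch: ranking accounts that amplify a false narrative by how
# central they are in its resharing network. Edges and names are toy data.
import networkx as nx

# Directed edges: (sharer, account_they_reshared_from)
reshares = [
    ("user_a", "influencer_1"), ("user_b", "influencer_1"),
    ("user_c", "influencer_1"), ("user_d", "influencer_2"),
    ("influencer_2", "influencer_1"), ("user_e", "influencer_2"),
]

graph = nx.DiGraph()
graph.add_edges_from(reshares)

# In-degree centrality highlights accounts whose posts get reshared the most;
# PageRank weighs reshares coming from well-connected accounts more heavily.
in_degree = nx.in_degree_centrality(graph)
pagerank = nx.pagerank(graph)

for account in sorted(pagerank, key=pagerank.get, reverse=True)[:3]:
    print(f"{account}: pagerank={pagerank[account]:.3f}, "
          f"in_degree={in_degree[account]:.3f}")
```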
AI also makes it easier to create disinformation and false or decontextualized content, and to spread it quickly through existing channels. In an information ecosystem where misinformation circulates faster than fact-checkers can respond, increasingly precise and efficient tools are needed to verify content, detect hoaxes, and understand how false narratives spread. The following list brings together five tools that media outlets and fact-checking organizations use for tasks ranging from tracking disinformation and analyzing its dissemination patterns to recovering deleted content and analyzing audiovisual material.

Google has developed an ecosystem of fact-checking tools, some for fact-checkers specifically and others for the general public. The flagship tool is Fact Check Explorer, a specialized search engine that compiles claim reviews from multiple fact-checking organizations worldwide, including Chequeado (Argentina), Bolivia Verifica (Bolivia), El Sabueso (Mexico), and Cotejo.info (Venezuela). It allows users to enter a phrase, a piece of data, or a link to check whether someone has already verified it.
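For programmatic access, Fact Check Explorer is backed by the Google Fact Check Tools API. The sketch below queries its claims:search endpoint with the requests library; the API key and query string are placeholders, and the endpoint and response fields should be confirmed against Google's current API documentation.

```python
# Illustrative sketch: querying the Google Fact Check Tools API (which backs
# Fact Check Explorer) for existing claim reviews. Requires an API key from
# Google Cloud; the key and query below are placeholders.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"


def search_claims(query: str, language: str = "es") -> list[dict]:
    """Return claim reviews already published for a given query string."""
    params = {"query": query, "languageCode": language, "key": API_KEY}
    response = requests.get(ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    return response.json().get("claims", [])


if __name__ == "__main__":
    for claim in search_claims("vacunas"):
        review = claim.get("claimReview", [{}])[0]
        print(claim.get("text"), "->", review.get("textualRating"),
              review.get("publisher", {}).get("name"))
```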