How AI Can Also Be Used To Combat Online Disinformation
Despite being used to create deepfakes, AI can also be used to combat misinformation and disinformation. (Image: Gilles Lambert/Unsplash)

The proliferation of artificial intelligence (AI) in the digital age has ushered in both remarkable innovations and unique challenges, particularly in the realm of information integrity. AI technologies, with their capability to generate convincing fake texts, images, audio and videos (often referred to as 'deepfakes'), make it significantly harder to distinguish authentic content from synthetic creations. This capability enables malicious actors to automate and scale disinformation campaigns, greatly increasing their reach and impact. However, AI is not a villain in this story.
It also plays a crucial role in combating disinformation and misinformation. Advanced AI-driven systems can analyse patterns, language use and context to aid in content moderation, fact-checking and the detection of false information. AI analysis of content can also help distinguish misinformation (the unintentional spread of falsehoods) from disinformation (the deliberate spread), a distinction that is crucial for effective countermeasures.

Published online by Cambridge University Press: 25 November 2021

Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing.
Such systems exacerbate the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by... This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online. Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending (bottom-up) co-regulation, the European Union (EU) is now heading toward descending (top-down) co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits of very large online platforms’ recommender systems and content moderation.
With this proposal, the Commission focuses on regulating content considered problematic, while the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the advertising-based business model of the web, and that adapting this model would reduce... We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem. This study aims at identifying the right approach to tackling the disinformation problem online with due consideration for ethical values, fundamental rights and freedoms, and democracy. While moderating content as such, and using AI systems to that end, may be particularly problematic with regard to freedom of expression and information, we recommend countering the malicious use of technologies online to manipulate individuals. Since addressing the main cause of the effective manipulation of individuals online is paramount, the business model of the web should be on the radar of public regulation more than content moderation.
Furthermore, we support a vibrant, independent, and pluralistic media landscape with investigative journalists following ethical rules.

Manipulation of truth is a recurring phenomenon throughout history.Footnote 1 Damnatio memoriae, namely the attempted erasure of people from history, is an example of purposive distortion of reality that was already practiced in Ancient... Nevertheless, owing to the rapid advances in information and communication technologies (ICT) as well as their increasing pervasiveness, disingenuous information can now be produced easily and in a realistic format, and its dissemination to... The consequences are serious, with far-reaching implications. For instance, the media ecosystem was leveraged to influence citizens’ opinions and voting decisions in the 2016 US presidential electionFootnote 2 and the 2016 UK referendum on leaving the European Union (EU)... In Myanmar, Facebook has been a useful instrument for those seeking to spread hate against Rohingya Muslims (Human Rights Council, 2018, para 74).Footnote 3 In India, rumors on WhatsApp resulted in several murders (Dixit...
In France, a virulent online campaign on social media against a professor ended with him being murdered (Bindner and Gluck, Reference Bindner and Gluck2020). Conspiracy theories are currently flourishing.Footnote 4 And presently, in the context of Covid-19, we are facing what the World Health Organization (WHO) has called an infodemic,Footnote 5 with multiple adverse... As commonly understood, disinformation is false, inaccurate or misleading information that is shared with the intent to deceive the recipient,Footnote 6 as opposed to misinformation, which refers to false, inaccurate, or misleading information that... Whereas new digital technology and social media have amplified the creation and spread of both mis- and disinformation, only disinformation has been considered by the EU institutions as a threat that must be tackled... The disinformation problem is particular in the sense that, firstly, the shared information is intentionally deceptive in order to manipulate people and, secondly, to achieve his or her goal, its author benefits from the modern... For these reasons, our analysis stays on the beaten path, hence the title of this article referring solely to the disinformation problem.
It is also worth specifying that unlike “fake news,” a term that has been used by politicians and their supporters to dismiss coverage they find disagreeable, the disinformation problem encompasses various forms of fabricated information...
In a new report, Freedom House documents the ways governments are now using the tech to amplify censorship. Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year. Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and videos to manipulate public opinion in their favor and to automatically censor critical online... In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.”
The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and... The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. “Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital...

In the rapidly evolving digital age, the proliferation of disinformation and misinformation poses significant challenges to societal trust and information integrity. Recognizing the urgency of addressing this issue, this systematic review endeavors to explore the role of artificial intelligence (AI) in combating the spread of false information.
This study aims to provide a comprehensive analysis of how AI technologies were utilized from 2014 to 2024 to detect, analyze, and mitigate the impact of misinformation across various platforms. The research drew on an exhaustive search across prominent databases such as ProQuest, IEEE Xplore, Web of Science, and Scopus. Articles published within the specified timeframe were screened, yielding 8103 studies. After eliminating duplicates and screening by title, abstract, and full text, this pool was distilled to 76 studies that met the eligibility criteria. Key findings from the review emphasize the advancements and challenges in AI applications for combating misinformation. These findings highlight AI’s capacity to enhance information verification through sophisticated algorithms and natural language processing.
They further emphasize that integrating human oversight and continually refining algorithms are pivotal to augmenting AI’s effectiveness in discerning and countering misinformation. By fostering collaboration across sectors and leveraging the insights gleaned from this study, researchers can propel the development of ethical and effective AI solutions.

Researchers Leverage AI to Combat the Rising Tide of Disinformation

In an era defined by the rapid dissemination of information online, the proliferation of disinformation poses a significant threat to democratic processes, public health, and societal cohesion. Recognizing the urgency of this challenge, researchers are increasingly turning to artificial intelligence (AI) as a powerful tool in the fight against misleading and fabricated content. These advanced technologies offer the potential to automatically detect, analyze, and even dismantle disinformation campaigns at a scale previously unimaginable. From identifying manipulated media to tracking the spread of false narratives, AI is emerging as a crucial ally in the battle for truth and accuracy online.
One of the key applications of AI in disinformation detection lies in its ability to analyze textual data. Natural language processing (NLP) algorithms can sift through vast quantities of text, identifying linguistic patterns and stylistic cues that often indicate fabricated or misleading content. For example, AI can detect the use of emotionally charged language, logical fallacies, and inconsistencies within a narrative. By analyzing the sentiment, tone, and context of online posts, AI can flag potentially problematic content for further review by human fact-checkers. This collaborative approach leverages the speed and scalability of AI while retaining the critical thinking and nuanced judgment of human experts. Further bolstering this approach, AI can also be used to analyze the source and propagation patterns of disinformation, helping to identify malicious actors and understand how false narratives spread across online networks.
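As a toy illustration of the stylistic-cue analysis described above, the sketch below scores posts by the share of emotionally charged words plus a couple of surface cues, flagging high scorers for human review. The lexicon, weights, and threshold here are invented for illustration; a real system would use trained NLP models rather than a hand-made word list.

```python
import re

# Hypothetical toy lexicon of emotionally charged terms. A production
# system would use a trained classifier, not a hand-made list.
CHARGED_TERMS = {"outrageous", "shocking", "destroy", "disaster", "traitor"}

def flag_post(text: str, threshold: float = 0.08) -> dict:
    """Score a post by the share of charged words, exclamation marks,
    and all-caps runs; flag it for human fact-checker review."""
    words = re.findall(r"[a-z']+", text.lower())
    charged = sum(1 for w in words if w in CHARGED_TERMS)
    exclaims = text.count("!")
    caps_runs = len(re.findall(r"\b[A-Z]{3,}\b", text))  # SHOUTED words
    score = charged / max(len(words), 1) + 0.02 * exclaims + 0.03 * caps_runs
    return {"score": round(score, 3), "flag": score >= threshold}

print(flag_post("SHOCKING!! They want to DESTROY everything we hold dear!"))
print(flag_post("The weather is mild today."))
```

The point of the sketch is the division of labor it encodes: the cheap automated score only triages content, and everything above the threshold still goes to a human reviewer.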
Beyond text analysis, AI is proving invaluable in the detection of manipulated media, such as deepfakes and other forms of synthetic content. These sophisticated manipulations, which can create realistic but entirely fabricated videos and images, pose a particularly potent threat in the disinformation landscape. AI algorithms are being trained to recognize subtle inconsistencies and artifacts within manipulated media, such as unnatural blinking patterns, distorted facial features, or inconsistencies in lighting and shadows. These algorithms can analyze the digital fingerprints of images and videos, helping to determine their authenticity and provenance. As deepfake technology becomes increasingly sophisticated, the development of robust AI-powered detection tools is becoming ever more critical. Furthermore, AI is playing a vital role in understanding the complex dynamics of disinformation campaigns.
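One intuition behind the artifact detection described above is that manipulated or synthesized regions can exhibit local high-frequency statistics that differ from natural imagery. The sketch below computes a crude Laplacian-based proxy for high-frequency content on toy grayscale grids; the grids and the measure are simplified assumptions, and real deepfake detectors are deep networks trained on forensic datasets.

```python
def high_freq_energy(img):
    """Mean absolute Laplacian response over a 2-D grayscale grid --
    a crude proxy for local high-frequency 'texture'."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            total += abs(lap)
            count += 1
    return total / count

smooth = [[10] * 8 for _ in range(8)]                              # flat patch
noisy = [[(x * 37 + y * 91) % 255 for x in range(8)] for y in range(8)]  # abrupt jumps
print(high_freq_energy(smooth), high_freq_energy(noisy))
```

A forensic pipeline would compare such statistics per-region against what natural images exhibit; an implausible outlier region is a candidate splice or synthesis artifact.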
By analyzing the spread of false narratives across social media platforms and online forums, researchers can gain valuable insights into the strategies and tactics employed by disinformation actors. AI can track the propagation of specific pieces of content, identify key influencers and amplifiers within a network, and map the interconnectedness of different disinformation campaigns. This information can be used to develop targeted interventions aimed at disrupting the spread of disinformation and mitigating its impact.
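The propagation analysis described above can be caricatured with a reshare log: counting who reshares the most (amplifiers) and whose content is reshared the most (influencers) already surfaces candidates for targeted intervention. The account names and log below are entirely hypothetical; real analyses run on platform-scale graphs with far richer features.

```python
from collections import Counter

# Hypothetical reshare log: (sharer, original_poster) pairs.
reshares = [
    ("bot_17", "src_a"), ("bot_17", "src_a"), ("bot_17", "src_b"),
    ("alice", "src_a"), ("bot_42", "src_a"), ("bot_42", "src_a"),
    ("bob", "src_b"),
]

# Amplifiers: accounts that reshare the most items.
amplifier_counts = Counter(sharer for sharer, _ in reshares)
# Influencers: accounts whose content is reshared the most.
influence_counts = Counter(src for _, src in reshares)

print(amplifier_counts.most_common(2))  # top amplifiers
print(influence_counts.most_common(1))  # most-amplified source
```

In practice the same counting, done over a directed graph with timestamps, is what lets researchers map how a narrative moved between accounts and which nodes an intervention should target.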