Towards an AI-Based Counter-Disinformation Framework

Bonisiwe Shabane

In this blog post, Linda Slapakova discusses the various roles that AI plays in counter-disinformation efforts, the prevailing shortfalls of AI-based counter-disinformation tools, and the technical, governance and regulatory barriers to their uptake.

Disinformation has become a defining feature of the COVID-19 crisis. With social media bots (i.e. automated agents engaging on social networks) nearly twice as active during COVID-19 as during past crises and national elections, the public and private sectors have struggled to address the rapid spread of false information. This has highlighted the need for effective, innovative tools to detect disinformation and to strengthen institutional and societal resilience against it. Leveraging Artificial Intelligence (AI) represents one avenue for the development and use of such tools.

To provide a holistic assessment of the opportunities of an AI-based counter-disinformation framework, this blog first discusses the various roles that AI plays in counter-disinformation efforts. Next, it discusses the prevailing shortfalls of AI-based counter-disinformation tools and the technical, governance and regulatory barriers to their uptake, and how these could be addressed to foster the uptake of AI-based solutions for countering disinformation.

Emerging technologies, including AI, are often described as a double-edged sword in relation to information threats. On the one hand, emerging technologies can enable more sophisticated online information threats and often lower the barriers to entry for malign actors. On the other hand, they can provide significant opportunities for countering such threats. This has been no less true in the case of AI and disinformation.

Though the majority of malign information on social media is spread by relatively simple bot technology, existing evidence suggests that AI is being leveraged for more sophisticated online manipulation techniques. The extent of the use of AI in this context is difficult to measure, but many information security experts believe that AI is already being leveraged by malign actors, for example to better determine 'what to attack, who to attack, [and] when to attack'. This enables more targeted attacks and thus more effective information threats, including disinformation campaigns. Recent advances in AI techniques such as Natural Language Processing (NLP) have also given rise to concerns that AI may be used to create more authentic synthetic text (e.g. fake social media posts, articles and documents).

Moreover, deepfakes (i.e. the leveraging of AI to create highly authentic and realistic manipulated audio-visual material) represent a prominent example of an image-based, AI-enabled information threat.

The Rise of AI in the Fight Against Disinformation: A Collaborative Approach

The digital age, characterized by the proliferation of online platforms and social media, has witnessed an unprecedented surge in the spread of disinformation.

Traditional methods of media regulation and censorship struggle to keep pace with the speed and scale of this challenge. Artificial intelligence (AI) is increasingly viewed as a crucial tool in combating this infodemic, offering the potential to automate the detection and flagging of misleading information. International organizations, governments, and private companies are investing heavily in AI-powered solutions. However, the success of these initiatives hinges on a collaborative approach, integrating top-down interventions with bottom-up empowerment of journalists and civil society organizations.

Mapping the AI-Powered Anti-Disinformation Landscape: A Web-Based Approach

To understand the evolving landscape of AI-driven anti-disinformation initiatives, researchers employed a web mapping methodology.

This approach leverages the interconnected nature of the web, utilizing hyperlinks as proxies for social connections. By analyzing the citation structure of websites dedicated to combating disinformation with AI, researchers gain insight into the networks of actors involved and their respective strategies. This study specifically examined 81 websites actively engaged in developing or utilizing AI against disinformation, creating a network map visualizing their interconnections.

Unveiling Three Distinct Clusters: Europe, the US, and Fact-Checking Agencies

In the rapidly evolving digital age, the proliferation of disinformation and misinformation poses significant challenges to societal trust and information integrity. Recognizing the urgency of addressing this issue, this systematic review endeavors to explore the role of artificial intelligence (AI) in combating the spread of false information.
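The hyperlink-mapping method described above treats each outbound link as an edge in a directed graph and sites with many inbound links as central actors. A minimal sketch of that idea follows; the domain names are invented purely for illustration, since the study's 81 actual sites are not reproduced here.

```python
from collections import defaultdict

# Hypothetical hyperlink data: each key links out to the listed sites.
# These domains are invented; they stand in for the study's real dataset.
links = {
    "eu-observatory.example":   ["factcheck-agency.example", "us-lab.example"],
    "us-lab.example":           ["factcheck-agency.example"],
    "factcheck-agency.example": ["eu-observatory.example"],
    "ngo-monitor.example":      ["factcheck-agency.example", "eu-observatory.example"],
}

def in_degree(links):
    """Count inbound hyperlinks per site, a rough proxy for how
    central an actor is in the counter-disinformation network."""
    degree = defaultdict(int)
    for source, targets in links.items():
        degree.setdefault(source, 0)  # keep sites that nobody links to
        for target in targets:
            degree[target] += 1
    return dict(degree)

degree = in_degree(links)
most_cited = max(degree, key=degree.get)  # the best-connected hub
```

In-degree alone already surfaces hub sites; identifying the European, US, and fact-checking groupings the study reports would additionally require a community-detection step over the same graph.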

This study aims to provide a comprehensive analysis of how AI technologies have been utilized from 2014 to 2024 to detect, analyze, and mitigate the impact of misinformation across various platforms. The research drew on an exhaustive search across prominent databases such as ProQuest, IEEE Xplore, Web of Science, and Scopus. Articles published within the specified timeframe were screened, yielding an initial pool of 8103 studies. Through elimination of duplicates and screening based on title, abstract, and full-text review, this pool was distilled to 76 studies that met the eligibility criteria. Key findings from the review emphasize the advancements and challenges in AI applications for combating misinformation. These findings highlight AI's capacity to enhance information verification through sophisticated algorithms and natural language processing.
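The "sophisticated algorithms and natural language processing" behind such verification systems often reduce, at their core, to supervised text classification: learn word statistics from labelled examples, then score new posts. The sketch below is purely illustrative; the labels and training snippets are invented toy data, not a real moderation model, and production systems use far richer features (account metadata, propagation patterns, learned embeddings).

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and keep alphabetic tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesFlagger:
    """Toy multinomial Naive Bayes that flags suspect posts."""

    def __init__(self):
        self.word_counts = {"reliable": Counter(), "suspect": Counter()}
        self.doc_counts = Counter()

    def train(self, text, label):
        self.word_counts[label].update(tokenize(text))
        self.doc_counts[label] += 1

    def score(self, text, label):
        total_docs = sum(self.doc_counts.values())
        log_prob = math.log(self.doc_counts[label] / total_docs)
        counts = self.word_counts[label]
        total = sum(counts.values())
        vocab = len(set(self.word_counts["reliable"]) | set(self.word_counts["suspect"]))
        for word in tokenize(text):
            # Laplace smoothing so unseen words do not zero out the score
            log_prob += math.log((counts[word] + 1) / (total + vocab))
        return log_prob

    def flag(self, text):
        """True when the post looks more like known suspect content."""
        return self.score(text, "suspect") > self.score(text, "reliable")

# Invented toy training data, for illustration only.
flagger = NaiveBayesFlagger()
for text, label in [
    ("miracle cure doctors hate this secret", "suspect"),
    ("shocking secret cure they hide", "suspect"),
    ("health ministry publishes vaccination schedule", "reliable"),
    ("study reports trial results", "reliable"),
]:
    flagger.train(text, label)
```

The review's point about human oversight applies directly here: a classifier like this only ranks content for review; the eventual labelling decision still needs people in the loop.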

They further emphasize that integrating human oversight and continual algorithm refinement is pivotal in augmenting AI's effectiveness in discerning and countering misinformation. By fostering collaboration across sectors and leveraging the insights gleaned from this study, researchers can propel the development of ethical and effective AI solutions.
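The review's funnel from 8103 retrieved records down to 76 included studies follows the standard systematic-review pattern: deduplicate, then screen on title and abstract before full-text review. A minimal sketch of those first two stages, using invented toy records rather than the review's actual corpus:

```python
import re

def normalize(title):
    """Normalise a title so near-identical records collapse together."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def screen(records, required_terms):
    """Drop duplicate titles, then keep records whose title or abstract
    mentions every required term (a crude stand-in for the manual
    title/abstract screening stage)."""
    seen, kept = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key in seen:
            continue  # duplicate of an earlier record
        seen.add(key)
        text = (rec["title"] + " " + rec["abstract"]).lower()
        if all(term in text for term in required_terms):
            kept.append(rec)
    return kept

# Invented toy records, for illustration only.
records = [
    {"title": "AI for Misinformation Detection", "abstract": "deep learning survey"},
    {"title": "AI for misinformation detection!", "abstract": "duplicate entry"},
    {"title": "Crop Yield Forecasting", "abstract": "agriculture models"},
]
kept = screen(records, ["misinformation"])
```

Real screening is, of course, a human judgement applied against explicit eligibility criteria; automated keyword filters like this one are only ever a first pass.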
