Responsibility of Stakeholders in Countering AI-Powered Disinformation

Bonisiwe Shabane

Edited by: Ludmilla Huntsman, Cognitive Security Alliance, United States
Reviewed by: J. D. Opdyke, DataMineit, LLC, United States; Hugh Lawson-Tancred, Birkbeck, University of London, United Kingdom
*Correspondence: Alexander Romanishyn, a.romanishyn@ise-group.org

Received 2025 Jan 31; Accepted 2025 Jun 30; Collection date 2025.

Artificial intelligence (AI) systems play an overarching role in the disinformation phenomenon our world currently faces. Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online.

Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending (bottom-up) co-regulation, the European Union (EU) is now heading toward descending (top-down) co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits of very large online platforms’ recommender systems and content moderation. While the Commission’s proposal focuses on regulating content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web, which is based on advertising revenues, and that adapting this model would reduce the problem. We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be better suited to countering the manipulation of the digital ecosystem.

This study aims to identify the right approach to tackling the disinformation problem online, with due consideration for ethical values, fundamental rights and freedoms, and democracy. While moderating content as such, and using AI systems to that end, may be particularly problematic regarding freedom of expression and information, we recommend countering the malicious use of technologies online to manipulate individuals. As considering the main cause of the effective manipulation of individuals online is paramount, the business model of the web should be on the radar screen of public regulation more than content moderation. Furthermore, we support a vibrant, independent, and pluralistic media landscape with investigative journalists following ethical rules.

Manipulation of truth is a recurring phenomenon throughout history.Footnote 1 Damnatio memoriae, namely the attempted erasure of people from history, is an example of purposive distortion of reality that was already practiced in Ancient Rome. Nevertheless, owing to the rapid advances in information and communication technologies (ICT) as well as their increasing pervasiveness, disingenuous information can now be produced easily and in a realistic format, and disseminated widely at unprecedented speed and scale.

The consequences are serious, with far-reaching implications. For instance, the media ecosystem has been leveraged to influence citizens’ opinions and voting decisions related to the 2016 US presidential electionFootnote 2 and the 2016 UK referendum on leaving the European Union (EU). In Myanmar, Facebook has been a useful instrument for those seeking to spread hate against Rohingya Muslims (Human Rights Council, 2018, para 74).Footnote 3 In India, rumors on WhatsApp resulted in several murders (Dixit...). In France, a virulent online campaign on social media against a professor ended with him being murdered (Bindner and Gluck, Reference Bindner and Gluck2020). Conspiracy theories are currently prospering.Footnote 4 And presently, in the context of Covid-19, we are facing what the World Health Organization (WHO) has called an infodemic,Footnote 5 with multiple adverse consequences. As commonly understood, disinformation is false, inaccurate, or misleading information that is shared with the intent to deceive the recipient,Footnote 6 as opposed to misinformation, which refers to false, inaccurate, or misleading information that is shared without such intent.

Whereas new digital technology and social media have amplified the creation and spread of both mis- and disinformation, only disinformation has been considered by the EU institutions as a threat that must be tackled. The disinformation problem is particular in the sense that, firstly, the shared information is intentionally deceptive in order to manipulate people and, secondly, to achieve this goal, its author benefits from the modern digital ecosystem. For these reasons, our analysis stays on the beaten path, hence the title of this article referring solely to the disinformation problem. It is also worth specifying that, unlike "fake news," a term that has been used by politicians and their supporters to dismiss coverage they find disagreeable, the disinformation problem encompasses various forms of fabricated information.

AI: A Double-Edged Sword for Democracy in the Information Age

The rapid advancement and proliferation of artificial intelligence (AI) present both unprecedented opportunities and significant challenges to the global information ecosystem and, consequently, to the foundations of democracy. This duality was a central theme addressed by Melissa Fleming, UN Under-Secretary-General for Global Communications, during her participation in the "The Day When AI Would Replace Democracy" panel at the 2024 Guadalajara International Book Fair. Fleming emphasized the urgent need for international cooperation and regulatory frameworks to harness AI’s potential while mitigating its risks, particularly in combating the escalating crisis of mis- and disinformation. Fleming acknowledged the transformative power of AI, highlighting its potential to accelerate progress toward the Sustainable Development Goals. She cited examples of AI-powered tools such as Food AI, HungerMap LIVE, and PulseSatellite, which are already contributing to humanitarian responses, climate action, and peacebuilding efforts. These examples demonstrate AI’s potential to address some of the world’s most pressing challenges.

However, she cautioned against the "dark side" of this technology, emphasizing the growing threat of AI-generated disinformation, including deepfakes used for political manipulation. This manipulation erodes public trust in information sources and democratic institutions, a trend observed in numerous elections worldwide in 2024. The proliferation of AI-generated fake news is exacerbating an already fragile information landscape. Fleming pointed to the declining traditional media business model, largely attributed to the influence of AI-driven algorithms on social media platforms. This decline contributes to an oversaturation of information, much of which is unverified or deliberately misleading. The resulting inability of the public to distinguish truth from fiction further undermines trust in credible information sources, a cornerstone of democratic societies.
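The dynamic described above, in which engagement-optimised algorithms crowd out sober reporting, can be illustrated with a toy ranking sketch. The posts and engagement scores below are invented for the example; this is not any platform's actual algorithm, only a minimal illustration of what happens when the ranking objective is attention rather than accuracy.

```python
# Toy illustration (invented posts and scores): a feed ranked purely by
# predicted engagement places sensational items above routine reporting,
# regardless of accuracy, because advertising revenue scales with attention.

posts = [
    {"title": "City council passes budget after public hearing", "predicted_engagement": 0.08},
    {"title": "You won't BELIEVE what they are hiding from you!", "predicted_engagement": 0.61},
    {"title": "Health agency publishes routine vaccine safety data", "predicted_engagement": 0.05},
    {"title": "Leaked 'secret plan' sparks outrage (unverified)", "predicted_engagement": 0.47},
]

# The ranking objective is engagement, not accuracy or public value.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f'{post["predicted_engagement"]:.2f}  {post["title"]}')
```

Under this objective, the two sensational items rise to the top of the feed while the two factual items sink, which is the structural pressure on the traditional media business model that Fleming describes.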

Fleming urged audiences to actively support reliable media outlets, emphasizing the importance of media literacy and critical thinking in navigating the complex digital landscape. Addressing the need for global governance in the face of these challenges, Fleming called for inclusive and equitable frameworks that prioritize human rights and the needs of vulnerable populations. She highlighted the UN’s Global Digital Compact, a landmark agreement aimed at fostering international cooperation on AI governance and digital inclusion. The compact proposes the establishment of an International Scientific Panel on AI and Emerging Technologies, modeled after the Intergovernmental Panel on Climate Change (IPCC), to conduct independent, evidence-based assessments of AI’s risks and opportunities. This panel would bring together experts from various disciplines to ensure that AI development benefits all of humanity. Despite being used to create deepfakes, AI can also be used to combat misinformation and disinformation.

The proliferation of artificial intelligence (AI) in the digital age has ushered in both remarkable innovations and unique challenges, particularly in the realm of information integrity. AI technologies, with their capability to generate convincing fake texts, images, audio, and videos (often referred to as 'deepfakes'), make it significantly harder to distinguish authentic content from synthetic creations. This capability lets wrongdoers automate and expand disinformation campaigns, greatly increasing their reach and impact. However, AI is not a villain in this story: it also plays a crucial role in combating disinformation and misinformation.

Advanced AI-driven systems can analyse patterns, language use, and context to aid in content moderation, fact-checking, and the detection of false information. AI analysis of content could also help distinguish misinformation (the unintentional spread of falsehoods) from disinformation (the deliberate spread), a distinction that is crucial for effective countermeasures.
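The kind of pattern and language analysis described above can be sketched, in heavily simplified form, as a bag-of-words Naive Bayes text classifier. The labels, training snippets, and vocabulary below are invented for illustration; real detection systems are trained on large labelled corpora and combine many more signals (source reputation, propagation patterns, fact-check databases) than word frequencies alone.

```python
from collections import Counter
import math

# Minimal sketch of text-based false-information flagging: a Naive Bayes
# classifier over bags of words. All training data here is invented.
TRAIN = [
    ("shocking secret cure they dont want you to know", "suspect"),
    ("miracle remedy banned by doctors share before deleted", "suspect"),
    ("study published in peer reviewed journal finds modest effect", "credible"),
    ("officials confirm figures in quarterly report", "credible"),
]

def train(examples):
    """Count words per label and examples per label."""
    word_counts = {"suspect": Counter(), "credible": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-probability for the text."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    total = sum(label_counts.values())
    scores = {}
    for label, counts in word_counts.items():
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(counts.values()) + len(vocab)
        for word in text.split():
            score += math.log((counts[word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("shocking miracle cure doctors hide", word_counts, label_counts))
```

A classifier this small only flags wording reminiscent of its tiny training set; the point is the shape of the technique, not its accuracy. In production, such a model would at most surface candidates for human fact-checkers rather than make moderation decisions on its own.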
