Countering AI-Powered Disinformation Through National Regulation
Edited by: Ludmilla Huntsman, Cognitive Security Alliance, United States. Reviewed by: J. D. Opdyke, DataMineit, LLC, United States; Hugh Lawson-Tancred, Birkbeck University of London, United Kingdom.
*Correspondence: Alexander Romanishyn, a.romanishyn@ise-group.org. Received 2025 Jan 31; Accepted 2025 Jun 30; Collection date 2025.

AI: A Double-Edged Sword for Democracy in the Information Age

The rapid advancement and proliferation of artificial intelligence (AI) present both unprecedented opportunities and significant challenges to the global information ecosystem and, consequently, to the foundations of democracy. This duality was a central theme addressed by Melissa Fleming, UN Under-Secretary-General for Global Communications, during the "The Day When AI Would Replace Democracy" panel at the 2024 Guadalajara International... Fleming emphasized the urgent need for international cooperation and regulatory frameworks to harness AI's potential while mitigating its risks, particularly in combating the escalating crisis of mis- and disinformation.
Fleming acknowledged the transformative power of AI, highlighting its potential to accelerate progress towards achieving the Sustainable Development Goals. She cited examples of AI-powered tools like Food AI, HungerMap LIVE, and PulseSatellite, which are already contributing to humanitarian responses, climate action, and peacebuilding efforts. These examples demonstrate the potential of AI to address some of the world’s most pressing challenges. However, she cautioned against the "dark side" of this technology, emphasizing the growing threat of AI-generated disinformation, including deepfakes used for political manipulation. This manipulation erodes public trust in information sources and democratic institutions, a trend observed in numerous elections worldwide in 2024. The proliferation of AI-generated fake news is exacerbating an already fragile information landscape.
Fleming pointed to the declining traditional media business model, largely attributed to the influence of AI-driven algorithms on social media platforms. This decline contributes to an oversaturation of information, much of which is unverified or deliberately misleading. The resulting inability of the public to distinguish truth from fiction further undermines trust in credible information sources, a cornerstone of democratic societies. Fleming urged audiences to actively support reliable media outlets, emphasizing the importance of media literacy and critical thinking in navigating the complex digital landscape.

Addressing the need for global governance in the face of these challenges, Fleming called for inclusive and equitable frameworks that prioritize human rights and the needs of vulnerable populations. She highlighted the UN's Global Digital Compact, a landmark agreement aimed at fostering international cooperation on AI governance and digital inclusion.
The compact proposes the establishment of an International Scientific Panel on AI and Emerging Technologies, modeled after the Intergovernmental Panel on Climate Change (IPCC), to conduct independent, evidence-based assessments of AI's risks and opportunities. This panel would bring together experts from various disciplines to ensure that AI development benefits all of humanity.

In a September 2023 survey, the World Economic Forum found that 53% of respondents ranked AI-generated misinformation and disinformation as the second most likely threat to present a material crisis on a global scale in 2024. Artificial intelligence is automating the creation of fake news far faster than it can be fact-checked, spurring an explosion of web content that mimics factual articles while disseminating false information about serious topics such as elections, wars, and natural disasters.
According to a report by the Centre for the Study of Democratic Institutions, a Canadian think tank, the most prevalent effect of generative AI is the ability to flood the information ecosystem with misleading... As Democracy Reporting International found during the 2024 European Union elections, Google's Gemini, OpenAI's ChatGPT 3.5 and 4.0, and Microsoft's AI interface "CoPilot" were inaccurate one-third of the time when engaged... There is therefore a need for innovative regulatory approaches, such as regulatory sandboxes, that can address these challenges while encouraging responsible AI innovation.

AI-driven misinformation is false or misleading information created, amplified, or spread using artificial intelligence technologies. Machine learning models are leveraged to automate and scale the creation of false and deceptive content. Examples include deepfakes, AI-generated news articles, and bots that amplify false narratives on social media.
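To make the amplification pattern concrete, the toy sketch below illustrates one signal that such bot campaigns leave behind: large numbers of near-identical posts. This is a hypothetical, simplified heuristic written for this article, not a description of any deployed detection system; real platforms combine many signals (account age, posting cadence, network structure) and far more robust text matching.

```python
def _tokens(text: str) -> set:
    """Lowercase word set of a post, with leading/trailing punctuation stripped."""
    words = (w.strip(".,!?:;\"'").lower() for w in text.split())
    return {w for w in words if w}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two posts (0.0 to 1.0)."""
    ta, tb = _tokens(a), _tokens(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def flag_amplified(posts, threshold=0.8):
    """Return index pairs of posts that are near-duplicates of each other,
    a crude proxy for copy-paste amplification by coordinated accounts."""
    pairs = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if jaccard(posts[i], posts[j]) >= threshold:
                pairs.append((i, j))
    return pairs

posts = [
    "Breaking: the election results were falsified, share before it's deleted!",
    "BREAKING: the election results were falsified. Share before it's deleted!",
    "Lovely weather in Lisbon today.",
]
print(flag_amplified(posts))  # -> [(0, 1)]: the first two posts are near-identical
```

Even this crude word-overlap check catches verbatim reposting; adversaries who paraphrase each copy with an LLM defeat it, which is precisely why AI-generated campaigns are harder to detect than earlier bot networks.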
The biggest challenge lies in detecting and managing AI-driven misinformation. It is difficult to distinguish AI-generated content from authentic content, especially as these technologies advance rapidly. AI-driven misinformation can influence elections, public health, and social stability by spreading false or misleading information. While public adoption of the technology has undoubtedly been rapid, it has yet to achieve true acceptance or fulfill its positive potential because there is widespread cynicism about the technology... General public sentiment about AI is laced with concern and doubt about the technology's trustworthiness, mainly because no regulatory framework has matured on par with the technological development.

The CPD Blog is intended to stimulate dialogue among scholars and practitioners from around the world in the public diplomacy sphere.
The opinions represented here are the authors' own and do not necessarily reflect CPD's views.

Last week, The Economist published a review of the burgeoning AI companion industry. The industry is gaining momentum globally. Some individuals customize existing platforms like ChatGPT into romantic partners, specifying ages, professions (such as tech executive), and personality traits encompassing wit, dry humor, and... Others turn to AI companion applications that offer friendship, mentorship, or even therapeutic support. Character.ai, one of the most prominent platforms in this space, attracts 20 million monthly users in the United States alone.
American users have invested millions of hours engaging with the “Psychologist” bot, seeking guidance on intimacy challenges, depression, anxiety, and workplace exhaustion. According to The Economist, 42% of American high school students reported using AI as a “friend” within the past year. In China, the leading application “Maoxiang” has also attracted tens of millions of users. Major AI platforms, including ChatGPT, have also announced initiatives to cultivate more “personable” products through refined language and tone, while also introducing novel content such as erotica. Research indicates that LLMs (Large Language Models) are already becoming better companions by mimicking human emotions and empathy, thereby strengthening AI-human relationships. The allure of an AI companion is clear: the AI never forgets a detail, never misses an anniversary, never discourages or offends and is never offline.
Certain studies suggest that AI companions reduce feelings of loneliness and isolation, while other studies, including research at MIT, have found a correlation between intense use of ChatGPT and greater feelings of isolation. Nevertheless, AI companions may represent "the new social." As I noted in a previous post, studies and news reports assert that social media is becoming less social. Across age groups, users are withdrawing from sharing personal content on social media. The era of selfies, status updates, and location check-ins has ended. When individuals do share, they circulate content among small groups of friends through Instagram stories or WhatsApp groups.

JAMES P. RUBIN was Senior Adviser to U.S. Secretaries of State Antony Blinken and Madeleine Albright and served as Special Envoy and Coordinator of the State Department's Global Engagement Center during the Biden administration. He is a co-host, with Christiane Amanpour, of the podcast The Ex Files. DARJAN VUJICA was Director of Analytics at the U.S. State Department's Global Engagement Center from 2019 to 2021 and Emerging Technology Coordinator at the U.S. Embassy in New Delhi from 2024 to 2025.
In June, the secure Signal account of a European foreign minister pinged with a text message. The sender claimed to be U.S. Secretary of State Marco Rubio with an urgent request. A short time later, two other foreign ministers, a U.S. governor, and a member of Congress received the same message, this time accompanied by a sophisticated voice memo impersonating Rubio. Although the communication appeared to be authentic, its tone matching what would be expected from a senior official, it was actually a malicious forgery—a deepfake, engineered with artificial intelligence by unknown actors.
Had the lie not been caught, the stunt had the potential to sow discord, compromise American diplomacy, or extract sensitive intelligence from Washington’s foreign partners. This was not the last disquieting example of AI enabling malign actors to conduct information warfare—the manipulation and distribution of information to gain an advantage over an adversary. In August, researchers at Vanderbilt University revealed that a Chinese tech firm, GoLaxy, had used AI to build data profiles of at least 117 sitting U.S. lawmakers and over 2,000 American public figures. The data could be used to construct plausible AI-generated personas that mimic those figures and craft messaging campaigns that appeal to the psychological traits of their followers. GoLaxy’s goal, demonstrated in parallel campaigns in Hong Kong and Taiwan, was to build the capability to deliver millions of different, customized lies to millions of individuals at once.
Disinformation is not a new problem, but the introduction of AI has made it significantly easier for malicious actors to develop increasingly effective influence operations and to do so cheaply and at scale. In response, the U.S. government should be expanding and refining its tools for identifying and shutting down these campaigns. Instead, the Trump administration has been disarming, scaling back U.S. defenses against foreign disinformation and leaving the country woefully unprepared to handle AI-powered attacks. Unless the U.S.
government reinvests in the institutions and expertise needed to counter information warfare, digital influence campaigns will progressively undermine public trust in democratic institutions, processes, and leadership—threatening to deliver American democracy a death by a...

Last Tuesday, a draft executive order (EO) from the White House overriding states' artificial intelligence (AI) laws was leaked. The draft EO was a surprise, considering that a moratorium on state AI laws was voted down 99-1 by the U.S. Senate on July 1, and the White House then said on July 10 that the federal government would not interfere with states' rights if they pass "prudent AI laws." While the White House stated last week that another AI-related EO was only speculation, the leaked draft EO raises several issues for AI governance — the combination of principles, laws, and policies that relate... Yesterday, the White House pivoted from the draft EO to issue a fact sheet to "accelerate AI for scientific discovery" and an EO launching the Genesis Mission, an integrated platform designed to harness datasets...
The new EO includes directives to combine efforts with the private sector and incorporate security standards. The draft EO reiterates the AI Action Plan edict that “national security demands that we win this [AI] race” and asserts that state legislatures have introduced over 1,000 AI bills that threaten to undermine... The draft declares that the White House “will act to ensure that there is a minimally burdensome national standard — not 50 discordant state ones.” Of the more than “1,000 AI bills” the draft EO threatens to override, it calls out two consumer privacy AI laws — California’s Transparency in Frontier Artificial Intelligence Act (i.e., Senate Bill 53) and...