The Effect of AI on Elections Around the World and What to Do About It
As more than 50 countries prepare for elections this year, artificial intelligence–generated media has begun to play a variety of roles in political campaigns, ranging from nefarious to innocuous to positive. With six months until the U.S. general election, examining the use of AI in this year’s major global elections provides Americans with insights on what to expect in our own election and how election officials, legislators, and civil society should respond. Governments and civil society must work to fortify the electorate against such threats. Tactics include immediate actions, such as publicizing corrective information and strengthening online safeguards, as well as legislation, such as stronger curbs on deceptive online political advertising. In doing so, however, policymakers and advocates should keep in mind the various uses of AI in the political process and develop nuanced approaches that focus on the worst impacts without unduly limiting political speech.
Earlier this year, for instance, AI-generated robocalls imitated President Biden’s voice, targeting New Hampshire voters and discouraging them from voting in the primary. Around the same time, an AI-generated image falsely depicting former president Trump with convicted sex trafficker Jeffrey Epstein and a young girl began circulating on Twitter. Meanwhile abroad, deepfakes circulated last year in the Slovakian election, including fabricated audio of a leading candidate appearing to discuss rigging the vote. In January, the Chinese government apparently tried to deploy AI deepfakes to meddle in the Taiwanese election. And a wave of malicious AI-generated content is appearing in Britain ahead of its election, scheduled for July 4. One deepfake depicted a BBC newsreader, Sarah Campbell, falsely claiming that British Prime Minister Rishi Sunak promoted a scam investment platform. And as the Indian general election has gotten under way, deepfakes of popular deceased politicians appealing to voters as if they were still alive have become a popular campaign tactic. Sometimes, however, the use of AI in elections has been more benign. In Indonesia, for example, the winning presidential candidate’s campaign used a cuddly AI-generated cartoon avatar to soften his image.
This raised some eyebrows given his role in the country’s military dictatorship, but there was no clear deception involved. In Pakistan, the jailed opposition leader, Imran Khan, used an AI-generated video to address his supporters, blunting efforts by the military and his political rivals to silence him. In Belarus, the country’s embattled opposition even ran an AI-generated “candidate” for parliament. The candidate — actually a chatbot that describes itself as a 35-year-old from Minsk — is part of an advocacy campaign to help the opposition, many of whom have gone into exile, reach Belarusian voters. Policymakers need to weigh these competing interests carefully, responding to the use of AI in ways that counter the worst potential impacts of deceptive AI without unduly burdening legitimate political expression. Where possible, these efforts should be made in collaboration with social media companies and key participants in the electoral process, such as candidates and political parties. Election officials seeking to adopt new AI systems to support their own work should first weigh the benefits of those systems against the risks.
If and when they choose to go ahead, they should integrate simple, effective systems with the necessary human oversight, ensuring transparency and documentation. They should also establish robust training for their staff and contingency plans to address possible malfunctions in the systems being used. This should include periodic reviews and adjustments based on performance data and feedback, ensuring the safe and accountable use of AI tools. When crafting legislation to address deepfakes and AI-generated media in elections, policymakers should prioritize transparency for manipulated and harmful media, as seen in several state laws and pending bills before the U.S. Congress. This will help ensure voters are informed about the authenticity of the messages they receive and protect the electoral process against the most significant threats.
However, in some cases transparency alone will not suffice, as labels can be ignored or removed. Targeted bans may be necessary to address especially harmful content, such as content intended to confuse and deceive voters about when, where, and how to cast their ballots. As deepfake tools become more sophisticated and accessible, these risks will only grow. Policymakers must recognize the urgency of the situation and take proactive measures to address this unprecedented challenge while continuing to respect free expression and the desire of political actors of all stripes to use new technologies in legitimate ways.

Citizens and politicians around the world last year used AI for misinformation, memes, and more. 2024 marked a big year for democracy, with elections held in over 70 countries representing nearly half the world’s population. It was also dubbed the year of the “AI election,” as experts warned of the havoc that AI-generated disinformation could wreak.
In 2023, AI expert Oren Etzioni told Fortune, “I expect a tsunami of misinformation,” and journalist Maria Ressa said the world is facing a “tech-enabled Armageddon.” Leading tech companies even created the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, citing concerns about how the “intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes.” At the beginning of 2024, Rest of World set out to create an AI Elections Tracker with the goal of understanding the different ways that AI tools were being used in the year’s elections. Throughout the year, reporters from across the world gathered unique and noteworthy instances of the ways that AI — primarily generative AI — was being used around elections. By the end of the year, the tracker included 60 entries from 15 countries. The entries note the type of media created with AI, including text, image, audio, and video; the platform(s) the content was posted to and spread on; and the country in question.
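As a rough illustration of the structure such a tracker entry implies, here is a minimal sketch in Python; the class name, fields, and example values are illustrative assumptions, not Rest of World's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrackerEntry:
    """One hypothetical entry in an AI-elections tracker (illustrative only)."""
    country: str                 # election in which the content appeared
    media_type: str              # "text", "image", "audio", or "video"
    platforms: List[str] = field(default_factory=list)  # where it was posted and spread
    summary: str = ""            # short description of the use case

# Example entry, loosely modeled on the robocall incident described earlier
example = TrackerEntry(
    country="United States",
    media_type="audio",
    platforms=["robocall"],
    summary="AI-generated voice discouraging primary voting",
)
```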
Rather than a comprehensive database, our AI Elections Tracker compiled a handpicked data set, which offered a glimpse into the myriad ways politicians and citizens used AI to create political content in the lead-up to their elections. Most of the posts we collected are concentrated in the first half of the year, because that’s when unique use cases first emerged and when many major elections took place.

The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini are becoming the new vessels (and sometimes arbiters) of political information. Our research suggests their influence is already rippling through our democracy.
LLMs are being adopted at a pace that makes social media uptake look slow, while traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI tools that can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to synthesize voter opinions. These models may appear neutral—politically unbiased and merely summarizing facts from different sources found in their training data or on the internet. Yet they operate as black boxes, designed and trained in ways users can’t see.
Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence.
Controversial uses of artificial intelligence (AI) in elections have made headlines globally. Whether it’s fully AI-generated mayoral contenders, incarcerated politicians using AI to deliver speeches from prison, or deepfakes used to falsely incriminate candidates, it’s clear that the technology is here to stay. Yet these viral stories show only one side of the picture. Beyond the headlines, AI is also starting to be used in the quieter parts of elections, the day-to-day work of electoral management: information provision, data analysis, planning, administration, and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy, and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems. The International Institute for Democracy and Electoral Assistance (International IDEA) has seized this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized while its risks are avoided.
Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacities and to jointly envision how best to approach the use of AI in electoral processes. These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it. Nearly half of all workshop participants rated their understanding of AI as low, yet a third of the participating organizations indicated that they are already using AI in their election-related processes. Both AI skeptics and enthusiasts shared a cautious outlook during the workshops. Furthermore, EMBs have flagged an immense dual burden: developing internal capacity to embrace technological innovation while mitigating disruptions to electoral information integrity by bad-faith actors.
Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet these offers are often driven by commercial incentives and speedy deployment timelines, and not all products are designed with the specific legal, technical, and human-rights sensitivities of elections in mind. With something as sacred as elections, it has become ever more important that the products on offer give due consideration to election-specific requirements for cybersecurity, data protection, accuracy, and other human rights concerns. For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions. AI is also contributing to broader changes in the electoral environment that extend far beyond electoral administration. Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic images, audio, and video deepfakes.
While not all such examples are deployed with harmful intent, in many contexts these tools have been used to confuse voters, defame competing candidates, or manipulate public debate, resulting in public disillusionment and fatigue around elections.

2024 is a landmark election year, with over 60 countries—encompassing nearly half of the global population—heading to the polls. Technology has long been used in electoral processes, such as e-voting, and it is a valuable tool for making these processes efficient and secure. However, recent advancements in artificial intelligence, particularly generative AI such as ChatGPT (OpenAI) and Copilot (Microsoft), could have an unprecedented impact on the electoral process. These digital innovations offer opportunities to improve electoral efficiency and voter engagement, but they also raise concerns about potential misuse. AI can be used to harness big data to influence voter decision-making.
Its capacity for launching cyberattacks, producing deepfakes, and spreading disinformation could destabilize democratic processes, threaten the integrity of political discourse, and erode public trust. UN Secretary-General António Guterres highlighted AI’s dual nature in his address to the Security Council, noting that while AI can accelerate human development, it also poses significant risks if used maliciously. He stated, “The advent of generative AI could be a defining moment for disinformation and hate speech—undermining truth, facts, and safety, adding a new dimension to the manipulation of human behaviour and contributing to polarization and instability on a vast scale.” In this article, we will briefly explore the benefits and challenges that AI is bringing to the electoral process. According to UNESCO’s Guide for Electoral Practitioners, “Elections in Digital Times,” AI has the potential to improve the efficiency and accuracy of elections. It can also reach out to voters and engage with them more directly through personalised communication tailored to individual preferences and behaviour.
AI-powered chatbots can provide real-time information about polling locations, candidate platforms, and voting procedures, making the electoral process more accessible and transparent.

A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points. The LLMs’ persuasiveness comes not from being masters of psychological manipulation but from the sheer number of claims they generate in support of candidates’ policy positions. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S.
Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business, and the College of Arts and Sciences, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously: “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science. In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed LLM-based chatbots to argue for one of the two leading candidates in an upcoming election. They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions.
The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election, and the 2025 Polish presidential election.

Political parties and candidates around the world are increasingly looking at how artificial intelligence (AI) can help them win elections. While many are excited about the benefits, there are concerns about the potential negative impacts. This article explores the various ways AI is being used in elections, focusing on deceptive uses, and highlights the public’s reactions to them. AI can support political campaigns in several ways.