Gauging the AI Threat to Free and Fair Elections
The run-up to the 2024 election was marked by predictions that artificial intelligence could trigger dramatic disruptions. The worst-case scenarios — such as AI-assisted large-scale disinformation campaigns and attacks on election infrastructure — did not come to pass. However, the rise of AI-generated deepfake videos, images, and audio misrepresenting political candidates and events is already influencing the information ecosystem. Over time, the misuse of these tools is eroding public trust in elections by making it harder to distinguish fact from fiction, intensifying polarization, and undermining confidence in democratic institutions. Understanding and addressing the threats that AI poses requires us to consider both its immediate effects on U.S. elections and its broader, long-term implications.
Incidents such as robocalls to primary voters in New Hampshire that featured an AI-generated impersonation of President Biden urging them not to vote captured widespread attention, as did misinformation campaigns orchestrated by chatbots. Russian operatives created AI-generated deepfakes of Vice President Kamala Harris, including a widely circulated video that falsely portrayed her as making inflammatory remarks, which was shared by tech billionaire Elon Musk on X. Separately, a former Palm Beach County deputy sheriff, now operating from Russia, collaborated in producing and disseminating fabricated videos, including one falsely accusing vice-presidential nominee Minnesota Gov. Tim Walz of assault. Similar stories emerged around elections worldwide. In India’s 2024 general elections, AI-generated deepfakes that showed celebrities criticizing Prime Minister Narendra Modi and endorsing opposition parties went viral on platforms such as WhatsApp and YouTube.
During Brazil’s 2022 presidential election, deepfakes and bots were used to spread false political narratives on platforms including WhatsApp. While no direct, quantifiable impact on election outcomes has been identified, these incidents highlight the growing role of AI in shaping political discourse. The spread of deepfakes and automated disinformation can erode trust, reinforce political divisions, and influence voter perceptions. These dynamics, while difficult to measure, could have significant implications for democracy as AI-generated content becomes more sophisticated and pervasive. The long-term consequences of AI-driven disinformation go beyond eroding trust — they create a landscape where truth itself becomes contested. As deepfakes and manipulated content grow more sophisticated, bad actors can exploit the confusion, dismissing real evidence as fake and muddying public discourse.
This phenomenon, sometimes called the liar’s dividend, enables anyone — politicians, corporations, or other influential figures — to evade accountability by casting doubt on authentic evidence. Over time, this uncertainty weakens democratic institutions, fuels disengagement, and makes societies more vulnerable to manipulation, both from domestic actors and foreign adversaries.

[Infographic: Results from tabletop exercises in five states during the 2024 election]

It has become a truism that 2024 is the United States’ first “AI election.” Observers fear that AI will pose serious and novel threats to election administration. These fears are not universally shared, but few would entirely write off the possibility of malicious actors using AI to create chaos after a close election.
The Cybersecurity and Infrastructure Security Agency suggests several ways that bad actors could use AI to threaten election-related processes, facilities, personnel, or vendors. Fabulists could use AI to generate photorealistic fake images of election officials mishandling ballots, or they could create compelling video evidence of violence at polling places to deter voters from casting ballots. Bad actors might, for example, synthesize an official’s voice in an advanced spear-phishing attempt. Large language models could generate mountains of burdensome public records requests or fraudulent complaints of irregularities, each demanding precious time from officials. These are only a few of the worries keeping observers up at night. Election workers already face high, if not unprecedented, levels of stress and scrutiny.
They are experiencing a steep increase in false allegations about their work and integrity, harassment and threats to their well-being, and interference from election saboteurs. On top of that, they are training the next cohort of officials after an increase in turnover over the past two decades. Few in the profession have the extra bandwidth to think through the implications of AI for their work.

November 18, 2025 / Isabel Linzer, Tim Harper

2024 was billed as “the year of AI elections.” And while widespread access to the technology changed how governments, political campaigns, technology companies, and civil society did their work, generative AI did not cause the widespread disruption many predicted. That success, however, should not be mistaken for stability.
Over the past year, the political and policy environment surrounding AI has shifted dramatically: norms that once constrained misuse have eroded, regulatory efforts across states have evolved, and voluntary commitments by technology companies have weakened. Taken together, these developments point to a more consequential environment for AI deployment in the United States’ 2026 midterms. AI will likely be more prevalent and impactful in the 2026 elections than it was last year, and the accompanying risks are made greater if the relatively uneventful 2024 cycle leads key actors to complacency. This blog, part of CDT’s Countdown to the Midterms series, examines key trends shaping that shift: the changing political incentives around the use of AI, the weakening of normative guardrails, and the growing regulatory efforts across states. The US 2024 elections saw a handful of high-profile negative incidents, such as a Russian-backed deepfake in which vice presidential candidate Tim Walz was baselessly accused of assault. But campaigns also used generative AI in useful and often innocuous ways.
The most common uses, according to research conducted by CDT, were to analyze polling data and to create text-based copy for fundraising, persuasion, and canvassing. Elsewhere around the world, civil society groups used the technology to speed up fact-checking, and campaigns tested ethical boundaries, such as by running an AI avatar for UK Parliament or resurrecting deceased politicians. A key finding from CDT’s research on how US political campaigns used AI in 2024 was that norms were the biggest limitation on how campaigns used the technology. Specifically, campaigns and consultants worried that using generative AI would make them or their candidates appear less trustworthy. We should not expect those norms — which are unsurprising as society adjusts to new technology — to hold. Though it is not a national election year, taboos around AI use are indeed fading.
More people use the tools than ever before, and politicians are increasingly using AI for their political messaging. The White House, for instance, posted a vulgar video depicting Senate Minority Leader Schumer and House Minority Leader Jeffries that was obviously manipulated. It was emblematic of how politicians are using generative AI to bolster political messaging and assertions about their opponents’ motivations, rather than to deceive viewers about what is real. Representative Mike Collins (R-GA) recently posted a video of this kind depicting Senator Jon Ossoff (D-GA), and Rep. Josh Gottheimer (D-NJ) made an ad depicting himself in a boxing ring fighting President Trump.

2024 is a landmark election year, with over 60 countries—encompassing nearly half of the global population—heading to the polls.
Technology has long been used in electoral processes, such as e-voting, and it is a valuable tool in making this process efficient and secure. However, recent advancements in artificial intelligence, particularly generative AI such as ChatGPT (OpenAI) and Copilot (Microsoft), could have an unprecedented impact on the electoral process. These digital innovations offer opportunities to improve electoral efficiency and voter engagement, but also raise concerns about potential misuse. AI can be used to harness big data to influence voter decision-making. Its capacity for launching cyberattacks, producing deepfakes, and spreading disinformation could destabilize democratic processes, threaten the integrity of political discourse, and erode public trust. UN Secretary-General António Guterres highlighted AI’s dual nature in his address to the Security Council, noting that while AI can accelerate human development, it also poses significant risks if used maliciously.
He stated, “The advent of generative AI could be a defining moment for disinformation and hate speech—undermining truth, facts, and safety, adding a new dimension to the manipulation of human behaviour and contributing to…” In this article, we will briefly explore the benefits and challenges that AI is bringing to the electoral process. According to UNESCO’s Guide for Electoral Practitioners, “Elections in Digital Times,” AI has the potential to improve the efficiency and accuracy of elections. It can reach out to voters and engage with them more directly through personalised communication tailored to individual preferences and behaviour. AI-powered chatbots can provide real-time information about polling locations, candidate platforms, and voting procedures, making the electoral process more accessible and transparent. Other organizations have also published reports on the risks that advancements in AI pose to free and fair elections, including some proposed mitigations.
One report gives a high-level overview of the impacts AI could have on US democracy (Norman Eisen, Nicol Turner Lee, Colby Galliher, and Jonathan Katz). Another describes the risk of AI-backed voter suppression and details some potential solutions. A third offers specific ways in which election officials can prepare for the impacts of AI.
Artificial intelligence (AI) is already having an impact on upcoming U.S. elections and other political races around the globe. Much of the public dialogue focuses on AI’s ability to generate and distribute false information, and government officials are responding by proposing rules and regulations aimed at limiting the technology’s potentially negative effects. However, questions remain regarding the constitutionality of these laws, their effectiveness at limiting the impact of election disinformation, and the opportunities the use of AI presents, such as bolstering cybersecurity and improving the efficiency of election administration. While Americans are largely in favor of the government taking action around AI, there is no guarantee that restrictions will curb potential threats.
This paper explores AI impacts on the election information environment, cybersecurity, and election administration to define and assess risks and opportunities. It also evaluates the government’s AI-oriented policy responses to date and assesses the effectiveness of primarily focusing on regulating the use of AI in campaign communications through prohibitions or disclosures. It concludes by offering alternative approaches to increased government-imposed limits, which could empower local election officials to strengthen cyber defenses and build trust with the public as a credible source of election information.