When It Comes to Understanding AI's Impact on Elections, We're Still...
Daniel S. Schiff, Kaylyn Jackson Schiff, Natália Bueno, Melanie W. Sisson, Colin Kahl, Sun Chenghao, Xiao Qian

Greater transparency around AI-generated political advertising would transform researchers' ability to understand its potential effects on democracy and elections. This article was originally published on Brookings.
Ahead of the 2024 U.S. election, there was widespread fear that generative artificial intelligence (AI) presented an unprecedented threat to democracy. Just six weeks before the election, more than half of Americans said they were “extremely or very concerned” that AI would be used to spread misleading information. Intelligence officials warned that these technologies would be used by foreign influence campaigns to undermine trust in democracy, and that growing access to AI tools would lead to a deluge of political deepfakes. This premature, “sky is falling” narrative was based on very little evidence, something we warned about a year ago. But while it seems clear that the worst predictions about AI didn’t come to pass, it’s similarly hasty to claim that 2024 was the “AI election that wasn’t,” that “we were deepfaked by deepfakes,”...
In reality, too little data is available to draw concrete conclusions. We know this because, for the past several months, our research team has tried to build a comprehensive database tracking the use of AI in political communications. But despite our best efforts, we found this task nearly impossible, in part due to a lack of transparency from online platforms. Overall, we found just 71 examples. Other researchers and journalists have tried to track election-related AI content as well with similar outcomes. But it doesn’t need to be this way.
As lawmakers at the state and federal level continue to regulate AI, there are common-sense changes policymakers and platforms can make so that we aren’t flying blind trying to understand what impact AI has...

The run-up to the 2024 election was marked by predictions that artificial intelligence could trigger dramatic disruptions. The worst-case scenarios — such as AI-assisted large-scale disinformation campaigns and attacks on election infrastructure — did not come to pass. However, the rise of AI-generated deepfake videos, images, and audio misrepresenting political candidates and events is already influencing the information ecosystem. Over time, the misuse of these tools is eroding public trust in elections by making it harder to distinguish fact from fiction, intensifying polarization, and undermining confidence in democratic institutions. Understanding and addressing the threats that AI poses requires us to consider both its immediate effects on U.S.
elections and its broader, long-term implications. Incidents such as robocalls to primary voters in New Hampshire that featured an AI-generated impersonation of President Biden urging them not to vote captured widespread attention, as did misinformation campaigns orchestrated by chatbots like... Russian operatives created AI-generated deepfakes of Vice President Kamala Harris, including a widely circulated video that falsely portrayed her as making inflammatory remarks, which was shared by tech billionaire Elon Musk on X. Separately, a former Palm Beach County deputy sheriff, now operating from Russia, collaborated in producing and disseminating fabricated videos, including one falsely accusing vice-presidential nominee Minnesota Gov. Tim Walz of assault. Similar stories emerged around elections worldwide.
In India’s 2024 general elections, AI-generated deepfakes that showed celebrities criticizing Prime Minister Narendra Modi and endorsing opposition parties went viral on platforms such as WhatsApp and YouTube. During Brazil’s 2022 presidential election, deepfakes and bots were used to spread false political narratives on platforms including WhatsApp. While no direct, quantifiable impact on election outcomes has been identified, these incidents highlight the growing role of AI in shaping political discourse. The spread of deepfakes and automated disinformation can erode trust, reinforce political divisions, and influence voter perceptions. These dynamics, while difficult to measure, could have significant implications for democracy as AI-generated content becomes more sophisticated and pervasive. The long-term consequences of AI-driven disinformation go beyond eroding trust — they create a landscape where truth itself becomes contested.
As deepfakes and manipulated content grow more sophisticated, bad actors can exploit the confusion, dismissing real evidence as fake and muddying public discourse. This phenomenon, sometimes called the liar’s dividend, enables anyone — politicians, corporations, or other influential figures — to evade accountability by casting doubt on authentic evidence. Over time, this uncertainty weakens democratic institutions, fuels disengagement, and makes societies more vulnerable to manipulation from both domestic actors and foreign adversaries.

Experts argue that AI’s role in elections remains uncertain due to limited data, gaps in research, and a lack of transparency in political ad disclosures, making it difficult to draw definitive conclusions. Despite widespread fears and early warnings, AI did not have the catastrophic impact predicted on the 2024 U.S. election, but the absence of comprehensive data means the true extent of its influence remains unclear. Improving transparency in political advertising, including clear disclosures of AI use, could give researchers the tools they need to better understand AI’s potential effects on democracy and elections.

Emory experts weigh in on how chatbots, algorithmic targeting, deepfakes and a sea of misinformation — and the tools designed to counter them — might sway how we vote in November and beyond. In January 2024, phones rang across New Hampshire with what sounded like an ordinary political robocall. Or so it seemed. The voice on the other end of the line sounded just like President Joe Biden. He even used his signature catchphrase: “What a bunch of malarkey!” But strangely, he was telling these would-be primary voters to stay away from the polls, falsely warning them that voting in the primary would...
The robocalls didn’t necessarily impact the voting results; Biden still handily won the New Hampshire Democratic primary. Nevertheless, the stunt sent shockwaves through the worlds of politics, media and technology because the misleading message didn’t come from the president — it came from a machine. The call was what’s known as a deepfake, a recording generated by artificial intelligence (AI), made by a political consultant to sound exactly like Biden and, in this case, apparently suppress voter turnout. It was one of the most high-profile examples of how generative AI is being used in the realm of politics. These deepfakes are affecting both sides of the political aisle. In summer 2023, the early days of the Republican race for the presidency, would-be candidate and Florida Gov.
Ron DeSantis shared deepfakes of former President Donald Trump hugging Anthony Fauci, one of the leaders and lightning rods of the U.S.’s COVID-19 response. And, despite being a victim of deepfake tactics like this, Trump has not been afraid to turn around and use them himself. Famously, this included his recent Truth Social post of AI-manipulated photos that showed pop star Taylor Swift, decked out as Uncle Sam, endorsing him for president.

Why claims about the impact of generative AI on elections have been overblown

A project studying how advanced AI systems may harm, or help strengthen, democratic freedoms. Prominent voices worry that generative artificial intelligence (GenAI) will negatively impact elections worldwide and trigger a misinformation apocalypse.
A recurrent fear is that GenAI will make it easier to influence voters and facilitate the creation and dissemination of potent mis- and disinformation. We argue that despite the incredible capabilities of GenAI systems, their influence on election outcomes has been overestimated. Looking back at 2024, the predicted outsized effects of GenAI did not happen and were overshadowed by traditional sources of influence. We review current evidence on the impact of GenAI in the 2024 elections and identify several reasons why the impact of GenAI on elections has been overblown. These include the inherent challenges of mass persuasion, the complexity of media effects and people’s interaction with technology, the difficulty of reaching target audiences, and the limited effectiveness of AI-driven microtargeting in political campaigns. Additionally, we argue that the socioeconomic, cultural, and personal factors that shape voting behavior outweigh the influence of AI-generated content.
We further analyze the bifurcated discourse on GenAI’s role in elections, framing it as part of the ongoing “cycle of technology panics.” While acknowledging AI’s risks, such as amplifying social inequalities, we argue that... The paper calls for a recalibration of the narratives around AI and elections, proposing a nuanced approach that considers AI within broader sociopolitical contexts.

The increasing public availability of generative artificial intelligence (GenAI) systems, such as OpenAI’s ChatGPT, Google’s Gemini, and a slew of others, has led to a resurgence of concerns about the impact of AI and... Leading voices from politics, business, and the media twice listed “adverse outcomes of AI technologies” as having a potentially severe impact in the next two years (together with “mis- and disinformation”) in the World... The public is worried as well. A recent survey of eight countries, including Brazil, Japan, the U.K., and the U.S.
found that 84 percent of people were concerned about the use of AI to create fake content (Ejaz et al., 2024). Meanwhile, a large survey of AI researchers found that 86 percent were significantly or extremely concerned about AI and the spread of false information, and 79 percent about manipulation of large-scale public opinion trends... The main worry in all these contexts is that AI will make it easier to create and target potent mis- and disinformation and propaganda, and to manipulate voters more effectively. The integration of foundation models, particularly AI chatbots, into digital media, and their growing use for online searches, for interacting with information and news, and as personal assistants, is a further concern,... A recurrent theme is the impact of AI on national elections. Initial predictions warned that GenAI would propel the world toward a “tech-enabled Armageddon” (Scott, 2023), where “elections get screwed up” (Verma & Zakrzewski, 2024), and that “anybody who’s not worried [was] not paying attention”...
We critically examine these claims against the backdrop of the 2023-2024 global election cycle, during which nearly half of the world’s population had the opportunity to participate in elections, including in high-stakes contests in... and Brazil.

AI is eminently capable of political persuasion and could automate it at a mass scale. We are not prepared. In January 2024, the phone rang in homes all around New Hampshire. On the other end was Joe Biden’s voice, urging Democrats to “save your vote” by skipping the primary.
It sounded authentic, but it wasn’t. The call was a fake, generated by artificial intelligence. Today, the technology behind that hoax looks quaint. Tools like OpenAI’s Sora now make it possible to create convincing synthetic videos with astonishing ease. AI can be used to fabricate messages from politicians and celebrities—even entire news clips—in minutes. The fear that elections could be overwhelmed by realistic fake media has gone mainstream—and for good reason.