How Generative AI Is Redefining Election Campaigns
GenAI is rewriting the rules of electioneering, turning campaigns into hyper-targeted, multilingual persuasion machines that blur the line between outreach and manipulation.
The integration of Generative Artificial Intelligence (GenAI) into election campaigns has redefined the very architecture of political communication and persuasion. As AI becomes deeply intertwined with the campaign process, its role extends beyond strategy to shaping voter perceptions, introducing unprecedented precision, scale, and personalisation in how campaigns engage with voters and influence public opinion across digital platforms. The emergence of GenAI has brought far-reaching changes to campaign-to-voter communication in contemporary electoral politics. As detailed by Florian Foos (2024), GenAI offers significant opportunities to reduce costs in modern campaigns by assisting with the drafting of campaign communications such as emails and text messages.
A primary use case for this transformation is the capacity of multilingual AI systems to facilitate direct, dynamic exchanges with voters across linguistic and cultural boundaries. India's Bhashini initiative is a prime example: in December 2023, Prime Minister (PM) Narendra Modi used the tool during his address at the Kashi Tamil Sangamam in Varanasi to translate his speech into Tamil in real time. With the integration of AI-driven communication tools, campaigns can fundamentally alter conventional interaction paradigms, moving from broad mass messaging toward more personal, innovative, and highly targeted forms of digital outreach. The disruptive potential of AI in this domain is significantly amplified when campaigners can access individual-level personal contact data.
AI-powered messaging tools can generate and deliver personalised content at scale, raising the possibility of both positive engagement and concerning intrusions into voter privacy. Notable examples from recent electoral practice include the widespread use of AI-generated fundraising emails in United States campaigns, as well as the deployment of AI-generated videos of political candidates in India making highly tailored appeals to voters. These instances underscore the increasing prevalence and sophistication of dynamic, digital conversations between campaigns and their target electorate.
AI is eminently capable of political persuasion and could automate it at a mass scale. We are not prepared. In January 2024, the phone rang in homes all around New Hampshire. On the other end was Joe Biden’s voice, urging Democrats to “save your vote” by skipping the primary. It sounded authentic, but it wasn’t. The call was a fake, generated by artificial intelligence.
Today, the technology behind that hoax looks quaint. Tools like OpenAI’s Sora now make it possible to create convincing synthetic videos with astonishing ease. AI can be used to fabricate messages from politicians and celebrities—even entire news clips—in minutes. The fear that elections could be overwhelmed by realistic fake media has gone mainstream—and for good reason. But that’s only half the story. The deeper threat isn’t just that AI can imitate people; it’s that it can actively persuade them.
And new research published this week shows just how powerful that persuasion can be. In two large peer-reviewed studies, AI chatbots shifted voters’ views by a substantial margin, far more than traditional political advertising tends to do. In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift—from imitation to active persuasion—should worry us deeply.
AI Chatbots Are Shockingly Good at Political Persuasion
Chatbots can measurably sway voters’ choices, new research shows.
The findings raise urgent questions about AI’s role in future elections.
By Deni Ellis Béchard, edited by Claire Cameron
Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns. The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows.
Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections.
Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions. These models may appear neutral—politically unbiased and merely summarizing facts from different sources found in their training data or on the internet. At the same time, they operate as black boxes, designed and trained in ways users can’t see. Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence.
Generative AI in Electoral Campaigns: Mapping Global Patterns
This Summary for Policymakers provides a high-level précis of the Technical Paper, The Role of Generative AI Use in 2024 Elections Worldwide. GenAI is being deployed in many ways during elections, ranging from the creation of deepfake video and audio messages to sophisticated voter targeting. What are the implications of GenAI for election administration and voter participation around the world? This assessment delivers the first global, data-driven analysis of its kind, designed to inform policy recommendations that enhance election administration, foster trust in electoral processes, and boost voter turnout. Based on an analysis of an original data set of 215 incidents, covering all 50 countries holding competitive national elections in 2024, we find that:
The International Panel on the Information Environment (IPIE) is an independent and global science organization providing scientific knowledge about the health of the world's information environment. Based in Switzerland, the IPIE offers policymakers, industry, and civil society actionable scientific assessments about threats to the information environment, including AI bias, algorithmic manipulation, and disinformation. The IPIE is the only scientific body systematically organizing, evaluating, and elevating research with the broad aim of improving the global information environment. Hundreds of researchers worldwide contribute to the IPIE's reports.
November 18, 2025 / Tim Harper, Guest Post
This report was also authored by Dean Jackson and Zelly Martin. Generative AI (genAI) poses a number of risks to elections, such as amplifying disinformation, facilitating foreign interference, and automating voter suppression campaigns. All of these capabilities have been utilized in elections around the world in recent years, including in the U.S. Some instances include a genAI robocall purporting to be President Biden urging Americans not to vote during a primary, a genAI campaign aimed at manipulating pro-Ukraine Americans, and Russian attempts to interfere in the... election using genAI. But many threats that were postulated before the 2024 elections did not appear to manifest in large volumes, and ultimately the potential threats from genAI did not impact the outcomes of U.S. elections. Still, the risks from genAI are real and affect a variety of election stakeholders: election officials, candidates and campaigns, voters, government institutions, and technology companies. Among those stakeholders, genAI poses unique opportunities and challenges for political campaigns. Campaigns can be targets of deepfakes (defined as images, videos, or audio depicting someone doing or saying something that they did not do or say), genAI-developed phishing campaigns, and spoof websites. Campaigns also stand to benefit from AI tools, which can help create campaign-related content, translate materials, analyze data, and generally act as a force multiplier across many campaign operations.
Generative AI (GenAI) has emerged as a transformative force in elections playing out across the world.
In a series of reports, the Center for Media Engagement investigates GenAI’s role before, during, and after several key global elections in 2024. The reports examine the potential impacts of GenAI on key democratic processes in the U.S., Europe, India, Mexico, and South Africa. These insights are critical to groups working to sustain and advance democracies in the face of constant transformation of the digital environment and associated communication processes. Below we share the trends emerging around elections and AI in each of these regions.
The U.S.: GenAI, Disinformation, and Data Rights in U.S. Elections
Europe: Political Deepfakes and Misleading Chatbots – Understanding the Use of GenAI in Recent European Elections