Artificial Intelligence (AI) in Elections and Campaigns
AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections.

By Deni Ellis Béchard, edited by Claire Cameron
Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns. Gen AI is rewriting the rules of electioneering, turning campaigns into hyper-targeted, multilingual persuasion machines that blur the line between outreach and manipulation. The integration of generative artificial intelligence (Gen AI) into election campaigns has redefined the very architecture of political communication and persuasion. As AI becomes deeply intertwined with the campaign process, its role extends beyond strategy to shaping voter perceptions, introducing unprecedented precision, scale and personalisation in how campaigns engage with voters and influence public opinion across digital platforms. The emergence of Gen AI has heralded unprecedented changes in campaign-to-voter communication within contemporary electoral politics.
As detailed by Florian Foos (2024), Gen AI offers significant opportunities to reduce costs in modern campaigns by assisting with the drafting of campaign communications, such as emails and text messages. A primary use case for this transformation is the capacity of multilingual AI systems to facilitate direct, dynamic exchanges with voters across linguistic and cultural boundaries. India’s Bhashini initiative is a prime example: on 18 December 2023, Prime Minister (PM) Narendra Modi used the tool during his address at the Kashi Tamil Sangamam in Varanasi to translate his speech into Tamil live. With the integration of AI-driven communication tools, campaigns can now fundamentally alter conventional interaction paradigms, moving from broad mass messaging toward more personal, innovative and highly targeted forms of digital outreach.
The disruptive potential of AI in this domain is significantly amplified when campaigners can access individual-level personal contact data. AI-powered messaging tools can generate and deliver personalised content at scale, raising the possibility of both positive engagement and concerning intrusions into voter privacy. Notable examples from recent electoral practice include the widespread use of AI-generated fundraising emails in United States campaigns, as well as the deployment of AI-generated videos of political candidates in India making highly tailored... These instances underscore the increasing prevalence and sophistication of dynamic, digital conversations between campaigns and their target electorate.

Last week’s leak of the U.S. Department of Education’s proposed “Compact for Academic Excellence in Higher Education” drew intense reactions across academia. Critics call it government overreach threatening free expression, while supporters see a chance for reform and renewed trust between universities and policymakers. Danielle Allen, James Bryant Conant University Professor at Harvard University and director of the Democratic Knowledge Project and the Allen Lab for Democracy Renovation, weighs in. Amid rising illiberalism, Allen urges a new agenda to renew democracy by reorienting institutions, policymaking and civil society around the intentional sharing of power. Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement, from connection to action. Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful.
Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical “how-to’s” of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we’ll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively.

A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points... The LLMs’ persuasiveness comes not from mastery of psychological manipulation, but from the sheer number of claims they marshal in support of candidates’ policy positions.
“LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously, “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science. In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed...
They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other, and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

Controversial uses of artificial intelligence (AI) in elections have made headlines globally. Whether it’s fully AI-generated mayoral contenders, incarcerated politicians using AI to deliver speeches from prison, or deepfakes used to falsely incriminate candidates, it’s clear that the technology is here to stay. Yet these viral stories show only one side of the picture.
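The persuasion effect the Cornell studies report is, at bottom, a pre/post comparison between randomized treatment and control groups. As a rough illustration of how such a percentage-point shift is computed, here is a minimal difference-in-differences sketch. The function names and the favorability numbers are made up for illustration; they are not the studies’ data or code.

```python
def pct_support(ratings, threshold=50):
    """Share (in %) of respondents rating a candidate above `threshold` on a 0-100 scale."""
    return 100.0 * sum(r > threshold for r in ratings) / len(ratings)

def persuasion_effect(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences in percentage points:
    (treatment group's shift in support) minus (control group's shift)."""
    return (pct_support(treat_post) - pct_support(treat_pre)) - \
           (pct_support(ctrl_post) - pct_support(ctrl_pre))

# Illustrative, made-up 0-100 favorability ratings for ten "opposition" voters.
treat_pre  = [20, 30, 40, 35, 45, 25, 55, 30, 42, 38]
treat_post = [35, 45, 55, 52, 49, 30, 50, 48, 44, 41]  # after chatting with the bot
ctrl_pre   = [22, 31, 41, 36, 44, 26, 54, 29, 43, 39]
ctrl_post  = [24, 30, 42, 37, 45, 25, 55, 30, 42, 40]  # after a neutral conversation

print(persuasion_effect(treat_pre, treat_post, ctrl_pre, ctrl_post))  # 10.0
```

Subtracting the control group’s shift is what separates persuasion by the chatbot from drift that would have happened anyway, which is why the studies randomize assignment rather than simply surveying chatbot users.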
Beyond the headlines, AI is also starting to be used in the quieter parts of elections, the day-to-day work of electoral management - from information provision and data analysis to planning, administration and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems. The International Institute for Democracy and Electoral Assistance (International IDEA) has been seizing this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized, while avoiding... Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacities and to jointly envision how to best approach... These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it. Nearly half of all workshop participants rated their understanding of AI as low.
However, a third of the participating organizations indicated that they are already using AI in their election-related processes. Nevertheless, both AI skeptics and enthusiasts shared a cautious outlook during the workshops. Furthermore, EMBs have flagged an immense dual burden: developing internal capacity to embrace technological innovation while mitigating disruptions to electoral information integrity by bad-faith actors. Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet these offers are often driven by commercial incentives and speedy deployment timelines, and not all products are designed with the specific legal, technical and human-rights sensitivities of elections in mind. With something as sacred as elections, it has become ever more important that the products on offer give due consideration to election-related sensitivities around cybersecurity, data protection, accuracy and other human rights...
For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions. AI is also contributing to broader changes in the electoral environment that extend far beyond electoral administration. Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic image, audio and video deepfakes. While not every example is deployed with harmful intention, in many contexts these tools have been used to confuse voters, defame competing candidates or manipulate public debate, resulting in public disillusionment and fatigue around...

Emory experts weigh in on how chatbots, algorithmic targeting, deepfakes and a sea of misinformation — and the tools designed to counter them — might sway how we vote in November and beyond. Or so it seemed.
The voice on the other end of the line sounded just like President Joe Biden. He even used his signature catchphrase: “What a bunch of malarkey!” But strangely, he was telling these would-be voters to stay away from the polls, falsely warning them that voting in the primary would... The robocalls didn’t necessarily impact the voting results; Biden still handily won the New Hampshire Democratic primary. Nevertheless, the stunt sent shockwaves through the worlds of politics, media and technology because the misleading message didn’t come from the president — it came from a machine. The call was what’s known as a deepfake, a recording generated by artificial intelligence (AI), made by a political consultant to sound exactly like Biden and, in this case, apparently suppress voter turnout. It was one of the most high-profile examples of how generative AI is being used in the realm of politics.
These deepfakes are affecting both sides of the political aisle. In summer 2023, the early days of the Republican race for the presidency, would-be candidate and Florida Gov. Ron DeSantis shared deepfakes of former President Donald Trump hugging Anthony Fauci, one of the leaders and lightning rods of the U.S.’s COVID-19 response. And, despite being a victim of deepfake tactics like this, Trump has not been afraid to turn around and use them himself. Famously, this included his recent Truth Social post of AI-manipulated photos that showed pop star Taylor Swift, decked out as Uncle Sam, endorsing him for president. Two years ago, Americans anxious about the forthcoming 2024 presidential election were considering the malevolent force of an election influencer: artificial intelligence.
Over the past several years, we have seen plenty of warning signs from elections worldwide demonstrating how AI can be used to propagate misinformation and alter the political landscape, whether by trolls on social... AI is poised to play a more volatile role than ever before in America’s next federal election in 2026. We can already see how different groups of political actors are approaching AI. Professional campaigners are using AI to accelerate the traditional tactics of electioneering; organizers are using it to reinvent how movements are built; and citizens are using it both to express themselves and amplify their... Because there are so few rules, and so little prospect of regulatory action, around AI’s role in politics, there is no oversight of these activities, and no safeguards against the dramatic potential impacts for... Campaigners—messengers, ad buyers, fundraisers, and strategists—are focused on efficiency and optimization.
To them, AI is a way to augment or even replace expensive humans who traditionally perform tasks like personalizing emails, texting donation solicitations, and deciding what platforms and audiences to target. This is an incremental evolution of the computerization of campaigning that has been underway for decades. For example, the progressive campaign infrastructure group Tech for Campaigns claims it used AI in the 2024 cycle to reduce the time spent drafting fundraising solicitations by one-third. If AI is working well here, you won’t notice the difference between an annoying campaign solicitation written by a human staffer and an annoying one written by AI. But AI is scaling these capabilities, which is likely to make them even more ubiquitous. This will make the biggest difference for challengers to incumbents in safe seats, who see AI as both a tactically useful tool and an attention-grabbing way to get their race into the headlines.
Jason Palmer, the little-known Democratic primary challenger to Joe Biden, won the American Samoa primary while extensively leveraging AI avatars for campaigning. The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude and Gemini, among others, are becoming the new vessels (and sometimes arbiters) of political information. Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow.
At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions. These models may appear neutral—politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet. At the same time, they operate as black boxes, designed and trained in ways users can’t see. Researchers are actively trying to unravel the question of whose opinions LLMs reflect.
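To make the poll-simulation idea above concrete, here is a minimal sketch. It is not any firm’s actual method: `simulate_poll` and `stub_model` are hypothetical names, and the deterministic stub stands in for what would, in practice, be an LLM API call prompted with each synthetic persona.

```python
from collections import Counter

def simulate_poll(personas, ask_model):
    """Ask a model for each synthetic persona's vote and tally the shares.

    `ask_model(persona)` returns a candidate name; in real use it would wrap
    an LLM call whose prompt describes the persona (age, party, region, ...).
    """
    votes = Counter(ask_model(p) for p in personas)
    total = sum(votes.values())
    return {cand: round(100.0 * n / total, 1) for cand, n in votes.items()}

# Stand-in "model": a deterministic stub keyed on party ID, purely for illustration.
def stub_model(persona):
    return "Candidate A" if persona["party"] == "D" else "Candidate B"

personas = [{"party": "D"}] * 6 + [{"party": "R"}] * 4
print(simulate_poll(personas, stub_model))  # {'Candidate A': 60.0, 'Candidate B': 40.0}
```

Swapping the stub for a real LLM call is exactly where the validity questions begin: whose opinions the model reflects determines whose “poll” you get, which is why the black-box nature of these systems matters.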