Can Artificial Intelligence (AI) Influence Elections?
2024 is a landmark election year, with over 60 countries—encompassing nearly half of the global population—heading to the polls. Technology has long been used in electoral processes, such as e-voting, and it is a valuable tool in making this process efficient and secure. However, recent advancements in artificial intelligence, particularly generative AI such as ChatGPT (OpenAI) and Copilot (Microsoft), could have an unprecedented impact on the electoral process. These digital innovations offer opportunities to improve electoral efficiency and voter engagement, but also raise concerns about potential misuse. AI can be used to harness big data to influence voter decision-making. Its capacity for launching cyberattacks, producing deepfakes, and spreading disinformation could destabilize democratic processes, threaten the integrity of political discourse, and erode public trust.
UN Secretary-General António Guterres highlighted AI’s dual nature in his address to the Security Council, noting that while AI can accelerate human development, it also poses significant risks if used maliciously. He stated, “The advent of generative AI could be a defining moment for disinformation and hate speech—undermining truth, facts, and safety, adding a new dimension to the manipulation of human behaviour and contributing to...”

In this article, we will briefly explore the benefits and challenges that AI is bringing to the electoral process. According to UNESCO’s guide for electoral practitioners, “Elections in Digital Times,” AI has the potential to improve the efficiency and accuracy of elections. AI can also be used to reach out to voters and engage with them more directly through personalised communication tailored to individual preferences and behaviour. AI-powered chatbots can provide real-time information about polling locations, candidate platforms, and voting procedures, making the electoral process more accessible and transparent.
A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points... The LLMs’ persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates’ policy positions. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers.
“But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously, “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science. In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed... They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S.
presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

There is great public concern about the potential use of generative artificial intelligence (AI) for political persuasion and the resulting impacts on elections and democracy [1-6]. We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes. In the context of the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election, we assigned participants randomly to have a conversation with an AI model that advocated... We observed significant treatment effects on candidate preference that are larger than typically observed from traditional video advertisements [7-9]. We also document large persuasion effects on Massachusetts residents’ support for a ballot measure legalizing psychedelics.
Examining the persuasion strategies [9] used by the models indicates that they persuade with relevant facts and evidence, rather than using sophisticated psychological persuasion techniques. Not all facts and evidence presented, however, were accurate; across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims. Together, these findings highlight the potential for AI to influence voters and the important role it might play in future elections.
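To make the design of these experiments concrete, the sketch below illustrates, under simplified and entirely hypothetical assumptions, how a treatment effect in a randomized persuasion study of this kind can be estimated: participants are randomly assigned to converse with a persuasive chatbot or a control condition, and the effect is the difference in average post-conversation candidate support between the two arms. The variable names, sample sizes, and effect sizes below are made up for illustration and are not taken from the published papers, whose pre-registered analyses are considerably more elaborate.

```python
# Illustrative sketch only: hypothetical data, not the published studies' analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # participants per arm (made-up sample size)

# Post-conversation support for a candidate on a 0-100 scale.
# Control arm: conversation without persuasive content.
control = rng.normal(loc=50.0, scale=20.0, size=n)
# Treatment arm: conversation with a chatbot advocating for the candidate,
# simulated here with a made-up 4-point average shift.
treated = rng.normal(loc=54.0, scale=20.0, size=n)

# Difference-in-means estimate of the average treatment effect,
# with a standard error for the two-sample comparison.
ate = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)

print(f"Estimated persuasion effect: {ate:.1f} points (SE {se:.1f})")
```

The actual studies measured changes in opinion, so their estimates also draw on participants’ pre-conversation responses, which this post-only sketch omits for brevity.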
Voters change their opinions after interacting with an AI chatbot – but, encouragingly, it seems that AIs rely on facts to influence people. AI chatbots may have the power to influence voters’ opinions. Does the persuasive power of AI chatbots spell the beginning of the end for democracy? In one of the largest surveys to date exploring how these tools can influence voter attitudes, AI chatbots were more persuasive than traditional political campaign tools including advertisements and pamphlets, and as persuasive as... But at least some researchers identify reasons for optimism in the way in which the AI tools shifted opinions.
We have already seen that AI chatbots like ChatGPT can be highly convincing, persuading conspiracy theorists that their beliefs are incorrect and winning more support for a viewpoint when pitted against human debaters. This persuasive power has naturally led to fears that AI could place its digital thumb on the scale in consequential elections, or that bad actors could marshal these chatbots to steer users towards their... The bad news is that these fears may not be totally baseless. In a study of thousands of voters taking part in the recent US and Polish presidential elections and the Canadian federal election, David Rand at the Massachusetts Institute of Technology and his colleagues found that AI chatbots were surprisingly...

Controversial uses of Artificial Intelligence (AI) in elections have made headlines globally. Whether it’s fully AI-generated mayoral contenders, incarcerated politicians using AI to deliver speeches from prison, or deepfakes used to falsely incriminate candidates, it’s clear that the technology is here to stay.
Yet, these viral stories only show one side of the picture. Beyond the headlines, AI is also starting to be used in the quieter parts of elections: the day-to-day work of electoral management, from information provision and data analysis to planning, administration and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems. The International Institute for Democracy and Electoral Assistance (International IDEA) has been seizing this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized, while avoiding... Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacities to jointly envision how to best approach... These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it.
Nearly half of all workshop participants rated their understanding of AI as low. However, a third of the participating organizations indicated that they are already using AI in their election-related processes. Nevertheless, both AI skeptics and enthusiasts shared a cautious outlook during the workshops. Furthermore, EMBs have been flagging an immense dual burden: developing internal capacity to embrace technological innovation while also mitigating disruptions to electoral information integrity by bad-faith actors. Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet, these offers can often be driven by commercial incentives and speedy deployment timelines, and not all products are designed with the specific legal, technical and human-rights sensitivities of elections in mind.
With something as sacred as elections, it has become ever more important that the products on offer give due consideration to election-related sensitivities around cybersecurity, data protection, accuracy and other human rights... For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions. AI is also contributing to broader changes in the electoral environment that extend far beyond electoral administration. Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic images, audio and video deepfakes. While not all of these examples are used with harmful intention, in many contexts they have been used to confuse voters, defame competing candidates or manipulate public debate, resulting in public disillusionment and fatigue around...

Co-hosts Archon Fung and Stephen Richer look back at the last five months of headlines as they celebrate the twentieth episode of Terms of Engagement.
Archon Fung and Stephen Richer are joined by Michelle Feldman, political director at Mobile Voting, a nonprofit, nonpartisan initiative working to make voting easier with expanded access to mobile voting. Archon Fung and Stephen Richer discuss whether fusion voting expands representation and strengthens smaller parties—or whether it muddies party lines and confuses voters.

Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement, from connection to action. Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful. Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical “how-to’s” of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we’ll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively.
GenAI is rewriting the rules of electioneering, turning campaigns into hyper-targeted, multilingual persuasion machines that blur the line between outreach and manipulation.

The integration of Generative Artificial Intelligence (AI) into election campaigns has redefined the very architecture of political communication and persuasion. As AI becomes deeply intertwined with the campaign process, its role extends beyond strategy to shaping voter perceptions. It is introducing unprecedented precision, scale, and personalisation in how campaigns engage with voters and influence public opinion across digital platforms. The emergence of Gen AI has heralded unprecedented changes in campaign-to-voter communication within contemporary electoral politics. As detailed by Florian Foos (2024), Gen AI offers significant opportunities to reduce costs in modern campaigns by assisting with the drafting of campaign communications, such as emails and text messages.
A primary use case for this transformation is the capacity of multilingual AI systems to facilitate direct, dynamic exchanges with voters across linguistic and cultural boundaries. India’s Bhashini initiative is a prime example: on 18 December 2023, Prime Minister (PM) Narendra Modi used the tool during his address at the Kashi Tamil Sangamam in Varanasi to translate his speech into Tamil live. With the integration of AI-driven communication tools, campaigns can now fundamentally alter conventional interaction paradigms, moving from broad mass messaging toward more personal, innovative, and highly targeted forms of digital outreach. The disruptive potential of AI in this domain is significantly amplified when campaigners can access individual-level personal contact data.
AI-powered messaging tools can generate and deliver personalised content at scale, raising the possibility of both positive engagement and concerning intrusions into voter privacy. Notable examples from recent electoral practice include the widespread use of AI-generated fundraising emails in United States campaigns, as well as the deployment of AI-generated videos of political candidates in India making highly tailored... These instances underscore the increasing prevalence and sophistication of dynamic, digital conversations between campaigns and their target electorate.

The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information.
Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions. These models may appear neutral—politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet.
At the same time, they operate as black boxes, designed and trained in ways users can’t see. Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence.

Emory experts weigh in on how chatbots, algorithmic targeting, deepfakes and a sea of misinformation — and the tools designed to counter them — might sway how we vote in November and beyond.