Candidate AI: The Impact of Artificial Intelligence on Elections

Bonisiwe Shabane

Emory experts weigh in on how chatbots, algorithmic targeting, deepfakes and a sea of misinformation — and the tools designed to counter them — might sway how we vote in November and beyond.

In the run-up to the New Hampshire Democratic primary, would-be voters picked up the phone to hear from President Joe Biden. Or so it seemed. The voice on the other end of the line sounded just like him, right down to his signature catchphrase: “What a bunch of malarkey!” But strangely, he was telling these would-be voters to stay away from the polls, falsely warning them that voting in the primary would... The robocalls didn’t necessarily affect the voting results; Biden still handily won the New Hampshire Democratic primary. Nevertheless, the stunt sent shockwaves through the worlds of politics, media and technology, because the misleading message didn’t come from the president — it came from a machine.

The call was what’s known as a deepfake, a recording generated by artificial intelligence (AI), made by a political consultant to sound exactly like Biden and, in this case, apparently suppress voter turnout. It was one of the most high-profile examples of how generative AI is being used in the realm of politics. These deepfakes are affecting both sides of the political aisle. In summer 2023, the early days of the Republican race for the presidency, would-be candidate and Florida Gov. Ron DeSantis shared deepfakes of former President Donald Trump hugging Anthony Fauci, one of the leaders and lightning rods of the U.S.’s COVID-19 response. And, despite being a victim of deepfake tactics like this, Trump has not been afraid to turn around and use them himself.

Famously, this included his recent Truth Social post of AI-manipulated photos that showed pop star Taylor Swift, decked out as Uncle Sam, endorsing him for president.

Artificial intelligence (AI) is already having an impact on upcoming U.S. elections and other political races around the globe. Much of the public dialogue focuses on AI’s ability to generate and distribute false information, and government officials are responding by proposing rules and regulations aimed at limiting the technology’s potentially negative effects. However, questions remain regarding the constitutionality of these laws, their effectiveness at limiting the impact of election disinformation, and the opportunities the use of AI presents, such as bolstering cybersecurity and improving the efficiency... While Americans are largely in favor of the government taking action around AI, there is no guarantee that restrictions will curb potential threats. This paper explores AI impacts on the election information environment, cybersecurity, and election administration to define and assess risks and opportunities.

It also evaluates the government’s AI-oriented policy responses to date and assesses the effectiveness of primarily focusing on regulating the use of AI in campaign communications through prohibitions or disclosures. It concludes by offering alternative approaches to increased government-imposed limits, which could empower local election officials to focus on strengthening cyber defenses, build trust with the public as a credible source of election information,...

2024 is a landmark election year, with over 60 countries—encompassing nearly half of the global population—heading to the polls. Technology has long been used in electoral processes, such as e-voting, and it is a valuable tool in making this process efficient and secure. However, recent advancements in artificial intelligence, particularly generative AI such as ChatGPT (OpenAI) and Copilot (Microsoft), could have an unprecedented impact on the electoral process. These digital innovations offer opportunities to improve electoral efficiency and voter engagement, but also raise concerns about potential misuse.

AI can be used to harness big data to influence voter decision-making. Its capacity for launching cyberattacks, producing deepfakes, and spreading disinformation could destabilize democratic processes, threaten the integrity of political discourse, and erode public trust. UN Secretary-General António Guterres highlighted AI’s dual nature in his address to the Security Council, noting that while AI can accelerate human development, it also poses significant risks if used maliciously. He stated, “The advent of generative AI could be a defining moment for disinformation and hate speech—undermining truth, facts, and safety, adding a new dimension to the manipulation of human behaviour and contributing to... In this article, we will briefly explore the benefits and challenges that AI is bringing to the electoral process. According to UNESCO’s Guide for Electoral Practitioners: “Elections in Digital Times,” AI has the potential to improve the efficiency and accuracy of elections.

AI can also reach out to voters and engage with them more directly through personalised communication tailored to individual preferences and behaviour. AI-powered chatbots can provide real-time information about polling locations, candidate platforms, and voting procedures, making the electoral process more accessible and transparent; a minimal sketch of such a voter-information assistant follows below.
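To make the voter-information use case above concrete, here is a minimal sketch of a chatbot that answers basic voting-logistics questions. It assumes the OpenAI Python client and its chat-completions API; the model name, the system prompt, the `answer_voter_question` helper and the embedded snippet of “official” election data are illustrative assumptions, not a description of any deployed system.

```python
# Minimal sketch of a voter-information chatbot (illustrative only).
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical snippet of official election data the bot is allowed to cite.
OFFICIAL_INFO = """
Polling hours: 7:00 a.m. to 8:00 p.m. on election day.
Absentee ballots must be received by the close of polls.
"""

SYSTEM_PROMPT = (
    "You are a nonpartisan voter-information assistant. "
    "Answer only questions about voting logistics, and only using the "
    "official information provided. If the answer is not in that "
    "information, say so and refer the voter to their local election office."
)

def answer_voter_question(question: str) -> str:
    """Return an answer grounded in the supplied official information."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": f"Official information:\n{OFFICIAL_INFO}\n\nQuestion: {question}",
            },
        ],
        temperature=0,  # keep answers as literal as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_voter_question("What time do polls close?"))
```

Constraining the bot to an explicit snippet of official information, rather than letting the model answer from its general knowledge, is one simple way to reduce the risk of it inventing polling details.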

Chatbots are also proving shockingly good at political persuasion: new research shows they can measurably sway voters’ choices, raising urgent questions about AI’s role in future elections. Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns.

There is great public concern about the potential use of generative artificial intelligence (AI) for political persuasion and the resulting impacts on elections and democracy [1-6]. We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes. In the context of the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election, we assigned participants randomly to have a conversation with an AI model that advocated...

We observed significant treatment effects on candidate preference that are larger than typically observed from traditional video advertisements [7-9]. We also document large persuasion effects on Massachusetts residents’ support for a ballot measure legalizing psychedelics. Examining the persuasion strategies [9] used by the models indicates that they persuade with relevant facts and evidence, rather than using sophisticated psychological persuasion techniques. Not all facts and evidence presented, however, were accurate; across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims. Together, these findings highlight the potential for AI to influence voters and the important role it might play in future elections.
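The effect sizes reported above come from a standard randomized design: participants are randomly assigned either to converse with the advocating chatbot or to a control condition, and the treatment effect is the difference in average post-conversation candidate support between the two groups. The sketch below illustrates that difference-in-means calculation on synthetic data; it is not the papers’ actual analysis code, and every number in it is invented purely for illustration.

```python
# Illustrative difference-in-means estimate for a randomized persuasion experiment.
# Synthetic data only: the published studies use pre-registered, more elaborate
# models, so treat this as a sketch of the core logic rather than a replication.
import numpy as np

rng = np.random.default_rng(0)

# Post-conversation support for the advocated candidate on a 0-100 scale.
control = rng.normal(loc=50, scale=20, size=1_000)  # no AI conversation
treated = rng.normal(loc=54, scale=20, size=1_000)  # conversed with the advocating chatbot

# Average treatment effect: mean difference between treatment and control groups.
ate = treated.mean() - control.mean()

# Simple standard error for a difference of two independent means.
se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)

print(f"Estimated effect: {ate:.2f} points (SE {se:.2f})")
```

Real analyses typically add covariate adjustment and pre-specified model choices; this only shows the core comparison that the reported effects rest on.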

A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points...

The LLMs’ persuasiveness comes not from masterful psychological manipulation but from the sheer number of claims they generate in support of candidates’ policy positions. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously: “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science.

In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed... They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

Controversial uses of Artificial Intelligence (AI) in elections have made headlines globally. Whether it’s fully AI-generated mayoral contenders, incarcerated politicians using AI to hold speeches from prison, or deepfakes used to falsely incriminate candidates, it’s clear that the technology is here to stay.

Yet, these viral stories only show one side of the picture. Beyond the headlines, AI is also starting to be used in the quieter parts of elections, the day-to-day work of electoral management - from information provision and data analysis to planning, administration and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems. The International Institute for Democracy and Electoral Assistance (IDEA) has been seizing this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized, while avoiding... Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacities and to jointly envision how to best approach... These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it.

Nearly half of all workshop participants rated their understanding of AI as low. However, a third of the participating organizations indicated that they are already using AI in their election-related processes. Nevertheless, both AI skeptics and enthusiasts shared a cautious outlook during the workshops. Furthermore, EMBs have been flagging an immense dual burden: developing internal capacity to embrace technological innovation while also mitigating disruptions to electoral information integrity by bad-faith actors. Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet these offers are often driven by commercial incentives and speedy deployment timelines, and not all products are designed with the specific legal, technical and human-rights sensitivities of elections in mind.

With something as sacred as elections, it has become ever more important that the products on offer give due consideration to election-related sensitivities around cybersecurity, data protection, accuracy and other human rights... For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions. AI is also contributing to broader changes in the electoral environment that extend far beyond electoral administration. Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic images, audio and video deepfakes. While not all of these uses have harmful intent, in many contexts they have been used to confuse voters, defame competing candidates or manipulate public debate, resulting in public disillusionment and fatigue around...
