AI in Political Campaigns: How It's Being Used and the Ethical Concerns

Bonisiwe Shabane

The use of artificial intelligence in political campaigns and messaging is ramping up. Already in the 2024 presidential race, AI is being used to create fake robocalls and news stories and to generate campaign speeches and fundraising emails. The use of AI in political messaging has raised alarms among experts, as there are currently no federal rules governing the use of AI-generated content in political material. Peter Loge is the director of the GW School of Media and Public Affairs. Loge has nearly 30 years of experience in politics and communications, including a presidential appointment at the Food and Drug Administration and senior positions for Sen. Edward Kennedy and three members of the U.S.

House of Representatives. He currently leads the Project on Ethics in Political Communication at the GW School of Media and Public Affairs and continues to advise advocates and organizations. Loge is an expert in communications and political strategy. Loge says AI is being used in a number of ways in political campaigns right now, and the use of this emerging technology can ultimately undermine public trust. “Campaigns are using artificial intelligence to predict where voters are, what they care about and how to reach them, but they’re also writing fundraising emails, generating first drafts of scripts, first drafts of speeches...” “There are a lot of ethical concerns with AI in campaigns.

The basic rule of thumb is, there aren’t AI ethics that are different from everybody else’s ethics. You have a set of ethics. In a campaign, you should aim to persuade and inform, not deceive and divide. That’s true with AI, with mail, with television, with speeches,” Loge explains. “A lot of the questions we’re asking about AI are the same questions we’ve asked about rhetoric and persuasion for thousands of years.” The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows.

Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections.

Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions. These models may appear neutral—politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet. At the same time, they operate as black boxes, designed and trained in ways users can’t see. Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence.

In 2024, a viral deepfake sparked panic, highlighting AI’s role in misinformation and pushing campaigns to rethink voter outreach and public opinion. While AI in political campaigns can help candidates connect with voters more effectively through data analysis and targeted messaging, it also brings new ethical concerns. As governments and stakeholders grapple with these challenges, AI in politics is shaping the future of elections and governance worldwide. AI in political campaigns has evolved from a niche tool to a central strategy. In 2024, elections in 64 countries, representing nearly half the global population, saw AI play a major role. As Danielle Allen from Harvard noted, “Public anxiety about AI’s impact on elections is high.”

AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections.

By Deni Ellis Béchard, edited by Claire Cameron

[Photo caption: Stickers sit on a table during in-person absentee voting on November 1, 2024, in Little Chute, Wisconsin.]

Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns. In the age of rapid digital transformation, political campaigns are no longer confined to billboards, door-knocking, and public rallies. Campaigning has gone high-tech, and the most revolutionary change is being driven by AI in political campaigns. Artificial Intelligence—once considered the domain of tech companies and research labs—is now being used to sway public opinion, mobilize voters, and even rewrite the rules of democratic engagement. AI tools are helping political strategists refine their messaging, identify swing voters with astonishing precision, and tailor outreach in ways that were previously unimaginable. This shift has opened new frontiers in political data analytics and campaign optimization, but it also raises important ethical and legal questions.

Who controls the data? How transparent are the algorithms? Are we enhancing democracy or undermining it? This article takes a comprehensive look at the ways AI in political campaigns is being deployed around the world, spotlighting both the opportunities and the potential dangers. Microtargeting is the practice of using data to segment voters into highly specific groups based on demographics, behaviors, preferences, and psychographics. With AI, this becomes an extraordinarily powerful tool.
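In miniature, microtargeting of this kind amounts to grouping voters by shared attributes and matching a tailored message to each group. The sketch below is purely illustrative: the voter records, field names, segments, and message copy are all made-up examples, and a real system would learn segments with machine learning rather than grouping on two hand-picked attributes.

```python
# Toy sketch of voter microtargeting: group voters by attributes,
# then attach a tailored message to each segment.
# All data and messages here are hypothetical.
from collections import defaultdict

# Hypothetical voter records.
voters = [
    {"id": 1, "age": "30s", "area": "suburban", "issue": "health_care"},
    {"id": 2, "age": "50s", "area": "rural",    "issue": "jobs"},
    {"id": 3, "age": "30s", "area": "suburban", "issue": "health_care"},
    {"id": 4, "age": "20s", "area": "urban",    "issue": "housing"},
]

def segment(voters):
    """Group voter ids by (area, top issue) - a stand-in for
    the clustering a real ML pipeline would perform."""
    groups = defaultdict(list)
    for v in voters:
        groups[(v["area"], v["issue"])].append(v["id"])
    return dict(groups)

# Hypothetical message copy, keyed by segment.
messages = {
    ("suburban", "health_care"): "Our plan lowers family health costs.",
    ("rural", "jobs"):           "We will bring jobs back to your county.",
}

for key, ids in segment(voters).items():
    msg = messages.get(key, "General get-out-the-vote message.")
    print(key, ids, "->", msg)
```

The point of the sketch is the shape of the pipeline, not the grouping rule: real microtargeting replaces the two-attribute key with models trained on far richer behavioral and psychographic data.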

Machine learning algorithms can analyze massive amounts of data to detect patterns and correlations that human analysts could easily miss. For example, AI systems might identify that suburban mothers in their 30s who shop for organic food are more receptive to messages about health care reform. Similarly, rural voters concerned about unemployment might respond best to messages focused on job creation.

A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points...
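To make concrete what a "shift in preferences measured in percentage points" means in experiments like these, here is a back-of-the-envelope sketch of the arithmetic: average the pre-to-post change in expressed support within each randomized arm. The numbers below are invented for illustration and are not data from either study.

```python
# Illustrative sketch: computing the average opinion shift in a
# randomized chatbot-persuasion experiment. Data are made up.

def mean(xs):
    return sum(xs) / len(xs)

# Each participant: support for a candidate on a 0-100 scale,
# measured (before, after) a conversation with the chatbot.
pro_arm = [(40, 52), (55, 60), (30, 41)]  # bot argued FOR the candidate
con_arm = [(60, 50), (45, 44), (70, 58)]  # bot argued AGAINST

def avg_shift(arm):
    """Mean post-minus-pre change in support, in points."""
    return mean([post - pre for pre, post in arm])

pro_shift = avg_shift(pro_arm)  # positive: moved toward the candidate
con_shift = avg_shift(con_arm)  # negative: moved away

print(f"pro-arm shift: {pro_shift:+.1f} points")
print(f"con-arm shift: {con_shift:+.1f} points")
```

The actual studies compare such shifts against control conditions and across elections in several countries; this sketch only shows the basic quantity being reported.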

The LLMs’ persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates’ policy positions. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously, “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science.

In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed... They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

Controversial uses of Artificial Intelligence (AI) in elections have made headlines globally. Whether it’s fully AI-generated mayoral contenders, incarcerated politicians using AI to hold speeches from prison, or deepfakes used to falsely incriminate candidates, it’s clear that the technology is here to stay.

Yet, these viral stories only show one side of the picture. Beyond the headlines, AI is also starting to be used in the quieter parts of elections: the day-to-day work of electoral management, from information provision and data analysis to planning, administration and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems. The International Institute for Democracy and Electoral Assistance (International IDEA) has seized this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized, while avoiding... Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacities and to jointly envision how best to approach... These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it.

Nearly half of all workshop participants rated their understanding of AI as low, yet a third of the participating organizations indicated that they are already using AI in their election-related processes. Both AI skeptics and enthusiasts shared a cautious outlook during the workshops. Furthermore, EMBs have flagged an immense dual burden: developing internal capacity to embrace technological innovation while also mitigating disruptions to electoral information integrity by bad-faith actors. Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet these offers are often driven by commercial incentives and speedy deployment timelines, and not all products are designed with the specific legal, technical and human-rights sensitivities of elections in mind.

With something as sacred as elections, it has become ever more important that the products on offer give due consideration to election-specific requirements for cybersecurity, data protection, accuracy and other human rights... For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions. AI is also contributing to broader changes in the electoral environment that extend far beyond electoral administration. Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic images, audio and video deepfakes. While not all of these are used with harmful intention, in many contexts they have been used to confuse voters, defame competing candidates or manipulate public debate, resulting in public disillusionment and fatigue around...

Propagandists are pragmatists and innovators. Political marketing is a game in which the cutting edge can be the margin between victory and defeat.

Generative Artificial Intelligence (GenAI) features prominently for those in the political marketing space as they add new tools to their strategic kit. However, given generative AI’s novelty, much of the conversation about its use in digital politicking is speculative. Observers are taking stock of the roles generative artificial intelligence is already playing in U.S. politics and the way it may impact highly contested elections in 2024 and in years to come. Amid policymakers’ and the public’s concerns, there is an urgent need for empirical research on how generative AI is used for the purposes of political communication and corresponding efforts to manipulate public opinion. To better understand major trends and common concerns – such as generative AI’s role in the rapid production of disinformation, the enabling of hyper-targeted political messaging, and the misrepresentation of political figures via synthetic...

These interviews were conducted between January and April 2024 with campaign consultants from both major political parties, vendors of political generative AI tools, a political candidate utilizing generative AI for her campaign, a digital... Who is using generative AI in the political space? How are they using generative AI in the political space?
