AI Elections: Navigating the Future of Democracy in 2024

Bonisiwe Shabane

AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections.

By Deni Ellis Béchard, edited by Claire Cameron

Forget door knocks and phone banks: chatbots could be the future of persuasive political campaigns.

Last week’s leak of the U.S. Department of Education’s proposed “Compact for Academic Excellence in Higher Education” drew intense reactions across academia. Critics call it government overreach that threatens free expression, while supporters see a chance for reform and renewed trust between universities and policymakers. Danielle Allen, James Bryant Conant University Professor at Harvard University and director of the Democratic Knowledge Project and the Allen Lab for Democracy Renovation, weighs in.

Amid rising illiberalism, Danielle Allen urges a new agenda to renew democracy by reorienting institutions, policymaking, and civil society around the intentional sharing of power. Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools that enhance democratic engagement, from connection to action.

Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful. Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical “how-to’s” of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we’ll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively.

As we approach the 2024 elections, the influence of artificial intelligence (AI) on democratic processes is a topic of increasing concern.

This discussion brings together leading experts to analyze the multifaceted roles of AI in shaping elections, from the spread of misinformation to the potential for enhancing voter engagement. Join us as we delve into the ethical, legal, and technological challenges posed by AI, and explore strategies for safeguarding the integrity of our elections. Key themes include:

AI’s dual role: AI can both enhance and undermine democratic processes, posing new challenges to election integrity.
The misinformation threat: AI can generate highly realistic fake content, making it harder to distinguish truth from falsehood.
Algorithmic bias: AI algorithms can perpetuate existing biases, leading to unequal or unfair outcomes in elections.
Voter manipulation: AI-driven microtargeting can be used to manipulate voters with personalized disinformation.

There is great public concern about the potential use of generative artificial intelligence (AI) for political persuasion and the resulting impacts on elections and democracy [1,2,3,4,5,6]. We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes. In the context of the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election, we assigned participants randomly to have a conversation with an AI model that advocated... We observed significant treatment effects on candidate preference that are larger than typically observed from traditional video advertisements [7,8,9]. We also document large persuasion effects on Massachusetts residents’ support for a ballot measure legalizing psychedelics. Examining the persuasion strategies [9] used by the models indicates that they persuade with relevant facts and evidence, rather than using sophisticated psychological persuasion techniques.

Not all facts and evidence presented, however, were accurate; across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims. Together, these findings highlight the potential for AI to influence voters and the important role it might play in future elections.

The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined.

As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions. These models may appear neutral—politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet. At the same time, they operate as black boxes, designed and trained in ways users can’t see. Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole.

And we don’t yet know the extent of that influence.

Controversial uses of artificial intelligence (AI) in elections have made headlines globally. Whether it’s fully AI-generated mayoral contenders, incarcerated politicians using AI to deliver speeches from prison, or deepfakes used to falsely incriminate candidates, it’s clear that the technology is here to stay. Yet these viral stories show only one side of the picture. Beyond the headlines, AI is also starting to be used in the quieter parts of elections, the day-to-day work of electoral management - from information provision and data analysis to planning, administration and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems.

The International Institute for Democracy and Electoral Assistance (International IDEA) has been seizing this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized, while avoiding... Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacities and to jointly envision how to best approach... These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it. Nearly half of all workshop participants rated their understanding of AI as low, yet a third of the participating organizations indicated that they are already using AI in their election-related processes. Nevertheless, both AI skeptics and enthusiasts shared a cautious outlook during the workshops.

Furthermore, EMBs have been flagging an immense dual burden: developing internal capacity to embrace technological innovation while also mitigating disruptions to electoral information integrity by bad-faith actors. Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet these offers can often be driven by commercial incentives and speedy deployment timelines, and not all products are designed with the specific legal, technical and human-rights sensitivities of elections in mind. With something as sacred as elections, it has become ever more important that the products on offer give due consideration to election-related sensitivities around cybersecurity, data protection, accuracy and other human rights... For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions. AI is also contributing to broader changes in the electoral environment that extend far beyond the process of electoral administration.

Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic images, audio and video deepfakes. While not all of these are used with harmful intention, in many contexts they have been used to confuse voters, defame competing candidates or manipulate public debate, resulting in public disillusionment and fatigue around...

A conversation with a chatbot can shift people’s political views, but the most persuasive models also spread the most misinformation. In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began.

Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing... The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections.

“One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says.
