Researchers Find What Makes AI Chatbots Politically Persuasive

Bonisiwe Shabane

A massive study of political persuasion shows AIs have, at best, a weak effect. Roughly two years ago, Sam Altman tweeted that AI systems would be capable of superhuman persuasion well before achieving general intelligence—a prediction that raised concerns about the influence AI could have over democratic elections. To see if conversational large language models can really sway the political views of the public, scientists at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and many other institutions performed by far the largest study of AI persuasion to date. It turned out political AI chatbots fell far short of superhuman persuasiveness, but the study raises more nuanced issues about our interactions with AI.

The public debate about AI's impact on politics has largely revolved around notions drawn from dystopian sci-fi. Large language models have access to essentially every fact and story ever published about any issue or candidate.

They have processed information from books on psychology, negotiations, and human manipulation. They can rely on absurdly high computing power in huge data centers worldwide. On top of that, they can often access tons of personal information about individual users thanks to hundreds upon hundreds of online interactions at their disposal. Talking to a powerful AI system is basically interacting with an intelligence that knows everything about everything, as well as almost everything about you. When viewed this way, LLMs can indeed appear kind of scary. The goal of this new gargantuan AI persuasiveness study was to break such scary visions down into their constituent pieces and see if they actually hold water.

AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections.

By Deni Ellis Béchard; edited by Claire Cameron

[Photo caption: Stickers sit on a table during in-person absentee voting on November 1, 2024, in Little Chute, Wisconsin, ahead of Election Day on Tuesday, November 5.]

Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns.

American Association for the Advancement of Science (AAAS): Even small, open-source AI chatbots can be effective political persuaders, according to a new study. The findings provide a comprehensive empirical map of the mechanisms behind AI political persuasion, revealing that post-training and prompting – not model scale and personalization – are the dominant levers. The study also reveals evidence of a persuasion–accuracy tradeoff, reshaping how policymakers and researchers should conceptualize the risks of persuasive AI. There is growing concern among many that advances in AI – particularly conversational large language models (LLMs) – may soon give machines significant persuasive power over human beliefs at unprecedented scale.

However, just how persuasive these systems truly are, and the underlying mechanisms that make them so, remain largely unknown. To explore these risks, Kobi Hackenburg and colleagues investigated three central questions: whether larger and more advanced models are inherently more persuasive; whether smaller models can be made highly persuasive through targeted post-training; and which prompting and personalization strategies most affect persuasion. Hackenburg et al. conducted three large-scale survey experiments involving nearly 77,000 participants who conversed with 19 different LLMs – ranging from small open-source systems to state-of-the-art “frontier” models – on hundreds of political issues. They also tested multiple prompting strategies and several post-training methods, and assessed how each “lever” affected persuasive impact and factual accuracy. According to the findings, model size and personalization (providing the LLM with information about the user) produced small but measurable effects on persuasion.

Post-training techniques and simple prompting strategies, on the other hand, increased persuasiveness dramatically – by as much as 51% and 27%, respectively. Once post-trained, even small, open-source models could rival large frontier models in shifting political attitudes. Hackenburg et al. found AI systems are most persuasive when they deliver information-rich arguments; roughly half of the variance in persuasion effects across models and methods could be traced to this single factor. However, the authors also discovered a notable tradeoff: models and prompting strategies that were effective in boosting persuasiveness often did so at the expense of truthfulness, showing that optimizing an AI model for influence can come at the cost of factual accuracy.

In a Perspective, Lisa Argyle discusses this study and its companion study, published in Nature, in greater detail. Special note: a related paper with overlapping authors and themes, “Persuading voters using human–artificial intelligence dialogues,” will be published in Nature on the same day – Thursday, 4 December, U.S. Eastern Time. For the related paper, please refer to the Nature Press Site (http://press.nature.com) or contact the Nature Press Office team at press@nature.com. The Science paper is titled “The levers of political persuasion with conversational artificial intelligence.”

A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by roughly 10 percentage points.

The LLMs’ persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates’ policy positions. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously, “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science.

In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed AI chatbots to advocate for one of two opposing candidates in an upcoming election. They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

A conversation with a chatbot can shift people's political views—but the most persuasive models also spread the most misinformation. In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them.

“Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began. Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things.

The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. “One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says.

Chiara Vargiu and Alessandro Nai are in the Amsterdam School of Communication Research (ASCoR), University of Amsterdam, 1018 WV Amsterdam, the Netherlands.

The world is getting used to ‘talking’ to machines. Technology that just months ago seemed improbable or marginal has erupted quickly into the everyday lives of millions, perhaps billions, of people. Generative conversational artificial-intelligence systems, such as OpenAI’s ChatGPT, are being used to optimize tasks, plan holidays and seek advice on matters ranging from the trivial to the existential. Against this backdrop, the urgent question is: can the same conversational skills that make AI into helpful assistants also turn them into powerful political actors? In a pair of studies in Nature and Science, researchers show that dialogues with large language models (LLMs) can shift people’s attitudes towards political candidates and policy issues. The researchers also identify which features of conversational AI systems make them persuasive, and what risks they might pose for democracy.

Two separate studies published today in the journals Science and Nature arrived at the same conclusion: artificial intelligence chatbots can be very persuasive at molding a person’s political opinion, with researchers pointing to the implications this could have for future elections. In the Nature study, researchers explained how they programmed various chatbots to advocate for specific political candidates in the 2024 U.S. presidential election and the 2025 national elections in Canada and Poland. They discovered that though the chatbots could further entrench people regarding their preferred political candidate, they were also sometimes successful in swaying voters to change their minds or influencing undecided voters.

For the U.S. study, 2,306 participants stated their preference for either Donald Trump or Kamala Harris, after which they were randomly assigned a chatbot that advocated for one of the two. The same setup was run in Canada, with the chatbots backing either Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre. In Poland, it was between the Civic Coalition’s candidate Rafał Trzaskowski and the Law and Justice party’s candidate Karol Nawrocki. In each experiment, the bot’s primary objective was to increase support for the candidate it was backing or, if the participant preferred the other politician, to decrease support for that candidate. The bots were instructed to be “positive, respectful and fact-based; to use compelling arguments and analogies to illustrate its points and connect with its partner; to address concerns and counter arguments in a thoughtful manner.”
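The basic design described above—measure a participant's attitude before and after a chatbot conversation, against a group that had no such conversation—can be sketched as a simple difference-in-means estimate. This is an illustrative sketch with hypothetical data and function names, not the papers' actual (more involved) analysis:

```python
import statistics

def persuasion_effect(treated, control):
    """Mean pre-to-post attitude shift in the treatment group,
    minus the mean shift in the control group.
    Each argument is a list of (pre, post) attitude ratings, e.g. 0-100
    support for a candidate."""
    def mean_shift(pairs):
        return statistics.mean(post - pre for pre, post in pairs)
    return mean_shift(treated) - mean_shift(control)

# Hypothetical ratings: support for a candidate on a 0-100 scale.
treated = [(40, 48), (55, 60), (30, 41)]   # conversed with a persuasive chatbot
control = [(42, 43), (50, 49), (35, 36)]   # no conversation
print(round(persuasion_effect(treated, control), 2))  # → 7.67
```

Random assignment is what lets the subtraction work: any background drift in attitudes shows up in both groups and cancels out, leaving the shift attributable to the conversation.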

The upshot was that the AI could, at times, sway a person to think differently, although mostly when presenting fact-based arguments and evidence, not when appealing to the participant’s sense of right and wrong. This was where the researchers became concerned, since the bots weren’t always able to present factual information. And though these bots were explicitly tasked with persuading, in a real-world deployment such bias could simply be programmed into the AI.

Artificial intelligence chatbots are very good at changing people’s political opinions, according to a study published Thursday, and are particularly persuasive when they use inaccurate information. The researchers used a crowd-sourcing website to find nearly 77,000 people to participate in the study and paid them to interact with various AI chatbots, including some built on AI models from OpenAI, Meta and other companies. The researchers asked for people’s views on a variety of political topics, such as taxes and immigration, and then, regardless of whether the participant was conservative or liberal, a chatbot tried to change their minds.

The researchers found not only that the AI chatbots often succeeded, but also that some persuasion strategies worked better than others. “Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” lead author Kobi Hackenburg, a doctoral student at the University of Oxford, said in a statement about the study. The study is part of a growing body of research into how AI could affect politics and democracy, and it comes as politicians, foreign governments and others are trying to figure out how they might use such tools.
