The Levers of Political Persuasion with Conversational Artificial Intelligence
There are widespread fears that conversational artificial intelligence (AI) could soon exert unprecedented influence over human beliefs.
In this work, in three large-scale experiments (N = 76,977 participants), we deployed 19 large language models (LLMs), including some post-trained explicitly for persuasion, to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. We show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods (which boosted persuasiveness by as much as 51% and 27%, respectively) than from personalization or model scale. We further show that these methods increased persuasion by exploiting LLMs' ability to rapidly access and strategically deploy information and that, notably, where they increased AI persuasiveness, they also systematically decreased factual accuracy. American Association for the Advancement of Science (AAAS): Even small, open-source AI chatbots can be effective political persuaders, according to a new study.
The findings provide a comprehensive empirical map of the mechanisms behind AI political persuasion, revealing that post-training and prompting – not model scale and personalization – are the dominant levers. The work also reveals evidence of a persuasion-accuracy tradeoff, reshaping how policymakers and researchers should conceptualize the risks of persuasive AI. There is growing concern that advances in AI – particularly conversational large language models (LLMs) – may soon give machines significant persuasive power over human beliefs at unprecedented scale. However, just how persuasive these systems truly are, and the underlying mechanisms that make them so, remain largely unknown. To explore these risks, Kobi Hackenburg and colleagues investigated three central questions: whether larger and more advanced models are inherently more persuasive; whether smaller models can be made highly persuasive through targeted post-training; and whether gains in persuasiveness come at the cost of factual accuracy. Hackenburg et al.
conducted three large-scale survey experiments involving nearly 77,000 participants who conversed with 19 different LLMs – ranging from small open-source systems to state-of-the-art “frontier” models – on hundreds of political issues. They also tested multiple prompting strategies and several post-training methods, and assessed how each “lever” affected persuasive impact and factual accuracy. According to the findings, model size and personalization (providing the LLM with information about the user) produced small but measurable effects on persuasion. Post-training techniques and simple prompting strategies, on the other hand, increased persuasiveness dramatically, by as much as 51% and 27%, respectively. Once post-trained, even small, open-source models could rival large frontier models in shifting political attitudes. Hackenburg et al.
found AI systems are most persuasive when they deliver information-rich arguments. Roughly half of the variance in persuasion effects across models and methods could be traced to this single factor. However, the authors also discovered a notable tradeoff: models and prompting strategies that were effective in boosting persuasiveness often did so at the expense of truthfulness, showing that optimizing an AI model for influence can come at the cost of factual accuracy.
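As a rough sketch of the kind of analysis behind such a claim, one can regress per-conversation persuasion effects on the density of factual claims and read off the share of variance explained. The data, effect sizes, and variable names below are synthetic stand-ins for illustration, not the study's actual data or statistical model.

```python
import numpy as np

# Synthetic illustration: persuasion effect vs. number of factual claims.
rng = np.random.default_rng(1)
n_claims = rng.poisson(12, size=500)                    # claims per conversation
effect = 0.4 * n_claims + rng.normal(0, 2.5, size=500)  # attitude shift in points

# Least-squares fit and R^2: the share of variance in persuasion effects
# attributable to information density (the study reports roughly half).
slope, intercept = np.polyfit(n_claims, effect, 1)
pred = slope * n_claims + intercept
r2 = 1 - np.sum((effect - pred) ** 2) / np.sum((effect - effect.mean()) ** 2)
print(f"R^2 = {r2:.2f} (share of variance explained by claim count)")
```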
In a Perspective, Lisa Argyle discusses this study and its companion study, published in Nature, in greater detail. Special note / related paper in Nature: A paper with overlapping authors and on related themes, “Persuading voters using human–artificial intelligence dialogues,” will be published in Nature on the same day and time, Thursday, 4 December, U.S. Eastern Time. For the related paper, please refer to the Nature press site (http://press.nature.com) or contact the Nature press office at press@nature.com.
Today, we published in Science the results of a study carried out with colleagues at the Oxford Internet Institute, the London School of Economics, Stanford University, and MIT, examining how conversational AI can shape political attitudes. Through three large‑scale experiments with over 76,000 participants, we tested the persuasiveness of 19 AI models on more than 700 political issues. Our goal was to understand the levers of persuasion with conversational AI: what makes it effective, and under what conditions. We were interested in answering questions like: Is persuasion mainly driven by model size? Do personalisation and microtargeting matter? Can models be post‑trained to become more persuasive?
Which rhetorical strategies are most effective? Conversational AI systems can now generate detailed, well‑structured arguments instantly and hold interactive discussions that feel tailored and engaging. While this creates opportunities for useful applications, it also raises the possibility that AI could influence what people think and do. Although there is currently little evidence that such systems are being used to maliciously persuade people at scale, that may change as the technology advances. Understanding the mechanisms that make AI persuasive allows us to identify where genuine risks lie, rather than relying on assumptions or speculation. This evidence is essential for designing safeguards and standards that keep people safe.
Across three experiments, participants engaged in back‑and‑forth conversations with one of 19 open‑ and closed‑source language models. In controlled conditions, models were instructed to persuade the participant to agree with one of 707 issue stances, using one of eight different rhetorical strategies. These strategies included information-focused argumentation, storytelling, and moral reframing.
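A minimal sketch of how such a condition assignment might look in code, assuming a simple system-prompt template; the strategy descriptions and issue stances below are illustrative placeholders, not the study's actual materials (which covered eight strategies and 707 stances).

```python
import random

# Illustrative placeholders: the study used eight rhetorical strategies
# and 707 issue stances; only a few examples are sketched here.
STRATEGIES = {
    "information": "Support the stance with as many specific facts and figures as possible.",
    "storytelling": "Persuade through a vivid, concrete narrative rather than statistics.",
    "moral_reframing": "Frame the argument around the values your interlocutor holds.",
}
ISSUE_STANCES = [
    "The government should expand investment in renewable energy.",
    "Voting should be mandatory for all eligible citizens.",
]

def build_system_prompt(stance: str, strategy: str) -> str:
    """Compose the condition-specific instruction handed to the model."""
    return (
        "You are chatting with a survey participant. "
        f"Persuade them to agree with this stance: {stance} "
        f"Rhetorical approach: {STRATEGIES[strategy]}"
    )

# Randomly assign a participant to an issue x strategy condition.
print(build_system_prompt(random.choice(ISSUE_STANCES),
                          random.choice(list(STRATEGIES))))
```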
This paper presents a comprehensive empirical investigation into the determinants of political persuasiveness in conversational AI, leveraging three large-scale experiments (N = 76,977) and 19 LLMs across 707 political issues.
The paper systematically interrogates the effects of model scale, post-training, prompting, and personalization on persuasive efficacy, and quantifies the trade-off between persuasiveness and factual accuracy in AI-generated political discourse. A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points. The LLMs’ persuasiveness comes not from mastery of psychological manipulation but from the sheer number of claims they marshal in support of candidates’ policy positions. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S.
Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously: “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science. In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed LLM-powered chatbots to promote one side of an election contest. They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions.
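A minimal sketch of how such an opinion shift can be quantified, assuming attitudes are rated on a 0-100 agreement scale and treated participants are compared against a no-conversation control group; the numbers below are simulated purely for illustration.

```python
import numpy as np

def persuasion_effect(treated_post, control_post):
    """Average treatment effect on post-conversation attitudes,
    in points on a 0-100 agreement scale."""
    return np.mean(treated_post) - np.mean(control_post)

# Simulated post-conversation agreement scores (0-100) for illustration.
rng = np.random.default_rng(0)
control_post = rng.normal(50, 15, size=1000).clip(0, 100)
treated_post = rng.normal(56, 15, size=1000).clip(0, 100)  # shifted by the chatbot

print(f"Estimated persuasion effect: "
      f"{persuasion_effect(treated_post, control_post):.1f} points")
```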
The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election, and the 2025 Polish presidential election. Authors: Kobi Hackenburg, Ben M. Tappin, Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand, Christopher Summerfield. Subjects: Computation and Language; Artificial Intelligence; Computers and Society; Human-Computer Interaction.
Kobi Hackenburg is a PhD candidate in Social Data Science and a Clarendon Scholar at the OII. His doctoral research evaluates the capabilities of artificial intelligence systems to influence human attitudes, behaviour, and cognition. Helen Margetts is Professor of Society and the Internet, a political scientist specialising in digital government and politics. She was Director of the OII from 2011 to 2018 and is a Professorial Fellow of Mansfield College.