AI Chatbots Used Inaccurate Information to Change People's Political Opinions
Artificial intelligence chatbots are very good at changing people's political opinions, according to a study published Thursday, and are particularly persuasive when they use inaccurate information. The researchers used a crowd-sourcing website to find nearly 77,000 people to participate in the study and paid them to interact with various AI chatbots, including some using AI models from OpenAI, Meta and... The researchers asked for people's views on a variety of political topics, such as taxes and immigration, and then, regardless of whether the participant was conservative or liberal, a chatbot tried to change their views. The researchers found not only that the AI chatbots often succeeded, but also that some persuasion strategies worked better than others. “Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” lead author Kobi Hackenburg, a doctoral student at the University of Oxford, said in a statement about the study. The study is part of a growing body of research into how AI could affect politics and democracy, and it comes as politicians, foreign governments and others are trying to figure out how they...
AI Chatbots Are Shockingly Good at Political Persuasion. Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections. By Deni Ellis Béchard, edited by Claire Cameron.
Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns. A conversation with a chatbot can shift people's political views—but the most persuasive models also spread the most misinformation. In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began. Daniels didn’t ultimately win.
But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. “One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study.
LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says. Chiara Vargiu and Alessandro Nai are in the Amsterdam School of Communication Research (ASCoR), University of Amsterdam, 1018 WV Amsterdam, the Netherlands. The world is getting used to ‘talking’ to machines. Technology that just months ago seemed improbable or marginal has erupted quickly into the everyday lives of millions, perhaps billions, of people. Generative conversational artificial-intelligence systems, such as OpenAI’s ChatGPT, are being used to optimize tasks, plan holidays and seek advice on matters ranging from the trivial to the existential — a quiet exchange of words...
Against this backdrop, the urgent question is: can the same conversational skills that make AI into helpful assistants also turn them into powerful political actors? In a pair of studies in Nature and Science, researchers show that dialogues with large language models (LLMs) can shift people’s attitudes towards political candidates and policy issues. The researchers also identify which features of conversational AI systems make them persuasive, and what risks they might pose for democracy.
AIs are equally persuasive when they’re telling the truth or lying
People conversing with chatbots about politics find those that dole out facts more persuasive than other bots, such as those that tell good stories. But these informative bots are also prone to lying. Laundry-listing facts rarely changes hearts and minds – unless a bot is doing the persuading. Briefly chatting with an AI moved potential voters in three countries toward their less preferred candidate, researchers report December 4 in Nature. That finding held true even in the lead-up to the contentious 2024 presidential election between Donald Trump and Kamala Harris, with pro-Trump bots pushing Harris voters in his direction, and vice versa. The most persuasive bots don’t need to tell the best story or cater to a person’s individual beliefs, researchers report in a related paper in Science.
Instead, they simply dole out the most information. But those bloviating bots also dole out the most misinformation. University of Washington researchers recruited self-identifying Democrats and Republicans to make political decisions with help from three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias. Democrats and Republicans were both likelier to lean in the direction of the biased chatbot they were talking with than those who interacted with the base model. [Figure: a Democrat interacts with the conservative model. Fisher et al./ACL ’25]
If you’ve interacted with an artificial intelligence chatbot, you’ve likely realized that all AI models are biased. They were trained on enormous corpuses of unruly data and refined through human instructions and testing. Bias can seep in anywhere. Yet how a system’s biases can affect users is less clear. So a University of Washington study put it to the test. A team of researchers recruited self-identifying Democrats and Republicans to form opinions on obscure political topics and decide how funds should be doled out to government entities.
For help, they were randomly assigned three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias. Democrats and Republicans were both more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. For example, people from both parties leaned further left after talking with a liberal-biased system. But participants who had higher self-reported knowledge about AI shifted their views less — suggesting that education about these systems may help mitigate how much chatbots manipulate people. The team presented its research July 28 at the Association for Computational Linguistics conference in Vienna, Austria. A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds.
The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points... The LLMs’ persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates’ policy positions. “LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.”
The researchers reported these findings Dec. 4 in two papers published simultaneously: “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science. In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed AI chatbots to advocate for one presidential candidate or the other. They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.
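The randomized pre/post design the Nature team describes — record a participant's baseline support, assign them at random to a chatbot arguing for one side or the other, then measure the shift — can be sketched roughly as follows. This is a minimal illustration of that style of analysis, not the researchers' actual code; all function names, the 0–100 support scale, and the toy outcome model are hypothetical.

```python
import random

def persuasion_effect(participants, seed=0):
    """Estimate the average persuasion effect from a randomized
    pre/post design. Each participant reports support for a candidate
    (0-100) before chatting, is randomly assigned a 'pro' or 'anti'
    chatbot, and reports support again afterward."""
    rng = random.Random(seed)
    shifts = {"pro": [], "anti": []}
    for pre_score, chat in participants:
        arm = rng.choice(["pro", "anti"])   # random assignment
        post_score = chat(pre_score, arm)   # outcome of the conversation
        shifts[arm].append(post_score - pre_score)
    mean = lambda xs: sum(xs) / len(xs)
    # Effect = mean opinion shift under the pro bot minus the anti bot.
    return mean(shifts["pro"]) - mean(shifts["anti"])

# Hypothetical outcome model: the bot moves opinion 5 points its way.
def toy_chat(pre, arm):
    return pre + (5 if arm == "pro" else -5)

effect = persuasion_effect([(50, toy_chat)] * 1000)
```

With this toy outcome model the estimated effect is 10 points; in the real studies, of course, the shift comes from live conversations and is measured per participant rather than simulated.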