AI Chatbots Can Effectively Sway Voters in Either Direction
A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds.

The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points...

The LLMs’ persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates’ policy positions.

“LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers.
“But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.”

The researchers reported these findings Dec. 4 in two papers published simultaneously, “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science.

In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed... They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

A conversation with a chatbot can shift people’s political views—but the most persuasive models also spread the most misinformation. In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began. Daniels didn’t ultimately win.
But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it.

A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things.

The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. “One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study.
LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says.

AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections. Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns.

Artificial-intelligence chatbots can influence voters in major elections — and have a bigger effect on people’s political views than conventional campaigning and advertising. A study published today in Nature found that participants’ preferences in real-world elections swung by up to 15 percentage points after conversing with a chatbot. In a related paper published in Science, researchers showed that these chatbots’ effectiveness stems from their ability to synthesize a lot of information in a conversational way.
The findings showcase the persuasive power of chatbots, which are used by more than one hundred million people each day, says David Rand, an author of both studies and a cognitive scientist at Cornell...

People conversing with chatbots about politics find those that dole out facts more persuasive than other bots, such as those that tell good stories. But these informative bots are also prone to lying. Laundry-listing facts rarely changes hearts and minds – unless a bot is doing the persuading.
Briefly chatting with an AI moved potential voters in three countries toward their less preferred candidate, researchers report December 4 in Nature. That finding held true even in the lead-up to the contentious 2024 presidential election between Donald Trump and Kamala Harris, with pro-Trump bots pushing Harris voters in his direction, and vice versa.

The most persuasive bots don’t need to tell the best story or cater to a person’s individual beliefs, researchers report in a related paper in Science. Instead, they simply dole out the most information. But those bloviating bots also dole out the most misinformation.

It was September 2024, and an undecided voter was explaining to an AI chatbot why they were leaning toward supporting Kamala Harris over Donald Trump in the upcoming presidential election.
“I don’t know much about Harris,” the voter admitted. “... However, with Trump, he is associated with a lot of bad things. So, I do not feel he is trustworthy right now.” In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in Nature, told me. Rand didn’t stop with the U.S.
general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both... The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.” The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or...
Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that... In fact, the most persuasive chatbots were also the least accurate. Independent experts told me that Rand’s two studies join a growing body of research indicating that generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, can draw...

Granted, caveats exist. It’s unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they’re voting for, especially when they’re not being paid to participate in a study. The studies didn’t test chatbots against more forceful types of persuasion, such as a pamphlet or a human canvasser, Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with the research, told me.
Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me. AI could very well be different—the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections—but Pan cautioned that it’s too early to say whether a chatbot with a clear link to a candidate would be of much use.