Chatbots Can Meaningfully Shift Political Opinions, Studies Find
A conversation with a chatbot can shift people's political views—but the most persuasive models also spread the most misinformation. In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began. Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it.
A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. “One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says.
AI Chatbots Are Shockingly Good at Political Persuasion. Chatbots can measurably sway voters’ choices, new research shows, and the findings raise urgent questions about AI’s role in future elections. By Deni Ellis Béchard, edited by Claire Cameron.
Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns. It was September 2024, and an undecided voter was explaining to an AI chatbot why they were leaning toward supporting Kamala Harris over Donald Trump in the upcoming presidential election. “I don’t know much about Harris,” the voter admitted. “... However, with Trump, he is associated with a lot of bad things. So, I do not feel he is trustworthy right now.”
Chiara Vargiu and Alessandro Nai are in the Amsterdam School of Communication Research (ASCoR), University of Amsterdam, 1018 WV Amsterdam, the Netherlands. The world is getting used to ‘talking’ to machines. Technology that just months ago seemed improbable or marginal has erupted quickly into the everyday lives of millions, perhaps billions, of people. Generative conversational artificial-intelligence systems, such as OpenAI’s ChatGPT, are being used to optimize tasks, plan holidays and seek advice on matters ranging from the trivial to the existential — a quiet exchange of words... Against this backdrop, the urgent question is: can the same conversational skills that make AI into helpful assistants also turn them into powerful political actors?
In a pair of studies in Nature and Science, researchers show that dialogues with large language models (LLMs) can shift people’s attitudes towards political candidates and policy issues. The researchers also identify which features of conversational AI systems make them persuasive, and what risks they might pose for democracy. A massive study of political persuasion shows AIs have, at best, a weak effect. Roughly two years ago, Sam Altman tweeted that AI systems would be capable of superhuman persuasion well before achieving general intelligence—a prediction that raised concerns about the influence AI could have over democratic elections.
To see if conversational large language models can really sway the political views of the public, scientists at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and many other institutions performed by far the largest study of AI political persuasion to date. It turned out political AI chatbots fell far short of superhuman persuasiveness, but the study raises some more nuanced issues about our interactions with AI. The public debate about the impact AI has on politics has largely revolved around notions drawn from dystopian sci-fi. Large language models have access to essentially every fact and story ever published about any issue or candidate. They have processed information from books on psychology, negotiation, and human manipulation. They can rely on absurdly high computing power in huge data centers worldwide.
On top of that, they can often access tons of personal information about individual users thanks to hundreds upon hundreds of online interactions at their disposal. Talking to a powerful AI system is basically interacting with an intelligence that knows everything about everything, as well as almost everything about you. When viewed this way, LLMs can indeed appear kind of scary. The goal of this new gargantuan AI persuasiveness study was to break such scary visions down into their constituent pieces and see if they actually hold water. In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in Nature, told me. Rand didn’t stop with the U.S.
general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both... The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.” The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or...
Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence was accurate. In fact, the most persuasive chatbots were also the least accurate. Independent experts told me that Rand’s two studies join a growing body of research indicating that generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, can draw... Granted, caveats exist. It’s unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they’re voting for, especially when they’re not being paid to participate in a study. The studies didn’t test chatbots against more forceful types of persuasion, such as a pamphlet or a human canvasser, Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with the research, told me.
Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me. AI could very well be different—the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections—but Pan cautioned that it’s too early to say whether a chatbot with a clear link to a candidate would be of much use. Artificial intelligence chatbots are very good at changing people’s political opinions, according to a study published Thursday, and are particularly persuasive when they use inaccurate information. The researchers used a crowd-sourcing website to find nearly 77,000 people to participate in the study and paid them to interact with various AI chatbots, including some using AI models from OpenAI, Meta and... The researchers asked for people’s views on a variety of political topics, such as taxes and immigration, and then, regardless of whether the participant was conservative or liberal, a chatbot tried to change their views.
The researchers found not only that the AI chatbots often succeeded, but also that some persuasion strategies worked better than others. “Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” lead author Kobi Hackenburg, a doctoral student at the University of Oxford, said in a statement about the study. The study is part of a growing body of research into how AI could affect politics and democracy, and it comes as politicians, foreign governments and others are trying to figure out how they... Two separate studies published today in the journals Science and Nature arrived at the same conclusion: Artificial intelligence chatbots can be very persuasive at molding a person’s political opinion, with researchers pointing to the... In the Nature study, researchers explained how they programmed various chatbots to advocate for specific political candidates in the 2024 U.S. presidential election and the 2025 national elections in Canada and Poland.
They discovered that though the chatbots could further entrench people in support of their preferred political candidate, they were also sometimes successful in swaying voters to change their minds or in influencing undecided voters. For the U.S. study, 2,306 participants stated their preference for either Donald Trump or Kamala Harris, after which they were randomly assigned a chatbot that advocated for one of the two. The same setup was run in Canada, with the chatbots backing either Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre. In Poland, it was between the Civic Coalition’s candidate Rafał Trzaskowski and the Law and Justice party’s candidate Karol Nawrocki. For each experiment, the bot’s primary objective was to increase support for the candidate it was backing or, if the participant preferred the other politician, to decrease support for that rival.
The bots had to be “positive, respectful and fact-based; to use compelling arguments and analogies to illustrate its points and connect with its partner; to address concerns and counter arguments in a thoughtful manner... The upshot was that the AI could, at times, sway the person to think differently, although mostly when it presented fact-based arguments and evidence, not when it appealed to the participant’s sense of right and wrong. This is where the researchers grew concerned, since the bots weren’t always able to present factual information. And though these bots were merely instructed to persuade, in a real-life scenario bias could be deliberately programmed into an AI.