AI in Politics: How Machine Learning Shapes Public Opinion
Artificial intelligence is reshaping politics in ways we never imagined. From targeted ads to deepfake propaganda, machine learning algorithms are shaping public opinion more than ever. But how does this work, and what are the consequences? In this deep dive, we'll explore the key ways AI is being used to influence voters, the ethical concerns surrounding its use, and what the future holds for democracy in the age of AI.

Gone are the days of one-size-fits-all political ads. AI-driven microtargeting allows campaigns to analyze vast amounts of data on voters' interests, demographics, and behaviors to create hyper-personalized messaging.
For example, a swing voter in Ohio might see a Facebook ad emphasizing economic policies, while a young voter in California could receive content about climate change. The goal? To influence each person based on their specific concerns.

AI doesn't just create ads; it tests them in real time. Machine learning algorithms analyze which wording, visuals, and emotional appeals perform best and adjust accordingly. This means campaigns can continuously optimize their persuasion strategies.
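The real-time testing loop described above is, at its core, a multi-armed bandit problem: show variants, observe responses, shift traffic toward what works. Below is a minimal, purely illustrative epsilon-greedy sketch; the variant names, click bookkeeping, and 10% exploration rate are all invented for illustration and do not come from any campaign's actual system.

```python
import random

# Hypothetical ad variants a campaign might test (invented names).
VARIANTS = ["economy_focus", "climate_focus", "security_focus"]
EPSILON = 0.1  # fraction of traffic reserved for exploring other variants

clicks = {v: 0 for v in VARIANTS}
shows = {v: 0 for v in VARIANTS}

def choose_variant():
    """Mostly show the best-performing ad so far; occasionally explore."""
    if random.random() < EPSILON or not any(shows.values()):
        return random.choice(VARIANTS)
    # Exploit: pick the variant with the highest observed click-through rate.
    return max(VARIANTS, key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)

def record_impression(variant, clicked):
    """Update counts after each impression so future choices improve."""
    shows[variant] += 1
    if clicked:
        clicks[variant] += 1
```

The epsilon parameter trades off learning (trying under-tested messages) against earning (showing the current winner); real systems use more sophisticated schemes such as Thompson sampling, but the feedback loop is the same.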
A short interaction with a chatbot can meaningfully shift a voter's opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers, with experiments conducted in four countries, demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters' preferences by 10 percentage points. The LLMs' persuasiveness comes not from being masters of psychological manipulation, but from the sheer number of claims they generate in support of candidates' policy positions. "LLMs can really move people's attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side," said David Rand '04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers.
"But those claims aren't necessarily accurate – and even arguments built on accurate claims can still mislead by omission." The researchers reported these findings Dec. 4 in two papers published simultaneously: "Persuading Voters Using Human-Artificial Intelligence Dialogues," in Nature, and "The Levers of Political Persuasion with Conversational Artificial Intelligence," in Science. In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed chatbots to advocate for one of two opposing candidates or policy positions. They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants' opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S.
presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters' choices, new research shows. The findings raise urgent questions about AI's role in future elections. By Deni Ellis Béchard, edited by Claire Cameron.
Forget door knocks and phone banks: chatbots could be the future of persuasive political campaigns. Artificial intelligence chatbots are very good at changing people's political opinions, according to a study published Thursday, and are particularly persuasive when they use inaccurate information. The researchers used a crowd-sourcing website to recruit nearly 77,000 people for the study and paid them to interact with various AI chatbots, including some using AI models from OpenAI and Meta, among others. The researchers asked for people's views on a variety of political topics, such as taxes and immigration, and then, regardless of whether a participant was conservative or liberal, a chatbot tried to change their mind. The researchers found not only that the AI chatbots often succeeded, but also that some persuasion strategies worked better than others.
"Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues," lead author Kobi Hackenburg, a doctoral student at the University of Oxford, said in a statement about the study. The study is part of a growing body of research into how AI could affect politics and democracy, and it comes as politicians, foreign governments and others are trying to figure out how they might use it.

Not every analysis reads the evidence the same way, however. A massive study of political persuasion shows AIs have, at best, a weak effect. Roughly two years ago, Sam Altman tweeted that AI systems would be capable of superhuman persuasion well before achieving general intelligence, a prediction that raised concerns about the influence AI could have over democratic elections. To see whether conversational large language models can really sway the public's political views, scientists at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and many other institutions performed by far the largest study of AI persuasiveness to date. It turned out political AI chatbots fell far short of superhuman persuasiveness, but the study raises some more nuanced issues about our interactions with AI.
The public debate about the impact AI has on politics has largely revolved around notions drawn from dystopian sci-fi. Large language models have access to essentially every fact and story ever published about any issue or candidate. They have processed information from books on psychology, negotiation, and human manipulation. They can rely on absurdly high computing power in huge data centers worldwide. On top of that, they can often access heaps of personal information about individual users, accumulated over hundreds upon hundreds of online interactions. Talking to a powerful AI system is basically interacting with an intelligence that knows everything about everything, as well as almost everything about you.
When viewed this way, LLMs can indeed appear kind of scary. The goal of this new gargantuan AI persuasiveness study was to break such scary visions down into their constituent pieces and see whether they actually hold water.

Others argue that AI is eminently capable of political persuasion, could automate it at mass scale, and that we are not prepared. In January 2024, the phone rang in homes all around New Hampshire. On the other end was Joe Biden's voice, urging Democrats to "save your vote" by skipping the primary.
It sounded authentic, but it wasn’t. The call was a fake, generated by artificial intelligence. Today, the technology behind that hoax looks quaint. Tools like OpenAI’s Sora now make it possible to create convincing synthetic videos with astonishing ease. AI can be used to fabricate messages from politicians and celebrities—even entire news clips—in minutes. The fear that elections could be overwhelmed by realistic fake media has gone mainstream—and for good reason.
But that’s only half the story. The deeper threat isn’t that AI can just imitate people—it’s that it can actively persuade people. And new research published this week shows just how powerful that persuasion can be. In two large peer-reviewed studies, AI chatbots shifted voters’ views by a substantial margin, far more than traditional political advertising tends to do. In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift—from imitation to active persuasion—should worry us deeply.
In the months leading up to last year's presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward: let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all. The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21.
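Those fractions are easier to compare as percentages. A quick back-of-the-envelope check of the reported flip rates:

```python
# Flip rates reported in the Nature study:
# 1 in 35 initial non-supporters flipped toward Trump,
# 1 in 21 flipped toward Harris.
pro_trump_flip = 1 / 35
pro_harris_flip = 1 / 21

print(f"pro-Trump bot:  {pro_trump_flip:.1%} of initial non-supporters flipped")  # 2.9%
print(f"pro-Harris bot: {pro_harris_flip:.1%} of initial non-supporters flipped")  # 4.8%
```

A 3-5 percent flip rate may sound small, but in close national elections a persuasion channel of that size, applied cheaply at scale, would dwarf the effect typically attributed to traditional advertising.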
A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI "creates a lot of opportunities for manipulating people's beliefs and attitudes," David Rand, a senior author on the study, which was published today in Nature, told me. Rand didn't stop with the U.S. general election. He and his co-authors also tested AI bots' persuasive abilities in highly contested national elections in Canada and Poland, and the effects left Rand, who studies information sciences at Cornell, "completely blown away." In both countries, the bots proved effective. The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented.
"If you could do that at scale," Rand said, "it would really change the outcome of elections." The chatbots succeeded in changing people's minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn't be more powerful or more personalized to sway people. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most "evidence" in support of their argument, regardless of whether that evidence was accurate. In fact, the most persuasive chatbots were also the least accurate. Independent experts told me that Rand's two studies join a growing body of research indicating that generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, and able to draw on vast stores of information.
Granted, caveats exist. It's unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they're voting for, especially when they're not being paid to participate in a study. The studies didn't test chatbots against more forceful types of persuasion, such as a pamphlet or a human canvasser, noted Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with the research. Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me. AI could very well be different, as the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections, but Pan cautioned that it's too early to say whether a chatbot with a clear link to a candidate would be of much use.