Candidates, Bots and Ballots: How AI Is Rewriting Political – Podtail

Bonisiwe Shabane

How should we address the governance gap between central banks controlling money and the oversight of cryptocurrency? How can decentralized crypto networks and centralized monetary authorities collaborate? And what’s next for digital finance? To explore these questions, Shane Tews is joined by Milton Mueller, Karim Farhat, and Vagisha Srivastava from the Jimmy and Rosalynn Carter School of Public Policy at Georgia Tech. Mueller is the cofounder and director of the Internet Governance Project at Georgia Tech, where he specializes in the political economy of the internet. Farhat is the assistant director of the Internet Governance Project, focusing primarily on the digital economy and cybersecurity.

Srivastava is a PhD student working on internet fragmentation. They are also joined by Nicoletta Kolpakov, director of the Cirrus Institute. This group’s extensive knowledge makes for an engaging and informative episode. Section 1033 of the Dodd-Frank Act is the foundation of open banking in the United States—giving individuals the right to access and share their own financial data with services of their choice. This rule seeks to increase consumer control, encourage competition, and make it easier to switch providers or use financial management tools. However, the Consumer Financial Protection Bureau—the agency responsible for implementing this provision—is now reconsidering how (or whether) it should be enforced.

In today’s discussion, we explore why Section 1033 has become a key focus of rulemaking and how changes to open banking policies could shift the balance of power between consumers, financial institutions, and emerging... To look into this, Shane Tews spoke with Penny Lee, president and CEO of the Financial Technology Association. Penny is also the cofounder of K Street Capital—an angel investment group in Washington, DC—and served as a senior advisor for former US Senate Majority Leader Harry Reid. She brings more than two decades of experience in the private and public sectors, making for an informative conversation. Bluesky Social is a social media app that began in 2019 as a project inside Twitter before becoming an independent company in 2021. Bluesky’s mission is to offer a decentralized experience in which algorithms are not imposed on users; instead, users choose their own content preferences.

The platform also highlights the importance of portability, enabling users to carry their social media ecosystems across different platforms. But what are the technical and social challenges to making true platform portability a reality? The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) such as ChatGPT, Claude, and Gemini are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy.

LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions. These models may appear neutral—politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet. At the same time, they operate as black boxes, designed and trained in ways users can’t see.

Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence. In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all. The bots were effective.

After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in Nature, told me. Rand didn’t stop with the U.S. general election.
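The “one in 35” and “one in 21” figures are simple conditional rates: among participants who initially opposed a candidate, the share who switched after the conversation. A minimal sketch in Python, with made-up numbers (the study’s actual data are not reproduced here), shows how such a flip rate is computed:

```python
# Sketch only: compute the "flip rate" among participants who initially
# opposed a candidate, using pre- and post-conversation survey responses.

def flip_rate(pre, post):
    """Share of initial 'no' respondents who switched to 'yes'.

    pre, post: parallel lists of booleans, True = would vote for the
    candidate, measured before and after the chatbot session.
    """
    initially_opposed = [(before, after) for before, after in zip(pre, post)
                         if not before]
    if not initially_opposed:
        return 0.0
    flipped = sum(1 for _, after in initially_opposed if after)
    return flipped / len(initially_opposed)

# Illustrative numbers only: 35 initial opponents, 1 of whom flips,
# matching the "one in 35" rate reported for the pro-Trump bot.
pre = [False] * 35
post = [True] + [False] * 34
print(round(flip_rate(pre, post), 3))  # 0.029
```

Note that this rate conditions on prior opposition; it says nothing about movement in the other direction, which the studies measured separately.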

He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both... The AI models took the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.” The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or... Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that...

In fact, the most persuasive chatbots were also the least accurate. Independent experts told me that Rand’s two studies join a growing body of research indicating that generative-AI models are, indeed, capable persuaders: These bots are patient, designed to be perceived as helpful, can draw... Granted, caveats exist. It’s unclear how many people would ever have such direct, information-dense conversations with chatbots about whom they’re voting for, especially when they’re not being paid to participate in a study. The studies didn’t test chatbots against more forceful types of persuasion, such as a pamphlet or a human canvasser, Jordan Boyd-Graber, an AI researcher at the University of Maryland who was not involved with... Traditional campaign outreach (mail, phone calls, television ads, and so on) is typically not effective at swaying voters, Jennifer Pan, a political scientist at Stanford who was not involved with the research, told me.

AI could very well be different—the new research suggests that the AI bots were more persuasive than traditional ads in previous U.S. presidential elections—but Pan cautioned that it’s too early to say whether a chatbot with a clear link to a candidate would be of much use. A short interaction with a chatbot can meaningfully shift a voter’s opinion about a presidential candidate or proposed policy in either direction, new Cornell research finds. The potential for artificial intelligence to affect election results is a major public concern. Two new papers – with experiments conducted in four countries – demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters’ preferences by 10 percentage points... The LLMs’ persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates’ policy positions.

“LLMs can really move people’s attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side,” said David Rand ’04, professor in the Cornell Ann S. Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business and the College of Arts and Sciences, and a senior author on both papers. “But those claims aren’t necessarily accurate – and even arguments built on accurate claims can still mislead by omission.” The researchers reported these findings Dec. 4 in two papers published simultaneously, “Persuading Voters Using Human-Artificial Intelligence Dialogues,” in Nature, and “The Levers of Political Persuasion with Conversational Artificial Intelligence,” in Science. In the Nature study, Rand, along with co-senior author Gordon Pennycook, associate professor of psychology and the Dorothy and Ariz Mehta Faculty Leadership Fellow in the College of Arts and Sciences, and colleagues, instructed...

They randomly assigned participants to engage in a back-and-forth text conversation with a chatbot promoting one side or the other and then measured any change in the participants’ opinions and voting intentions. The researchers repeated this experiment three times: in the 2024 U.S. presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election.

AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections.

By Deni Ellis Béchard, edited by Claire Cameron

[Photo caption: Stickers sit on a table during in-person absentee voting on November 1, 2024, in Little Chute, Wisconsin. Election day is Tuesday, November 5.]

Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns. GenAI is rewriting the rules of electioneering, turning campaigns into hyper-targeted, multilingual persuasion machines that blur the line between outreach and manipulation. The integration of Generative Artificial Intelligence (AI) into election campaigns has redefined the very architecture of political communication and persuasion.

As AI becomes deeply intertwined with the campaign process, its role extends beyond strategy to shaping voter perceptions. It is introducing unprecedented precision, scale, and personalisation in how campaigns engage with voters and influence public opinion across digital platforms. The emergence of Gen AI has heralded unprecedented changes in campaign-to-voter communication within contemporary electoral politics. As detailed by Florian Foos (2024), Gen AI offers significant opportunities to reduce costs in modern campaigns by assisting with the drafting of campaign communications, such as emails and text messages. A primary use case for this transformation is the capacity of multilingual AI systems to facilitate direct, dynamic exchanges with voters across linguistic and cultural boundaries. The Bhashini initiative, first introduced in India on 18 December 2023, is a prime example of this use case.

Prime Minister (PM) Narendra Modi used the tool that day during his address at Kashi Tamil Sangamam in Varanasi to translate his speech into Tamil live. With the integration of AI-driven communication tools, campaigns can now fundamentally alter conventional interaction paradigms, moving from broad mass messaging toward more personal, innovative, and highly targeted forms of digital outreach. The disruptive potential of AI in this domain is significantly amplified when campaigners can access individual-level personal contact data. AI-powered messaging tools can generate and deliver personalised content at scale, raising the possibility of both positive engagement and concerning intrusions into voter privacy. Notable examples from recent electoral practice include the widespread use of AI-generated fundraising emails in United States campaigns, as well as the deployment of AI-generated videos of political candidates in India making highly tailored...

These instances underscore the increasing prevalence and sophistication of dynamic, digital conversations between campaigns and their target electorate. Generative AI poses new challenges for political campaigning and our democracy as we head towards the 2024 presidential election. While this technology could streamline political messaging, the greater fear is that it could enable widespread manipulation and distortion of the democratic process. Heading into a contentious election, how can we assess and mitigate harms from AI-generated disinformation? How will the use of generative AI be different than prior “cheap fake” attempts? How should policymakers prepare for and respond to the use of AI in political advertising?

On this episode, Shane is joined by Scott Brennen and Matt Perault, co-authors of “The new political ad machine: Policy frameworks for political ads in an age of AI.” They discuss how generative AI... As AI makes creating convincing fakes easier than ever, democracy faces a crisis of truth that threatens informed citizenship. Politico's Bots and Ballots series explores how this technology is already reshaping the political landscape. As generative AI becomes increasingly entrenched in our everyday lives, the line between real and fake grows ever blurrier. The steady erosion of trust has wide-ranging consequences, but its impact on politics is especially troubling. Democracy depends on informed citizens making sound judgments about the parties and policies they support.
