More Than AI Misinformation, US Voters Worry About Lying Politicians
As a bitterly contested US election campaign enters its final stretch, misinformation researchers have raised the alarm over threats posed by AI and foreign influence -- but voters appear more concerned about falsehoods from politicians themselves.

The United States is battling a firehose of misinformation before the November 5 vote -- from fake "news" sites that researchers say were created by Russian and Iranian actors, to manipulated images generated by AI. More concerning for voters, however, is misinformation spreading the good old-fashioned way, through politicians sowing falsehoods, with researchers saying they face almost no legal consequences for distorting the truth.

"I think when we do a post-mortem on 2024 the most viral misinformation will have either emanated from politicians or will have been amplified by politicians," said Joshua Tucker, co-director of New York University's Center for Social Media and Politics. In a survey published last week by Axios, 51 percent of Americans identified politicians spreading falsehoods as their top concern regarding misinformation.

AIs are equally persuasive when they’re telling the truth or lying
People conversing with chatbots about politics find those that dole out facts more persuasive than other bots, such as those that tell good stories. But these informative bots are also prone to lying. Laundry-listing facts rarely changes hearts and minds – unless a bot is doing the persuading. Briefly chatting with an AI moved potential voters in three countries toward their less preferred candidate, researchers report December 4 in Nature. That finding held true even in the lead-up to the contentious 2024 presidential election between Donald Trump and Kamala Harris, with pro-Trump bots pushing Harris voters in his direction, and vice versa. The most persuasive bots don’t need to tell the best story or cater to a person’s individual beliefs, researchers report in a related paper in Science.
Instead, they simply dole out the most information. But those bloviating bots also dole out the most misinformation: the models that most effectively shift people's political views also spread the most falsehoods.

In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began.
Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections.
“One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says.

Artificial-intelligence chatbots can influence voters in major elections — and have a bigger effect on people’s political views than conventional campaigning and advertising. A study published today in Nature found that participants’ preferences in real-world elections swung by up to 15 percentage points after conversing with a chatbot. In a related paper published in Science, researchers showed that these chatbots’ effectiveness stems from their ability to synthesize a lot of information in a conversational way.
The findings showcase the persuasive power of chatbots, which are used by more than one hundred million people each day, says David Rand, an author of both studies and a cognitive scientist at Cornell University.

AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters’ choices, new research shows. The findings raise urgent questions about AI’s role in future elections. By Deni Ellis Béchard, edited by Claire Cameron.
Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns.
"I think when we do a post-mortem on 2024 the most viral misinformation will have either emanated from politicians or will have been amplified by politicians," Joshua Tucker, co-director of the New York University... In a survey published last week by Axios, 51 percent of Americans identified politicians spreading falsehoods as their top concern regarding misinformation. We surveyed 1,000 U.S. adults to understand concerns about the use of artificial intelligence (AI) during the 2024 U.S. presidential election and public perceptions of AI-driven misinformation. Four out of five respondents expressed some level of worry about AI’s role in election misinformation.
Our findings suggest that direct interactions with AI tools like ChatGPT and DALL-E were not correlated with these concerns, regardless of education or STEM work experience. Instead, news consumption, particularly through television, appeared more closely linked to heightened concerns. These results point to the potential influence of news media and the importance of exploring AI literacy and balanced reporting.
Artificial intelligence chatbots are very good at changing people’s political opinions, according to a study published Thursday, and are particularly persuasive when they use inaccurate information. The researchers used a crowd-sourcing website to find nearly 77,000 people to participate in the study and paid them to interact with various AI chatbots, including some using AI models from OpenAI, Meta and other companies. The researchers asked for people’s views on a variety of political topics, such as taxes and immigration, and then, regardless of whether the participant was conservative or liberal, a chatbot tried to change their mind. The researchers found not only that the AI chatbots often succeeded, but also that some persuasion strategies worked better than others. “Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” lead author Kobi Hackenburg, a doctoral student at the University of Oxford, said in a statement about the study.
The study is part of a growing body of research into how AI could affect politics and democracy, and it comes as politicians, foreign governments and others are trying to figure out how they might put the technology to use.

It was September 2024, and an undecided voter was explaining to an AI chatbot why they were leaning toward supporting Kamala Harris over Donald Trump in the upcoming presidential election. “I don’t know much about Harris,” the voter admitted. “... However, with Trump, he is associated with a lot of bad things. So, I do not feel he is trustworthy right now.”