Lying in Politics: The Real Issue, Not AI

Bonisiwe Shabane

Stephanie Mitchell/Harvard Staff Photographer

The founder of PolitiFact discusses case studies from his new book that reveal how we got to where we are now. Many Americans feel that the spin and outright lying in politics have gotten worse in recent decades, and that it’s not a good thing. Bill Adair agrees. The founder of PolitiFact, the Pulitzer Prize-winning fact-checking website, examines the problem in his new book, “Beyond the Big Lie: The Epidemic of Political Lying, Why Republicans Do It More, and How It Could...

“For many years, no political journalist that I’d ever worked with nor myself had ever asked a politician: Why do you lie? And so it’s sort of this topic that is omnipresent and yet never discussed. I decided to discuss it, and I decided to ask politicians about it,” said Adair, the Knight Professor of the Practice of Journalism and Public Policy at Duke University.

As a bitterly contested US election campaign enters its final stretch, misinformation researchers have raised the alarm over threats posed by AI and foreign influence, but voters appear more concerned about falsehoods from... The United States is battling a firehose of misinformation before the November 5 vote: from fake “news” sites that researchers say were created by Russian and Iranian actors, to manipulated images generated by... More concerning for voters, however, is misinformation spreading the good old-fashioned way, through politicians sowing falsehoods; researchers say those politicians face almost no legal consequences for distorting the truth.

“I think when we do a post-mortem on 2024 the most viral misinformation will have either emanated from politicians or will have been amplified by politicians,” said Joshua Tucker, co-director of the New York University... In a survey published last week by Axios, 51 percent of Americans identified politicians spreading falsehoods as their top concern regarding misinformation.

AIs are equally persuasive when they’re telling the truth or lying

People conversing with chatbots about politics find those that dole out facts more persuasive than other bots, such as those that tell good stories. But these informative bots are also prone to lying. Laundry-listing facts rarely changes hearts and minds, unless a bot is doing the persuading.

Briefly chatting with an AI moved potential voters in three countries toward their less preferred candidate, researchers report December 4 in Nature. That finding held true even in the lead-up to the contentious 2024 presidential election between Donald Trump and Kamala Harris, with pro-Trump bots pushing Harris voters in his direction, and vice versa. The most persuasive bots don’t need to tell the best story or cater to a person’s individual beliefs, researchers report in a related paper in Science. Instead, they simply dole out the most information. But those bloviating bots also dole out the most misinformation.

In August 2023, the survey firm YouGov asked Americans how concerned they are about various potential consequences arising from artificial intelligence (AI).

Topping the list, 85 percent of respondents said that they are “very concerned” or “somewhat concerned” about the spread of misleading video and audio deepfakes. This finding is unsurprising given frequent news headlines such as “AI ‘Deepfakes’ Poised to Wreak Havoc on 2024 Election” and “Deepfaking It: America’s 2024 Election Collides with AI Boom.” As the introduction to the...

Problematically, however, concern about deepfakes poses a threat of its own: unscrupulous public figures or stakeholders can use this heightened awareness to falsely claim that legitimate audio content or video footage is artificially generated... Law professors Bobby Chesney and Danielle Citron call this dynamic the liar’s dividend. They posit that liars aiming to avoid accountability will become more believable as the public becomes more educated about the threats posed by deepfakes. The theory is simple: as people learn that deepfakes are increasingly realistic, false claims that real content is AI-generated become more persuasive too.

This essay explores these would-be liars’ incentives and disincentives to better understand when they might falsely claim artificiality, and the interventions that can render those claims less effective. Politicians will presumably continue to use the threat of deepfakes to try to avoid accountability for real actions, but that outcome need not upend democracy’s epistemic foundations. Establishing norms against these lies, further developing and disseminating technology to determine audiovisual content’s provenance, and bolstering the public’s capacity to discern the truth can all blunt the benefits of lying and thereby reduce... Granted, politicians may instead turn to less forceful assertions, opting for indirect statements to raise uncertainty over outright denials, or allowing their representatives to make direct or indirect claims on their behalf. But the same interventions can hamper these tactics as well.

Manipulating audiovisual media is no new feat, but advancements in deep learning have spawned tools that anyone can use to produce deepfakes quickly and cheaply.

Research scientist Shruti Agarwal and coauthors write of three common deepfake video approaches, which they call face swap, lip sync, and puppet master. In a face swap, one person’s face in a video is replaced with another’s. In a lip sync, a person’s mouth is altered to match an audio recording. And in a puppet master–style deepfake, a target person is actually animated by a performer in front of a camera. Audio-only deepfakes, which do not involve a visual element, are also becoming more prevalent. Although a review of the technical literature falls outside the scope of this essay, suffice it to say that technical innovations are yielding deepfakes ever more able to fool viewers.

Not every deepfake will be convincing; in many cases, they will not be. Yet malcontents have successfully used deepfakes to scam banks and demand ransoms for purportedly kidnapped family members.

When I founded PolitiFact, I thought fact-checking would make politicians more truthful. We need to think bigger.

For American politicians, this is a golden age of lying. Social media allows them to spread mendacity with speed and efficiency, while supporters amplify any falsehood that serves their cause.

When I launched PolitiFact in 2007, I thought we were going to raise the cost of lying. I didn’t expect to change people’s votes just by calling out candidates, but I was hopeful that our journalism would at least nudge them to be more truthful. I was wrong. More than 15 years of fact-checking has done little or nothing to stem the flow of lies. I underestimated the strength of the partisan media on both sides, particularly conservative outlets, which relentlessly smeared our work. (A typical insult: “The fact-checkers are basically just a P.R. arm of the Democrats at this point.”) PolitiFact and other media organizations published thousands of checks, but as time went on, Republican representatives and voters alike ignored our journalism more and more, or dismissed... Democrats sometimes did too, of course, but they were more often mindful of our work and occasionally issued corrections when they were caught in a falsehood.

Lying is ubiquitous, yet politicians are rarely asked why they do it. Maybe journalists think the reason is obvious; many are reluctant to even use the word lie, because it invites confrontation and demands proof. But the answer could help us address the problem. So I spent the past four years asking members of Congress, political operatives, local officials, congressional staffers, White House aides, and campaign consultants this simple question: Why do politicians lie?

In a way, these conversations made me hopeful that officials from both parties might curtail their lying if we find ways to change their incentives. The decision to lie can be reduced to something like a point system: If I tell this lie, will I score enough support and attention from my voters, my party leaders, and my corner...

“There is a base to play to, a narrative to uphold or reinforce,” said Cal Cunningham, a Democrat who lost a Senate race in North Carolina in 2020 after acknowledging that he had been... “There is an advantage that comes from willfully misstating the truth that is judged to be greater than the disadvantage that may come from telling the truth. I think there’s a lot of calculus in it.” Jim Kolbe, a former Republican member of Congress from Arizona who has since left the party, described the advantage more vividly: A lie “arouses and...

In a new political ad in Georgia’s Senate race, the campaign of GOP Rep. Mike Collins released a video featuring incumbent Democratic Sen. Jon Ossoff saying he knows his vote to shut down the government will hurt farmers: “But I wouldn’t know. I’ve only seen a farm on Instagram.” Ossoff never said any of this. When challenged on spreading disinformation using Ossoff’s likeness and voice, Collins’ campaign doubled down, saying they were pleased the ad had sparked conversation, proving they were either oblivious to the dangerous precedent or had...

While political cartoonists have long created derogatory or lampoonish images of elected officials and candidates for public office, the political imagery that can be created by artificial intelligence blurs truth and fiction in unprecedented... AI can make falsehoods look authentic and, when used by politicians themselves, it becomes particularly harmful.

AI use that started as experimentation by campaigns has evolved into something far more troubling: It now merges satire, disinformation and official messaging that misleads voters and distorts democratic discourse. In New York City’s recent mayoral race, former Democratic Gov. Andrew Cuomo‘s campaign released an ad on social media, which was later deleted, featuring purported “criminals for Zohran Mamdani” — a parade of racist caricatures that included a pimp in a purple suit, along... In one sequence, a Black man shoplifts from a bodega, his face visibly morphing mid-clip as he puts on a keffiyeh and mask before robbing the store. As AI tools grow more sophisticated, Mamdani’s election may serve as both a warning and a testament: A warning of how easily political imagery can be weaponized, and a testament to the electorate’s enduring... In recent weeks we have seen the official X account of the National Republican Senatorial Committee post a video of Senate Minority Leader Chuck Schumer, D-N.Y., also talking about the government shutdown.

“Every day gets better for us,” the AI-generated video says. While the quote is accurate, the image of Schumer maniacally grinning as he says it is completely fabricated. Taking such creative license with elected officials’ images confuses the electorate about what is real. And, even if the Cuomo campaign claimed their ad represented their genuine beliefs about Mamdani supporters, portraying AI-generated individuals as real people destroys voters’ ability to distinguish fact from propaganda. It feeds the worst instincts in our politics, rewarding deception over debate, spectacle over substance.

AI’s Role in Election Misinformation: Less Than Meets the Eye?

Recent anxieties about artificial intelligence destabilizing elections through the proliferation of political misinformation may be exaggerated, according to research conducted by computer scientist Arvind Narayanan, director of the Princeton Center for Information Technology Policy, and Sayash Kapoor, a doctoral candidate at the same institution. Their findings, gleaned from an analysis of 78 instances of AI-generated political content during elections worldwide last year, challenge the prevailing narrative of AI as a primary driver of electoral manipulation. The researchers, currently authoring a book titled “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference,” leveraged data compiled by the WIRED AI Elections Project for... Their conclusion: while AI undeniably facilitates the creation of false content, it hasn’t fundamentally altered the landscape of political misinformation.

Contrary to popular perception, Narayanan and Kapoor discovered that a significant portion of the AI-generated content they examined lacked deceptive intent.

In nearly half of the cases, the utilization of AI was geared towards enhancing campaign materials, rather than disseminating fabricated information. This finding underscores the versatility of AI tools and their potential for constructive applications in the political sphere. The researchers also documented innovative uses of AI, such as journalists employing AI avatars to circumvent government retribution when reporting on sensitive political issues, and a candidate resorting to AI voice cloning to communicate... These examples highlight the diverse and evolving ways in which AI is being integrated into political processes. Furthermore, the research reveals that creating deceptive content doesn’t necessarily hinge on the use of AI. Narayanan and Kapoor assessed the cost of replicating the deceptive content in their sample without utilizing AI, by employing human professionals like Photoshop experts, video editors, or voice actors.

In each instance, they found that the cost remained relatively modest, often within a few hundred dollars. This suggests that traditional methods of creating false information remain readily accessible and affordable, even without the aid of sophisticated AI technology. In a revealing anecdote, the researchers even identified a video featuring a hired actor that had been mistakenly classified as AI-generated content in WIRED’s database, underscoring the difficulty in distinguishing between AI-generated and traditionally... This research prompts a shift in focus from the supply of misinformation to the demand for it. The researchers argue that addressing the root causes of misinformation, which predate the advent of AI, is crucial. While AI may alter the methods of production, it doesn’t fundamentally change the mechanisms of dissemination or the impact of misinformation.
