Deepfakes, Elections, and Shrinking the Liar's Dividend

Bonisiwe Shabane

In August 2023, the survey firm YouGov asked Americans how concerned they were about various potential consequences arising from artificial intelligence (AI). Topping the list, 85 percent of respondents said that they were “very concerned” or “somewhat concerned” about the spread of misleading video and audio deepfakes. This finding is unsurprising given frequent news headlines such as “AI ‘Deepfakes’ Poised to Wreak Havoc on 2024 Election” and “Deepfaking It: America’s 2024 Election Collides with AI Boom.” As the introduction to the... Problematically, however, concern about deepfakes poses a threat of its own: unscrupulous public figures or stakeholders can use this heightened awareness to falsely claim that legitimate audio content or video footage is artificially generated... Law professors Bobby Chesney and Danielle Citron call this dynamic the liar’s dividend. They posit that liars aiming to avoid accountability will become more believable as the public becomes more educated about the threats posed by deepfakes.

The theory is simple: when people learn that deepfakes are increasingly realistic, false claims that real content is AI-generated become more persuasive too. This essay explores these would-be liars’ incentives and disincentives to better understand when they might falsely claim artificiality, and the interventions that can render those claims less effective. Politicians will presumably continue to use the threat of deepfakes to try to avoid accountability for real actions, but that outcome need not upend democracy’s epistemic foundations. Establishing norms against these lies, further developing and disseminating technology to determine audiovisual content’s provenance, and bolstering the public’s capacity to discern the truth can all blunt the benefits of lying and thereby reduce... Granted, politicians may instead turn to less forceful assertions, opting for indirect statements to raise uncertainty over outright denials or allowing their representatives to make direct or indirect claims on their behalf. But the same interventions can hamper these tactics as well.

Manipulating audiovisual media is no new feat, but advancements in deep learning have spawned tools that anyone can use to produce deepfakes quickly and cheaply. Research scientist Shruti Agarwal and coauthors write of three common deepfake video approaches, which they call face swap, lip sync, and puppet master. In a face swap, one person’s face in a video is replaced with another’s. In a lip sync, a person’s mouth is altered to match an audio recording. And in a puppet master–style deepfake, a performer in front of a camera drives the target person’s expressions and movements. Audio-only deepfakes, which involve no visual element, are also becoming more prevalent.
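For readers who want a mechanical intuition for the face-swap approach, the sketch below uses classical computer-vision tools (OpenCV's bundled face detector and Poisson blending) rather than the deep-learning models that power real deepfakes; the function name and parameters are illustrative only, a toy version of the paste-and-blend idea.

```python
import cv2
import numpy as np

# Haar-cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def swap_face(frame, source_face):
    """Paste source_face over the first face detected in frame (toy example)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return frame  # no face detected; leave the frame unchanged
    x, y, w, h = faces[0]
    resized = cv2.resize(source_face, (w, h))
    # Poisson blending (seamlessClone) hides the seam around the pasted face.
    mask = np.full(resized.shape, 255, dtype=np.uint8)
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(resized, frame, mask, center, cv2.NORMAL_CLONE)
```

Production deepfake tools replace this crude paste-and-blend step with autoencoder- or GAN-based generators trained on the target's face, which is what makes their output convincing frame after frame.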

Although a review of the technical literature falls outside the scope of this essay, suffice it to say that technical innovations are yielding deepfakes ever more able to fool viewers. Not every deepfake will be convincing; in many cases, they will not be. Yet malcontents have successfully used deepfakes to scam banks and demand ransoms for purportedly kidnapped family members.

“The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?”, research recently published in the American Political Science Review by Dr. Kaylyn Jackson Schiff, Dr. Daniel S. Schiff, and Dr. Natália S. Bueno, is gaining attention from journalists and think tanks seeking to understand the potential impacts of artificial intelligence on elections.

In a CNN interview with Michael Smerconish, Kaylyn explains that their research found evidence that politicians can retain voter support by claiming negative stories about them are “fake news,” exploiting widespread confusion around AI-generated... By falsely claiming to be a target of a misinformation campaign, candidates facing real scandals can create uncertainty in the minds of voters about whether the scandal actually occurred and can even rally their... Notably, they find that false claims of misinformation are more effective than other types of responses to a scandal, such as apologizing or remaining silent. While the findings are concerning, Kaylyn notes that fact checking offers one approach to combating this type of misinformation-about-misinformation, and that there are emerging efforts around watermarking AI-generated images, video, audio,... The research article was covered in Political Science Now, and the authors also wrote a commentary piece for the Brookings Institution about their findings, titled “Watch out for false claims of deepfakes, and actual... This research is part of a broader agenda in the Governance and Responsible AI Lab (GRAIL) at Purdue co-directed by Kaylyn and Daniel.

GRAIL supports multiple research projects investigating the social, ethical, and governance implications of AI. Their recent article in The Conversation, “Generative AI like ChatGPT could help boost democracy – if it overcomes key hurdles,” explores potential benefits of generative AI for civic knowledge and constituent communication, but cautions that AI... Other projects include building the AI Governance and Regulatory Archive (AGORA) and the Political Deepfakes Incidents Database. Their research has been funded by the National Institute of Justice, Google, and Arnold Ventures. Their work appears in leading journals in the fields of public policy, public administration, political science, criminology, and education.
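To make the watermarking idea mentioned in the interview concrete, here is a deliberately naive sketch. Deployed systems (Google DeepMind's SynthID, for example) embed statistical signals designed to survive cropping and compression; this least-significant-bit toy would not, and every function name in it is our own invention.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one watermark bit in the least-significant bit of each pixel value."""
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits.astype(flat.dtype)
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first n_bits hidden bits back out of the image."""
    return pixels.flatten()[:n_bits] & 1

# Example: stamp an 8-bit tag into a random grayscale "image" and recover it.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
tag = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stamped = embed_watermark(image, tag)
assert (extract_watermark(stamped, 8) == tag).all()
```

In real deployments the mark is added by the AI generator itself, so that platforms downstream can flag synthetic media even after it leaves the tool that made it.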

Digital Misinformation: How Deepfakes Empower the ‘Liar’s Dividend’ in a Post-Truth Era

In our increasingly digitized world, the proliferation of misinformation poses a significant threat to democratic processes, social cohesion, and individual trust. One of the most potent weapons in the arsenal of misinformation is the deepfake, a sophisticated form of artificial intelligence-generated media that can seamlessly manipulate audio and video content. Deepfakes can fabricate events that never happened, put words into the mouths of individuals who never spoke them, and create realistic depictions of scenarios entirely divorced from reality.

This technology empowers what is known as the "liar’s dividend," whereby individuals accused of wrongdoing can simply dismiss genuine evidence as fabricated deepfakes, eroding public trust and creating a climate of skepticism where discerning... The consequences of this phenomenon are far-reaching and demand urgent attention. The rise of deepfakes exacerbates the existing challenges of combating misinformation in the digital age. Previously, doctored images or manipulated videos often contained subtle inconsistencies that allowed for detection and debunking. However, deepfakes, with their advanced algorithms and ever-increasing realism, blur the line between fact and fiction to an unprecedented degree. This makes it significantly harder for individuals, journalists, and even experts to differentiate between authentic media and synthetic fabrications.

The ease with which deepfakes can be created and disseminated further compounds the problem, allowing malicious actors to spread disinformation rapidly and widely across social media platforms and online networks. This ease of creation and dissemination democratizes misinformation, making it a readily available tool for anyone with an agenda, from political operatives to disgruntled individuals seeking revenge. The liar’s dividend, amplified by deepfakes, poses a substantial threat to accountability and justice. When individuals, particularly public figures or those in positions of power, can dismiss legitimate accusations as deepfake fabrications, it becomes increasingly difficult to hold them responsible for their actions. This creates a permissive environment for unethical behavior and erodes public trust in institutions and authorities. Furthermore, the mere possibility of deepfakes being used to discredit legitimate claims can create a chilling effect, discouraging individuals from coming forward with evidence of wrongdoing for fear of being dismissed as purveyors of...

This chilling effect can undermine investigative journalism, whistleblowing, and other critical mechanisms for holding power accountable. The societal implications of the liar’s dividend extend beyond the political and legal spheres. Deepfakes can be used to manipulate public opinion, incite violence, and sow discord within communities. Fabricated videos depicting individuals engaging in hateful or criminal acts can be used to fuel prejudice and discrimination. The spread of such manipulated content can exacerbate existing social tensions and erode trust between different groups, contributing to a climate of fear and paranoia. Moreover, the widespread availability of deepfake technology raises concerns about its potential misuse in personal relationships, where fabricated videos could be used for blackmail, harassment, or revenge pornography.

The emotional and psychological damage inflicted by such malicious use of deepfakes can be devastating. In July 2024, a deepfake video of Kamala Harris, then the Democratic presidential candidate, describing herself as the "ultimate diversity hire" spread rapidly across social media[1]. While this particular hoax was quickly debunked, it raised a troubling question: What if the deceptions weren’t so obvious? Imagine a deepfake video of her engaged in corruption—by the time the truth emerged, the damage would already be done. Can democracy withstand the onslaught of deepfakes? How can we regulate them without undermining free speech?

The Dangers of Deepfakes: Blurring the Line Between Truth and Fake

Deepfakes are AI-generated videos, images, or audio clips that manipulate real footage to create highly realistic but entirely false depictions of people. While misinformation in election campaigns is nothing new, deepfakes amplify its impact in unprecedented ways. Deepfakes have the potential to completely erase the line between truth and fabrication, making fake narratives appear hyper-real and far more difficult to detect and debunk than traditional photoshopped images or misleading quotes. Beyond creating falsehoods, deepfakes undermine real evidence, leading to what’s known as the "liar’s dividend[2]"—allowing liars to dismiss truth as fake. For example, Elon Musk’s legal team recently suggested in court that past statements he made about Tesla’s self-driving capabilities could have been deepfakes[3].

If individuals can simply dismiss incriminating audio or video as an AI-generated hoax, it will become increasingly difficult to hold public figures accountable. The greatest danger isn’t just the creation of fake content—it’s a world where no content can be trusted. As deepfakes erode our collective ability to believe what we see and hear, the very foundation of democracy—an informed electorate—will be at risk.

Deepfakes—highly realistic, AI‑generated fake videos, images and audio—are rapidly reshaping how we decide what’s real. Once, a photo or recording felt like solid proof; now, almost anything we see or hear online could be digitally fabricated or manipulated. This article explores how deepfake technology works, the real harms already happening (from non‑consensual sexual content to financial fraud and political disinformation), and the more subtle damage it does by eroding public trust and...

It also looks at emerging laws, detection tools, watermarking and content‑provenance standards, and offers practical steps for individuals, companies and policymakers. In a world of synthetic media, the future of truth won’t depend on what looks convincing on a screen, but on how well we verify sources, protect people’s likenesses and rebuild shared standards for...

A deepfake is a type of synthetic media created using artificial intelligence—especially deep learning—to generate or manipulate images, video, or audio so that they convincingly depict something that never happened. Encyclopaedia Britannica describes deepfakes as AI‑generated media that portray non‑existent events or people, often by combining “deep” learning with “fake” content. Many experts view deepfakes as a subset of synthetic media, which also includes AI‑generated text, music, and images.
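The content-provenance standards mentioned above reduce to a simple core idea: bind each authentic file to a cryptographic fingerprint that anyone can recheck. The sketch below is a bare-bones, hypothetical version of that idea; real standards such as C2PA embed signed metadata in the file itself, and the manifest format and file names here are invented for illustration.

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Hash a media file in chunks so large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, manifest_path: str) -> bool:
    """Return True if the file's hash matches the publisher's manifest entry."""
    with open(manifest_path) as f:
        manifest = json.load(f)  # e.g. {"clip.mp4": "ab12..."} (hypothetical format)
    return manifest.get(path) == sha256_of(path)
```

A hash mismatch proves the file was altered after publication; a match, of course, says nothing about whether the original footage was truthful, which is why provenance complements rather than replaces fact checking.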

In a new political ad in Georgia’s Senate race, GOP Rep. Mike Collins’ campaign released a video featuring incumbent Democratic Sen. Jon Ossoff saying he knows his vote to shut down the government will hurt farmers: “But I wouldn’t know. I’ve only seen a farm on Instagram.” Ossoff never said any of this. When challenged on spreading disinformation using Ossoff’s likeness and voice, Collins’ campaign doubled down, saying they were pleased the ad sparked conversation — proving they were either oblivious to the dangerous precedent or had... While political cartoonists have long created derogatory or lampoonish images of elected officials and candidates for public office, the political imagery that can be created by artificial intelligence blurs truth and fiction in unprecedented... AI can make falsehoods look authentic and, when used by politicians themselves, it becomes particularly harmful.

AI use that started as experimentation by campaigns has evolved into something far more troubling: It now merges satire, disinformation and official messaging that misleads voters and distorts democratic discourse. In New York City’s recent mayoral race, former Democratic Gov. Andrew Cuomo’s campaign released an ad on social media, which was later deleted, featuring purported “criminals for Zohran Mamdani” — a parade of racist caricatures that included a pimp in a purple suit, along... In one sequence, a Black man shoplifts from a bodega, his face visibly morphing mid-clip as he puts on a keffiyeh and mask before robbing the store. As AI tools grow more sophisticated, Mamdani’s election may serve as both a warning and a testament: A warning of how easily political imagery can be weaponized, and a testament to the electorate’s enduring... In recent weeks we have seen the official X account of the National Republican Senatorial Committee post a video of Senate Minority Leader Chuck Schumer, D-N.Y., also talking about the government shutdown.
