The Influence of AI-Generated Political Misinformation on Elections

Bonisiwe Shabane

AI’s Role in Election Misinformation: Less Than Meets the Eye?

Recent anxieties about artificial intelligence destabilizing elections through the proliferation of political misinformation may be exaggerated, according to research by computer scientist Arvind Narayanan, director of the Princeton Center for Information Technology Policy, and Sayash Kapoor, a doctoral candidate at the same institution. Their findings, drawn from an analysis of 78 instances of AI-generated political content during elections worldwide last year, challenge the prevailing narrative of AI as a primary driver of electoral manipulation. The researchers, authors of the book "AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference," drew on data compiled by the WIRED AI Elections Project. Their conclusion: while AI undeniably makes it easier to create false content, it hasn’t fundamentally altered the landscape of political misinformation.

Contrary to popular perception, Narayanan and Kapoor found that a significant portion of the AI-generated content they examined lacked deceptive intent. In nearly half of the cases, AI was used to enhance campaign materials rather than to disseminate fabricated information. This finding underscores the versatility of AI tools and their potential for constructive applications in the political sphere. The researchers also documented innovative uses of AI, such as journalists employing AI avatars to avoid government retribution when reporting on sensitive political issues, and a candidate using AI voice cloning to communicate with voters. These examples highlight the diverse and evolving ways in which AI is being integrated into political processes. Furthermore, the research shows that creating deceptive content doesn’t necessarily require AI at all.

Narayanan and Kapoor estimated the cost of replicating each piece of deceptive content in their sample without AI, by hiring human professionals such as Photoshop experts, video editors, or voice actors. In every instance, the cost was modest, often within a few hundred dollars. This suggests that traditional methods of creating false information remain readily accessible and affordable, even without sophisticated AI technology. In a revealing anecdote, the researchers identified a video featuring a hired actor that had been mistakenly classified as AI-generated in WIRED’s database, underscoring the difficulty of distinguishing AI-generated from traditionally produced content. This research prompts a shift in focus from the supply of misinformation to the demand for it. The researchers argue that addressing the root causes of misinformation, which predate the advent of AI, is crucial.

While AI may change how misinformation is produced, it doesn’t fundamentally change how it spreads or what impact it has. Narayanan and Kapoor emphasize that successful misinformation campaigns often target individuals already aligned with the message’s core intent. These "in-group" members are more likely to believe and amplify misinformation, regardless of its source or production method. Sophisticated technologies, including AI, aren’t essential for misinformation to flourish in such contexts.

In a separate study, researchers surveyed 1,000 U.S. adults to understand concerns about the use of artificial intelligence (AI) during the 2024 U.S. presidential election and public perceptions of AI-driven misinformation. Four out of five respondents expressed some level of worry about AI’s role in election misinformation. The findings suggest that direct interactions with AI tools like ChatGPT and DALL-E were not correlated with these concerns, regardless of education or STEM work experience. Instead, news consumption, particularly through television, appeared more closely linked to heightened concern. These results point to the potential influence of news media and the importance of exploring AI literacy and balanced reporting. (Stanford Social Media Lab, Stanford University; Department of Political Science, Network Science Institute, and School of Journalism, Northeastern University)

From Bangladesh to Slovakia, AI-generated deepfakes have been undermining elections around the globe.

Experts say their reach and sophistication are a sign of things to come in consequential elections later this year.

Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone to create fake – but convincing – content aimed at fooling voters. People in countries with low literacy rates, such as Bangladesh and India, are especially vulnerable to social media misinformation. Rumeen Farhana, a politician from the main opposition Bangladesh Nationalist Party (BNP) and a vocal critic of the ruling party, was falsely depicted wearing a bikini in a video created using artificial intelligence.

The viral video sparked outrage in the conservative, majority-Muslim nation. Moldova’s President Maia Sandu has likewise been a frequent target of online disinformation created with artificial intelligence. In India, AI deepfakes are spreading faster than election safeguards can keep up, and experts warn this may be the most vulnerable election cycle yet.

Artificial intelligence has entered electoral politics at a speed that regulators simply did not anticipate. Over the past year, several high-profile incidents worldwide have shown how AI tools can manufacture persuasive political misinformation with almost no cost or effort. In early 2024, voters in the United States received a fake robocall mimicking President Joe Biden’s voice, an incident confirmed by the New Hampshire Attorney General and widely reported by The New York Times. Around the same time, Slovakia suffered a major disinformation surge when a deepfake audio clip circulated just before its elections, allegedly influencing voter sentiment; both Reuters and BBC News covered the fallout extensively. These events highlight a core problem: governments are still using analog-era safeguards against digital-era threats. India faces a sharper version of this challenge simply because of its digital landscape and scale.

With more than 820 million internet users and some of the world’s most active WhatsApp and Instagram user bases, India provides fertile ground for rapid misinformation spread. The fact that political outreach in India relies heavily on short videos, forwards, and influencer-style messaging makes the environment even more vulnerable.

David Klepper and Ali Swenson, Associated Press

WASHINGTON (AP) — Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters. For a long time, those predictions outpaced reality: the synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media.

The threat posed by AI and so-called deepfakes always seemed a year or two away. No longer. Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low. As more AI-generated content seeps into the information ecosystem, Professor Andrew B. Hall fears it could contaminate our political discourse and democratic processes. Hall, a professor of political economy, has spent much of his career researching democratic systems and political polarization within them.

“I don’t think we know how [AI is] affecting polarization yet,” he says in this bonus episode of If/Then: Business, Leadership, Society. What is clear is that AI “could be fairly disruptive to the workings of our electoral system in the pretty near future.” With a presidential election fast approaching, Hall sees several ways that AI could muddy the political waters. As misleading or fake content is generated and distributed at scale, “people could be more misinformed and make decisions they wouldn’t otherwise about who to vote for,” he says. Even if that misinformation is never actually created, people’s mere belief that it is out there could change election outcomes. “That itself [is] a risk to the system,” Hall says.

“The more people don’t believe that the whole process around our democracy is fair or has integrity, the less likely they are to accept outcomes or to buy into the society that they’re part of.” However, Hall also sees ways that AI could provide solutions to some of the problems that beset the political system. As this episode of If/Then explores, if we want to distinguish fact from fiction and maintain trust in our democracy, then we must understand AI’s impact on our political landscape in the 2024 election and beyond. If/Then is a podcast from Stanford Graduate School of Business that examines research findings that can help us navigate the complex issues we face in business, leadership, and society. Each episode features an interview with a Stanford GSB faculty member.

The run-up to the 2024 election was marked by predictions that artificial intelligence could trigger dramatic disruptions.

The worst-case scenarios — such as AI-assisted large-scale disinformation campaigns and attacks on election infrastructure — did not come to pass. However, the rise of AI-generated deepfake videos, images, and audio misrepresenting political candidates and events is already influencing the information ecosystem. Over time, the misuse of these tools erodes public trust in elections by making it harder to distinguish fact from fiction, intensifying polarization, and undermining confidence in democratic institutions. Understanding and addressing the threats that AI poses requires us to consider both its immediate effects on U.S. elections and its broader, long-term implications. Incidents such as robocalls to primary voters in New Hampshire featuring an AI-generated impersonation of President Biden urging them not to vote captured widespread attention, as did misinformation campaigns amplified by AI chatbots.

Russian operatives created AI-generated deepfakes of Vice President Kamala Harris, including a widely circulated video that falsely portrayed her as making inflammatory remarks, which was shared by tech billionaire Elon Musk on X. Separately, a former Palm Beach County deputy sheriff, now operating from Russia, collaborated in producing and disseminating fabricated videos, including one falsely accusing vice-presidential nominee Minnesota Gov. Tim Walz of assault. Similar stories emerged around elections worldwide. In India’s 2024 general elections, AI-generated deepfakes that showed celebrities criticizing Prime Minister Narendra Modi and endorsing opposition parties went viral on platforms such as WhatsApp and YouTube. During Brazil’s 2022 presidential election, deepfakes and bots were used to spread false political narratives on platforms including WhatsApp.

While no direct, quantifiable impact on election outcomes has been identified, these incidents highlight the growing role of AI in shaping political discourse. The spread of deepfakes and automated disinformation can erode trust, reinforce political divisions, and influence voter perceptions. These dynamics, while difficult to measure, could have significant implications for democracy as AI-generated content becomes more sophisticated and pervasive. The long-term consequences of AI-driven disinformation go beyond eroding trust — they create a landscape where truth itself becomes contested. As deepfakes and manipulated content grow more sophisticated, bad actors can exploit the confusion, dismissing real evidence as fake and muddying public discourse. This phenomenon, sometimes called the liar’s dividend, enables anyone — politicians, corporations, or other influential figures — to evade accountability by casting doubt on authentic evidence.
