Has the AI Election Threat Materialized? NYU's Center for Social Media and Politics

Bonisiwe Shabane

Co-sponsored by the Brennan Center for Justice, this virtual event explored what we've learned about AI in the 2024 race — and what else we should look for in the weeks before (and after)... Since ChatGPT first launched nearly two years ago, many have claimed the rise of AI would pose a significant threat to elections. Reports warned that a surge of AI-generated disinformation could undermine democracy. Intelligence officials worried that foreign actors would use AI to disrupt the electoral process. Americans agreed, with more than half saying AI could impact who will win in November. But have these threats actually materialized?

So far, we haven't seen a deluge of deepfakes, and most are quickly spotted and debunked. Perhaps the warnings did their job. Election officials have worked vigorously to prepare for AI-related threats, and the media have reported extensively on the problem, priming Americans to be skeptical of what they see online. In this event, we convened experts from a variety of backgrounds — across government, civil society, academia, and media — to discuss what's happened with AI in the 2024 race, what we've learned from... Co-sponsored by NYU's Center for Social Media and Politics and the Brennan Center for Justice at NYU School of Law.

Since ChatGPT’s launch nearly two years ago, there’s been widespread concern that AI would pose a significant threat to elections. At yesterday's event with the Brennan Center for Justice, our expert panel discussed AI's role in the 2024 race, how election officials are preparing, and AI’s broader impact on trust in the information ecosystem.

Featuring Shannon Bond, Adrian Fontes, Larry Norden, Vivian Schiller, and Joshua Tucker. https://lnkd.in/epyqjz-B

The explosive rise of artificial intelligence during the past two years coincides with widespread concerns about the security of American democracy. 2024 will bring the first presidential election of the generative AI era. Americans have many questions: will AI help quell the fears of another hotly debated election outcome, or will it fuel the fire? As generative AI produces output that is increasingly difficult to distinguish from human-created content, how will voters separate fact from misinformation?

The Brennan Center for Justice and Georgetown University’s Center for Security and Emerging Technology have convened experts to examine critical questions about AI. They will explore how it might impact high-stakes areas like election security, voter suppression, election administration, and political advertising and fundraising. Join us for a live event on Tuesday, November 28, at 6 p.m. ET with a panel ready to break down these complex topics. The conversation will address near-term risks of AI that could become critical in the 2024 election cycle and explore what steps the government, the private sector, and nonprofits should take to minimize the possible...

Produced in partnership with Georgetown University’s Center for Security and Emerging Technology

The run-up to the 2024 election was marked by predictions that artificial intelligence could trigger dramatic disruptions. The worst-case scenarios — such as AI-assisted large-scale disinformation campaigns and attacks on election infrastructure — did not come to pass. However, the rise of AI-generated deepfake videos, images, and audio misrepresenting political candidates and events is already influencing the information ecosystem. Over time, the misuse of these tools is eroding public trust in elections by making it harder to distinguish fact from fiction, intensifying polarization, and undermining confidence in democratic institutions. Understanding and addressing the threats that AI poses requires us to consider both its immediate effects on U.S. elections and its broader, long-term implications.

Incidents such as robocalls to primary voters in New Hampshire that featured an AI-generated impersonation of President Biden urging them not to vote captured widespread attention, as did misinformation campaigns orchestrated by chatbots like... Russian operatives created AI-generated deepfakes of Vice President Kamala Harris, including a widely circulated video that falsely portrayed her as making inflammatory remarks, which was shared by tech billionaire Elon Musk on X. Separately, a former Palm Beach County deputy sheriff, now operating from Russia, collaborated in producing and disseminating fabricated videos, including one falsely accusing vice-presidential nominee Minnesota Gov. Tim Walz of assault. Similar stories emerged around elections worldwide. In India’s 2024 general elections, AI-generated deepfakes that showed celebrities criticizing Prime Minister Narendra Modi and endorsing opposition parties went viral on platforms such as WhatsApp and YouTube.

During Brazil’s 2022 presidential election, deepfakes and bots were used to spread false political narratives on platforms including WhatsApp. While no direct, quantifiable impact on election outcomes has been identified, these incidents highlight the growing role of AI in shaping political discourse. The spread of deepfakes and automated disinformation can erode trust, reinforce political divisions, and influence voter perceptions. These dynamics, while difficult to measure, could have significant implications for democracy as AI-generated content becomes more sophisticated and pervasive. The long-term consequences of AI-driven disinformation go beyond eroding trust — they create a landscape where truth itself becomes contested. As deepfakes and manipulated content grow more sophisticated, bad actors can exploit the confusion, dismissing real evidence as fake and muddying public discourse.

This phenomenon, sometimes called the liar’s dividend, enables anyone — politicians, corporations, or other influential figures — to evade accountability by casting doubt on authentic evidence. Over time, this uncertainty weakens democratic institutions, fuels disengagement, and makes societies more vulnerable to manipulation, both from domestic actors and foreign adversaries.

Greater transparency around AI-generated political advertising would transform researchers' ability to understand its potential effects on democracy and elections. This article was originally published on Brookings. Ahead of the 2024 U.S. election, there was widespread fear that generative artificial intelligence (AI) presented an unprecedented threat to democracy.

Just six weeks before the election, more than half of Americans said they were “extremely or very concerned” that AI would be used to spread misleading information. Intelligence officials warned that these technologies would be used by foreign influence campaigns to undermine trust in democracy, and that growing access to AI tools would lead to a deluge of political deepfakes. This premature, “sky is falling” narrative was based on very little evidence, something we warned about a year ago. But while it seems clear that the worst predictions about AI didn’t come to pass, it’s similarly impetuous to claim that 2024 was the “AI election that wasn’t,” that “we were deepfaked by deepfakes,”... In reality, too little data is available to draw concrete conclusions. We know this because, for the past several months, our research team has tried to build a comprehensive database tracking the use of AI in political communications.

But despite our best efforts, we found this task nearly impossible, in part due to a lack of transparency from online platforms. Overall, we found just 71 examples. Other researchers and journalists have tried to track election-related AI content as well, with similar outcomes. But it doesn't need to be this way. As lawmakers at the state and federal level continue to regulate AI, there are common-sense changes policymakers and platforms can make so that we aren't flying blind trying to understand what impact AI has...

2024 was supposed to be the year of deepfakes.

But has the AI election threat actually materialized? Join us on October 21, at 12pm ET, for a virtual event — co-sponsored by the Brennan Center for Justice — to discuss what's happened with AI in the 2024 race, what we've learned... Featuring Shannon Bond, Adrian Fontes, Larry Norden, Vivian Schiller, and Joshua Tucker. https://lnkd.in/ehWBYMCT

AI is eminently capable of political persuasion and could automate it at a mass scale. We are not prepared.

In January 2024, the phone rang in homes all around New Hampshire. On the other end was Joe Biden’s voice, urging Democrats to “save your vote” by skipping the primary. It sounded authentic, but it wasn’t. The call was a fake, generated by artificial intelligence. Today, the technology behind that hoax looks quaint. Tools like OpenAI’s Sora now make it possible to create convincing synthetic videos with astonishing ease.

AI can be used to fabricate messages from politicians and celebrities—even entire news clips—in minutes. The fear that elections could be overwhelmed by realistic fake media has gone mainstream—and for good reason. But that’s only half the story. The deeper threat isn’t that AI can just imitate people—it’s that it can actively persuade people. And new research published this week shows just how powerful that persuasion can be. In two large peer-reviewed studies, AI chatbots shifted voters’ views by a substantial margin, far more than traditional political advertising tends to do.

In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift—from imitation to active persuasion—should worry us deeply.
