How Will AI Affect the 2024 Election? | Brennan Center for Justice

Bonisiwe Shabane

The explosive rise of artificial intelligence during the past two years coincides with widespread concerns about the security of American democracy. 2024 will bring the first presidential election of the generative AI era. Americans have many questions: will AI help quell the fears of another hotly debated election outcome, or will it fuel the fire? As generative AI produces output that is increasingly difficult to distinguish from human-created content, how will voters separate fact from misinformation? The Brennan Center for Justice and Georgetown University’s Center for Security and Emerging Technology have convened experts to examine critical questions about AI. They will explore how it might impact high-stakes areas like election security, voter suppression, election administration, and political advertising and fundraising.

Join us for a live event on Tuesday, November 28, at 6 p.m. ET with a panel ready to break down these complex topics. The conversation will address near-term risks of AI that could become critical in the 2024 election cycle and explore what steps the government, the private sector, and nonprofits should take to minimize the possible...

Produced in partnership with Georgetown University’s Center for Security and Emerging Technology

Speakers: David Boies, Benjamin Ginsberg, Barbara Pariente, Wendy Weiser, Michael Waldman

The run-up to the 2024 election was marked by predictions that artificial intelligence could trigger dramatic disruptions.

The worst-case scenarios — such as AI-assisted large-scale disinformation campaigns and attacks on election infrastructure — did not come to pass. However, the rise of AI-generated deepfake videos, images, and audio misrepresenting political candidates and events is already influencing the information ecosystem. Over time, the misuse of these tools is eroding public trust in elections by making it harder to distinguish fact from fiction, intensifying polarization, and undermining confidence in democratic institutions. Understanding and addressing the threats that AI poses requires us to consider both its immediate effects on U.S. elections and its broader, long-term implications. Incidents such as robocalls to primary voters in New Hampshire that featured an AI-generated impersonation of President Biden urging them not to vote captured widespread attention, as did misinformation campaigns orchestrated by chatbots like...

Russian operatives created AI-generated deepfakes of Vice President Kamala Harris, including a widely circulated video that falsely portrayed her as making inflammatory remarks, which was shared by tech billionaire Elon Musk on X. Separately, a former Palm Beach County deputy sheriff, now operating from Russia, collaborated in producing and disseminating fabricated videos, including one falsely accusing vice-presidential nominee Minnesota Gov. Tim Walz of assault. Similar stories emerged around elections worldwide. In India’s 2024 general elections, AI-generated deepfakes that showed celebrities criticizing Prime Minister Narendra Modi and endorsing opposition parties went viral on platforms such as WhatsApp and YouTube. During Brazil’s 2022 presidential election, deepfakes and bots were used to spread false political narratives on platforms including WhatsApp.

While no direct, quantifiable impact on election outcomes has been identified, these incidents highlight the growing role of AI in shaping political discourse. The spread of deepfakes and automated disinformation can erode trust, reinforce political divisions, and influence voter perceptions. These dynamics, while difficult to measure, could have significant implications for democracy as AI-generated content becomes more sophisticated and pervasive. The long-term consequences of AI-driven disinformation go beyond eroding trust — they create a landscape where truth itself becomes contested. As deepfakes and manipulated content grow more sophisticated, bad actors can exploit the confusion, dismissing real evidence as fake and muddying public discourse. This phenomenon, sometimes called the liar’s dividend, enables anyone — politicians, corporations, or other influential figures — to evade accountability by casting doubt on authentic evidence.

Over time, this uncertainty weakens democratic institutions, fuels disengagement, and makes societies more vulnerable to manipulation, both from domestic actors and foreign adversaries.

Artificial intelligence is poised to bring great change to American democracy, from election management to voter outreach to political advertising and fundraising. AI can be used to improve elections, potentially making it easier to ensure fairness in electoral procedures and fostering a more inclusive and respectful civic discourse. At the same time, it poses significant risks: polluting the information environment with deepfakes and fake news sites, enhancing attacks on election officials and infrastructure, or suppressing Americans’ right to vote. As the Brennan Center works to identify the dangers and opportunities AI presents, we are detailing steps the public and private sectors should take to deal with the effects of AI on our elections... You can also learn more in our AI and Democracy series >>

The year 2024 began with bold predictions about how the United States would see its first artificial intelligence (AI) election.[1] Commentators worried that generative AI — a branch of AI that can create new images, audio, video, and text — could produce deepfakes that would so inundate users of social media that they...[2] Meanwhile, some self-labeled techno-optimists proselytized how AI could revolutionize voter outreach and fundraising, thereby leveling the playing field for campaigns that otherwise could not afford expensive political consultants and staff.[3] As the election played out, AI was employed in numerous ways: Foreign adversaries used the technology to augment their election interference by creating copycat news sites filled with what appeared to be AI-generated fake...[4] Campaigns leveraged deepfake technology to convincingly imitate politicians and produce misleading advertisements.[5] Activists deployed AI systems to support voter suppression efforts.[6] Candidates and supporters used AI tools to build political bot networks, translate materials, design eye-catching memes, and assist in voter outreach.[7] And election officials experimented with AI to draft social media content and provide voters with important information like polling locations and hours of operation.[8]

Of course, AI likely was also used during this election in ways that have not yet come into focus and may only be revealed months or even years from now. Were the fears and promises overhyped? Yes and no.

It would be a stretch to claim that AI transformed U.S. elections last year to either effect, and the worst-case scenarios did not come to pass.[9] But AI did play a role that few could have imagined a mere two years ago, and a review of that role offers some important clues as to how, as the technology becomes... elections — and American democracy more broadly — in the coming years. AI promises to transform how government interacts with and represents its citizens, and how government understands and interprets the will of its people.[10] Revelations that emerge about AI’s applications in 2024 can offer lessons about the guardrails and incentives that must be put in place now — lest even more advanced iterations of the technology be...

elections and democratic governance as a whole. This report lays out the Brennan Center’s vision for how policymakers can ensure that AI’s inevitable changes strengthen rather than weaken the open, responsive, accountable, and representative democracy that all Americans deserve. Now is the time for policymakers at all levels to think deliberately and expansively about how to minimize AI’s dangers and increase its pro-democracy potential. That means more than just passing new laws and regulations that relate directly to election operations. It also includes holding AI developers and tech companies accountable for their products’ capacities to influence how people perceive facts and investing in the resources (including workforces and tools) and audit regimes that will... Policymakers should also establish guardrails for election officials and other public servants that allow them to use AI in ways that improve efficiency, responsiveness, and accountability while not inadvertently falling prey to the technology’s...

As artificial intelligence tools become cheaper and more widely available, government agencies and private companies are rapidly deploying them to perform basic functions and increase productivity. Indeed, by one estimate, global spending on artificial intelligence, including software, hardware, and services, will reach $154 billion this year, and more than double that by 2026. As in other government and private-sector offices, election officials around the country already use AI to perform important but limited functions effectively. Most election offices, facing budget and staff constraints, will undoubtedly face substantial pressure to expand the use of AI to improve efficiency and service to voters, particularly as the rest of the world adopts... In the course of writing this resource, we spoke with several election officials who are currently using or considering how to integrate AI into their work. While a number of election officials were excited about the ways in which new AI capabilities could improve the functioning of their offices, most expressed concern that they didn’t have the proper tools to...

They have good reason to worry. Countless examples of faulty AI deployment in recent years illustrate how AI systems can exacerbate bias, “hallucinate” false information, and otherwise make mistakes that human supervisors fail to notice. Any office that works with AI should ensure that it does so with appropriate attention to quality, transparency, and consistency. These standards are especially vital for election offices, where accuracy and public trust are essential to preserving the health of our democracy and protecting the right to vote. In this resource, we examine how AI is already being used in election offices and how that use could evolve as the technology advances and becomes more widely available. We also offer election officials a set of preliminary recommendations for implementing safeguards for any deployed or planned AI systems ahead of the 2024 vote.

A checklist summarizing these recommendations appears at the end of this resource. As AI adoption expands across the election administration space, federal and state governments must develop certification standards and monitoring regimes for its use both in election offices and by vendors. President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence marks a pivotal first step, as it requires federal regulators to develop guidelines for AI... Under its recently announced artificial intelligence roadmap, the Cybersecurity and Infrastructure Security Agency (CISA) will provide guidance for secure and resilient AI development and deployment, alongside recommendations for mitigating AI-enabled threats to critical infrastructure. But this is only a start. It remains unclear how far the development of these guidelines will go and what election systems they will cover.

The recommendations in this resource are meant to assist election officials as they determine whether and how to integrate and use AI in election administration, whether before or after new federal guidelines are published...

As more than 50 countries prepare for elections this year, artificial intelligence–generated media has begun to play a variety of roles in political campaigns, ranging from nefarious to innocuous to positive.

With six months until the U.S. general election, examining the use of AI in this year’s major global elections provides Americans with insights on what to expect in our own election and how election officials, legislators, and civil society should... Governments and civil society must work to fortify the electorate against such threats. Tactics include immediate actions, such as publicizing corrective information and beefing up online safeguards, as well as legislation, such as stronger curbs on deceptive online political advertising. In doing so, however, policymakers and advocates should keep in mind the various uses of AI in the political process and develop nuanced approaches that focus on the worst impacts without unduly limiting political... Earlier this year, for instance, AI-generated robocalls imitated President Biden’s voice, targeting New Hampshire voters and discouraging them from voting in the primary.

Earlier this year, an AI-generated image falsely depicting former president Trump with convicted sex trafficker Jeffrey Epstein and a young girl began circulating on Twitter. Meanwhile abroad, deepfakes circulated last year in the Slovakian election,... In January, the Chinese government apparently tried to deploy AI deepfakes to meddle in the Taiwanese election. And a wave of malicious AI-generated content is appearing in Britain ahead of its election, scheduled for July 4. One deepfake depicted a BBC newsreader, Sarah Campbell, falsely claiming that British Prime Minister Rishi Sunak promoted a scam investment platform. And as the Indian general election has gotten under way, deepfakes of popular deceased politicians appealing to voters as if they were still alive have become a popular campaign tactic. Sometimes, however, the use of... This raised some eyebrows given his role in the country’s military dictatorship, but there was no clear deception involved.
