AI Elections Reports: AI Elections Initiative

Bonisiwe Shabane

Reports from other organizations on the risks that advancements in AI pose to free and fair elections, including some proposed mitigations:

- This report gives a high-level overview of the impacts AI could have on US democracy. – Norman Eisen, Nicol Turner Lee, Colby Galliher, and Jonathan Katz
- This report describes the risk of AI-backed voter suppression and details some potential solutions.
- This report offers specific ways in which election officials can prepare for the impacts of AI.

One year ago this week, 27 artificial intelligence companies and social media platforms signed an accord that highlighted how AI-generated disinformation could undermine elections around the world.

The signers at a security conference in Munich included Google, Meta, Microsoft, OpenAI, and TikTok. They acknowledged the dangers, stating, “The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes.” The signatories agreed to eight commitments to mitigate the risks that generative AI poses to elections. This analysis assesses how the companies followed through on those commitments, based on their own reporting. At the time the accord was signed, the companies involved received positive attention for promising to act to ensure that their products would not interfere with elections.

While the Brennan Center, too, praised these companies for the accord, we also asked how the public should gauge whether the commitments were anything more than PR window-dressing. Companies had multiple opportunities to report on their progress over the past year, including through updates on the accord’s official website, responses to a formal inquiry from then-Senate Intelligence Committee Chair Mark Warner (D-VA),...

As 2024 saw the largest number of global elections since the advent of the internet, CDT’s Elections and Democracy team responded to cybersecurity and information integrity challenges multiplied by the rise of generative AI. We took a whole-of-society approach, collaborating with stakeholders across the public and private sectors. Recognizing the threats posed by AI-fueled misinformation and cyberattacks, we worked closely with state and local election officials—developing and hosting virtual trainings, as well as participating in tabletop exercises to help protect their...

To support policymakers and regulators, we provided feedback on federal and state legislation and rulemaking proposals concerning AI in elections. At the same time, we played a key role in coordinating the broader civil society response, leading a working group on AI and elections to foster greater impact. Through advocacy, advice, and original research, we called on tech companies to take meaningful steps to help their users access trusted information and combat malign influence campaigns on their platforms. Meanwhile, to strengthen public resilience, we engaged directly with communities—hosting virtual trainings for AARP community leaders and members nationwide, publishing blogs and op-eds highlighting misinformation threats voters should take into account, and sharing expertise...

Our research buttressed these efforts. In an original report with Proof News, we revealed how five major AI chatbots returned misleading information about the accessibility of voting for voters with disabilities, and we advocated directly to companies to fix...

We published comprehensive recommendations for how AI developers can help protect election integrity, and engaged with over a dozen companies to build on the commitments they made in the 2024 AI Elections Accord. In another report, we analyzed how social media companies changed their political advertising policies between 2020 and 2024. We also expanded our work into the international arena, examining how civil society responded to the rise of generative AI in a report that drew on case studies from Taiwan, Mexico, and incident-tracking projects...

The Role of Generative AI Use in 2024 Elections Worldwide

A high-level précis of the Technical Paper can be found in the Summary for Policymakers report, Generative AI in Electoral Campaigns: Mapping Global Patterns. GenAI is being deployed in many ways during elections, ranging from the creation of deepfake video and audio messages to sophisticated voter targeting.

What are the implications of GenAI for election administration and voter participation around the world? This assessment delivers the first global, data-driven analysis of its kind, designed to inform policy recommendations that enhance election administration, foster trust in electoral processes, and boost voter turnout. It is based on an analysis of an original data set of 215 incidents covering all 50 countries that held competitive national elections in 2024.

The International Panel on the Information Environment (IPIE) is an independent, global science organization providing scientific knowledge about the health of the world's information environment. Based in Switzerland, the IPIE offers policymakers, industry, and civil society actionable scientific assessments about threats to the information environment, including AI bias, algorithmic manipulation, and disinformation. The IPIE is the only scientific body systematically organizing, evaluating, and elevating research with the broad aim of improving the global information environment.

Hundreds of researchers worldwide contribute to the IPIE's reports.

The “first AI elections” in the United States will be held against the backdrop of unprecedented distrust in civic institutions, the political system, and traditional media. Rapid advancements in artificial intelligence (AI) mean that voters face a likely future of compelling deepfakes, highly targeted fraudulent messages, and compounding cybersecurity threats. While much about the 2024 elections is uncertain, we know bad actors will try to manipulate public opinion and sway voter behavior at key moments before polls open. Our country needs concerted leadership and strategic focus at the intersection of AI, elections, and social trust.

Aspen Digital, a program of the Aspen Institute, is launching the AI Elections Initiative as an ambitious new effort to strengthen US election resilience in the face of generative AI. Election officials, policymakers, the private sector (including tech leaders and experts), and the news media must all do their part in securing this cornerstone of American democracy. But these groups are not sufficiently communicating with each other. Operating in silos, they won’t be as effective. That’s why we believe it’s vital to bring experts together to better understand and learn from one another. In the coming weeks, we will begin posting details about our effort to convene essential parties and publish action-oriented resources. This work will be supported by an advisory council composed of cross-sectoral experts who will help enlighten and enhance our work. A sampling of the AI Elections Initiative’s events in the first quarter includes:

We know election preparedness is a whole-of-society challenge. Our approach has been informed by interviews with more than 60 experts across the tech industry, elections administration, media, civil society, and academia – all of whom identified critical risks and helped us chart... Effective preparedness will require leaders across sectors to anticipate and mitigate threats from:

All voters have a stake in election preparedness, especially those in communities historically targeted for election interference based on race. We expect bad actors will continue their long-standing practice of adapting new technologies to exploit rifts within American society in an effort to undermine confidence in democratic values by targeting voters in swing districts,... We will combat efforts to degrade our elections by empowering voters and thought leaders at this critical time. We believe technology can enable a positive future, one where AI promotes civic engagement, reinforces democratic values,...

That future is more likely if cross-sector leaders work together at the outset to combat the misuse of AI in civic life. The AI Elections Initiative and the broader team at Aspen Digital are excited to support the community of leaders working to rebuild social trust and to ensure civic participation remains a touchstone of American... Want to stay current on Aspen Digital’s work on AI elections and more? Sign up for our email list.

The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility.

Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions.

These models may appear neutral—politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet. At the same time, they operate as black boxes, designed and trained in ways users can’t see. Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence. Aspen Digital is supporting informed civic participation and social trust in the face of fast-evolving AI tools.

We’re building election resilience for the era of generative AI. Through action-oriented convenings and resources, the AI Elections Initiative is empowering those who administer elections, make policy, lead tech companies, and shape the information environment, so that they can fulfill their essential roles in strengthening elections, the cornerstone of American democracy. Sign up for the AI Elections Newsletter.

Generative AI (GenAI) has emerged as a transformative force in elections playing out across the world. In a series of reports, the Center for Media Engagement investigates GenAI’s role before, during, and after several key global elections in 2024.

The reports examine the potential impacts of GenAI on key democratic processes in the U.S., Europe, India, Mexico, and South Africa. These insights are critical to groups working to sustain and advance democracies in the face of constant transformation of the digital environment and associated communication processes. Below we share the emerging trends developing around elections and AI in each of these regions. To view the region’s report in full, click on the link in the title. The U.S.: GenAI, Disinformation, and Data Rights in U.S. Elections

Europe: Political Deepfakes and Misleading Chatbots – Understanding the Use of GenAI in Recent European Elections
