AI’s Impact on Elections: New Policy Paper Highlights Urgent Global Challenge

Bonisiwe Shabane

[Photo caption: Future Shift Labs Co-founder Sagar Vishnoi addresses IPE25 in Cape Town (Photo: Business Wire)]

CAPE TOWN, South Africa--(BUSINESS WIRE)--Artificial intelligence (AI) is transforming political campaigns worldwide, creating unprecedented opportunities while amplifying risks for democratic processes. According to the newly launched policy paper, The Pervasive Influence of AI on Global Political Campaigns 2024, AI-driven techniques such as generative AI (genAI) have revolutionized voter engagement through personalized messaging. However, genAI has also emerged as a double-edged sword: while enabling effective campaigning, it has been a significant source of disinformation, eroding trust in democratic institutions. For instance, the United States, classified as “severely polarized” and ranked 3rd among 28 countries for polarization, illustrates how AI-generated propaganda exacerbates societal divisions.

Further, the U.S. ranks 1st in distrust of social media, exposing vulnerabilities to AI-driven disinformation campaigns. Russia’s Foreign Influence and Malign Interference (FIMI) activities have prominently leveraged AI tools to spread targeted propaganda. Generative AI platforms like “Doppelganger” have repeatedly been used to sow disinformation and undermine public trust globally. The study underscores the urgent need for governments to regulate AI in elections to prevent future misuse and safeguard democratic integrity.

The policy paper, authored by Alisha Butala, Dr. Christopher Nehring, and Mateusz Łabuz, was developed by Future Shift Labs, a global think tank exploring AI and governance. Officially unveiled on the 23rd of January at the IPE Campaign Expo 2025 in Cape Town, South Africa, the paper provides actionable insights and global case studies. It emphasizes the importance of clear regulations, ethical standards, and investment in public education to combat AI-enabled electoral interference.


Nitin Narang, Founder of Future Shift Labs, who played a pivotal role in bringing this project to fruition, remarked: “At its core, this research is about people – voters, citizens, and communities. Our team’s work is driven by a shared commitment to understanding how AI is reshaping our democratic landscape.

By shedding light on these critical issues, we hope to contribute to a more informed, inclusive, and resilient global conversation.” Dr. Israel Govender, a thought leader in technology governance, added: “As we reflect on AI’s rapid evolution, our choices today will shape the future of democracy. This research provides a critical perspective on the impact of technology on our values and institutions, serving as a valuable resource for responsible innovation.”

AI Chatbots Are Shockingly Good at Political Persuasion

Chatbots can measurably sway voters’ choices, new research shows.

The findings raise urgent questions about AI’s role in future elections.

By Deni Ellis Béchard, edited by Claire Cameron

[Photo caption: Stickers sit on a table during in-person absentee voting on November 1, 2024, in Little Chute, Wisconsin. Election day is Tuesday, November 5.]

Forget door knocks and phone banks—chatbots could be the future of persuasive political campaigns.

The Brookings Institution, Washington, District of Columbia

By Melanie W. Sisson, Colin Kahl, Sun Chenghao, and Xiao Qian

The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information. Our research suggests their influence is already rippling through our democracy.

LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to synthesize voter opinions. These models may appear neutral—politically unbiased, merely summarizing facts from different sources found in their training data or on the internet. Yet they operate as black boxes, designed and trained in ways users can’t see.

Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence.

Controversial uses of artificial intelligence (AI) in elections have made headlines globally. Whether it’s fully AI-generated mayoral contenders, incarcerated politicians using AI to deliver speeches from prison, or deepfakes used to falsely incriminate candidates, it’s clear that the technology is here to stay. Yet these viral stories show only one side of the picture.

Beyond the headlines, AI is also starting to be used in the quieter parts of elections: the day-to-day work of electoral management, from information provision and data analysis to planning, administration, and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy, and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems. The International Institute for Democracy and Electoral Assistance (International IDEA) has seized this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized, while avoiding... Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacities and to jointly envision how to best approach... These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it. Nearly half of all workshop participants rated their understanding of AI as low.

However, a third of the participating organizations indicated that they are already using AI in their election-related processes. Even so, both AI skeptics and enthusiasts shared a cautious outlook during the workshops. Furthermore, EMBs have flagged an immense dual burden: developing internal capacity to embrace technological innovation while also mitigating disruptions to electoral information integrity by bad-faith actors. Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet these offers are often driven by commercial incentives and speedy deployment timelines, and not all products are designed with the specific legal, technical, and human-rights sensitivities of elections in mind. With something as sacred as elections, it has become ever more important that the products on offer give due consideration to election-related sensitivities around cybersecurity, data protection, accuracy, and other human rights...

For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions. AI is also contributing to broader changes in the electoral environment that extend far beyond electoral administration. Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic images, audio, and video deepfakes. While not all of these are used with harmful intention, in many contexts they have been used to confuse voters, defame competing candidates, or manipulate public debate, resulting in public disillusionment and fatigue around...

Chiara Vargiu is in the Amsterdam School of Communication Research (ASCoR), University of Amsterdam, 1018 WV Amsterdam, the Netherlands. Alessandro Nai is in the Amsterdam School of Communication Research (ASCoR), University of Amsterdam, 1018 WV Amsterdam, the Netherlands.

The world is getting used to ‘talking’ to machines. Technology that just months ago seemed improbable or marginal has erupted quickly into the everyday lives of millions, perhaps billions, of people. Generative conversational artificial-intelligence systems, such as OpenAI’s ChatGPT, are being used to optimize tasks, plan holidays and seek advice on matters ranging from the trivial to the existential — a quiet exchange of words... Against this backdrop, the urgent question is: can the same conversational skills that make AI systems helpful assistants also turn them into powerful political actors? In a pair of studies [2,3] in Nature and Science, researchers show that dialogues with large language models (LLMs) can shift people’s attitudes towards political candidates and policy issues. The researchers also identify which features of conversational AI systems make them persuasive, and what risks they might pose for democracy.

A conversation with a chatbot can shift people’s political views—but the most persuasive models also spread the most misinformation.

In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began.

Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing... The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections.

“One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says.
