The Impact of Advanced AI Systems on Democracy
Advanced artificial intelligence (AI) systems capable of generating humanlike text and multimodal content are now widely available. Here we ask what impact this will have on the democratic process. We consider the consequences of AI for citizens' ability to make educated and competent choices about political representatives and issues (epistemic impacts). We explore how AI might be used to destabilize or support the mechanisms, including elections, by which democracy is implemented (material impacts). Finally, we discuss whether AI will strengthen or weaken the principles on which democracy is based (foundational impacts). The arrival of new AI systems clearly poses substantial challenges for democracy.
However, we argue that AI systems also offer new opportunities to educate and learn from citizens, strengthen public discourse, help people to find common ground, and reimagine how democracies might work better. Competing interests: The following authors are full- or part-time remunerated employees of commercial developers of AI technology: M. Bakker, I.G., N.M., M.H.T. and M. Botvinick (Google DeepMind), E.D. and D.G. (Anthropic), T.E. (OpenAI), and A.P. (Fundamental AI Research (FAIR), Meta). C.S. and K.H. are part-time remunerated government employees (at the UK AI Security Institute).
D.S. and S.H. are employees of the non-profit organization Collective Intelligence Project. A.O. is an employee of the AI & Democracy Foundation. E.S.
is an employee of Demos. None of these employers had any role in the preparation of the manuscript or the decision to publish. The remaining authors declare no competing interests. Nature Human Behaviour (2025).
AI Is Transforming Politics, Much Like Social Media Did
The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now, another wave of technology is transforming how voters learn about elections, only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini are becoming the new vessels (and sometimes arbiters) of political information. Our research suggests their influence is already rippling through our democracy.
LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to synthesize voter opinions. These models may appear neutral: politically unbiased, merely summarizing facts from the many sources in their training data or on the internet. Yet they operate as black boxes, designed and trained in ways users cannot see.
Researchers are actively trying to unravel the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to "personalize" information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don't yet know the extent of that influence.

A Massive Study of Political Persuasion Shows AIs Have, at Best, a Weak Effect
Roughly two years ago, Sam Altman tweeted that AI systems would be capable of superhuman persuasion well before achieving general intelligence, a prediction that raised concerns about the influence AI could have over democratic elections. To see if conversational large language models can really sway the political views of the public, scientists at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and many other institutions performed by far the largest study of AI persuasion to date.
It turned out that political AI chatbots fell far short of superhuman persuasiveness, but the study raises more nuanced issues about our interactions with AI. The public debate about AI's impact on politics has largely revolved around notions drawn from dystopian sci-fi. Large language models have access to essentially every fact and story ever published about any issue or candidate. They have processed information from books on psychology, negotiation, and human manipulation. They can draw on immense computing power in huge data centers worldwide. On top of that, they can often access a wealth of personal information about individual users, accumulated over hundreds of online interactions.
Talking to a powerful AI system is, in effect, interacting with an intelligence that knows everything about everything, as well as almost everything about you. When viewed this way, LLMs can indeed appear scary. The goal of this new gargantuan AI persuasiveness study was to break such scary visions down into their constituent pieces and see whether they actually hold water.

Responsible AI Impact Report
All Tech Is Human has just released its inaugural Responsible AI Impact Report, a crucial roadmap of the most urgent risks, emerging safeguards, and public-interest solutions shaping how AI will impact society... The report provides an assessment of the global Responsible AI (RAI) landscape, highlighting some of the most influential civil society contributions that shape AI governance, assurance, safety, and... It documents a RAI field in transition: from principles to practice, from voluntary guidelines to enforceable standards, and from private AI dominance toward emerging visions of Public AI designed for collective benefit.
The Responsible AI Impact Report outlines how the Responsible AI ecosystem has matured into a more coordinated, evidence-driven, and public-interest-oriented field. Civil society organizations play a central role in shaping regulatory debates, expanding global governance frameworks, and surfacing real-world harms across safety, security, privacy, fairness, labor, climate, and democratic integrity. The report emphasizes that frontier and agentic systems are amplifying existing risks while simultaneously exposing the inadequacy of current safeguards. It examines areas ranging from biosecurity uplift and data poisoning to synthetic media manipulation, fraud, biased biometrics, and the potential psychological harms of AI companions. We highlight the community's growing emphasis on rigorous AI assurance, including interoperable standards, lifecycle documentation, community-aligned benchmarks, independent audits, and participatory red teaming that reflects diverse global contexts. The report looks to 2026 with clear priorities: strengthening rights-anchored regulation amid deregulatory pressures; scaling assurance practices that generate usable, audit-ready evidence; closing capacity gaps for nonprofits and public agencies; safeguarding information integrity; guiding...
What Have We Learned About AI in Elections?
Controversial uses of artificial intelligence (AI) in elections have made headlines globally. Whether it's fully AI-generated mayoral contenders, incarcerated politicians using AI to deliver speeches from prison, or deepfakes used to falsely incriminate candidates, it's clear that the technology is here to stay. Yet these viral stories show only one side of the picture. Beyond the headlines, AI is also starting to be used in the quieter parts of elections, the day-to-day work of electoral management: from information provision and data analysis to planning, administration, and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy, and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems. The International Institute for Democracy and Electoral Assistance (International IDEA) has been seizing this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized, while avoiding...
Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacity and to jointly envision how best to approach... These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it. Nearly half of all workshop participants rated their understanding of AI as low; even so, a third of the participating organizations indicated that they are already using AI in their election-related processes. Both AI skeptics and enthusiasts shared a cautious outlook during the workshops. EMBs also flagged an immense dual burden: developing internal capacity to embrace technological innovation while mitigating disruptions to electoral information integrity by bad-faith actors.
Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet these offers are often driven by commercial incentives and rapid deployment timelines, and not all products are designed with the specific legal, technical, and human-rights sensitivities of elections in mind. With something as sacred as elections, it has become ever more important that the products on offer give due consideration to election-related sensitivities around cybersecurity, data protection, accuracy, and other human rights... For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions. AI is also contributing to broader changes in the electoral environment that extend far beyond electoral administration. Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic images, audio, and video deepfakes.
While not all of these tools are used with harmful intent, in many contexts they have been used to confuse voters, defame competing candidates, or manipulate public debate, resulting in public disillusionment and fatigue around...

AI Chatbots Can Sway Voters Better Than Political Advertisements
A conversation with a chatbot can shift people's political views, but the most persuasive models also spread the most misinformation. In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. "Hello. My name is Ashley, and I'm an artificial intelligence volunteer for Shamaine Daniels's run for Congress," the calls began. Daniels didn't ultimately win.
But maybe those calls helped her cause: new research reveals that AI chatbots can shift voters' opinions in a single conversation, and they're surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate; in fact, the researchers found, the most persuasive models said the most untrue things. The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. "One conversation with an LLM has a pretty meaningful effect on salient election choices," says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study.