How Malicious AI Swarms Can Threaten Democracy: The Fusion of Agentic

Bonisiwe Shabane

Malicious AI swarms are not a sci-fi fantasy.

They are here, evolving, and learning to destabilize democracies faster than regulators can even define the term. As a journalist, I have watched governments stumble, corporations lie, and citizens drown in propaganda. This is not just about fake news. It is about the weaponization of intelligence at scale, and it threatens the very concept of free elections. Once upon a time, online manipulation meant a troll farm in St. Petersburg or a click farm in Manila.

Now, a single operator can unleash thousands of autonomous agents. These malicious AI swarms coordinate like insects — overwhelming fact-checkers, hijacking trends, and manufacturing outrage in real time. Unlike traditional bots, these agents are adaptive. They read context, generate convincing text, mimic human errors, and even argue with each other to appear authentic. The mainstream narrative still treats disinformation as a nuisance. But this is no longer noise; it is an attack vector.
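The coordination described above is also a detection surface: accounts that publish near-duplicate messages within minutes of one another leave a statistical fingerprint even when each message reads naturally on its own. A minimal sketch of that idea, assuming posts arrive as `(account_id, timestamp, text)` tuples; the function names, shingle size, and thresholds are illustrative, not drawn from any production system:

```python
import hashlib
from datetime import datetime, timedelta

def shingle_fingerprint(text: str, k: int = 4) -> frozenset:
    """Fingerprint a post as the set of hashed k-word shingles."""
    words = text.lower().split()
    shingles = [" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))]
    return frozenset(hashlib.md5(s.encode()).hexdigest()[:8] for s in shingles)

def jaccard(a: frozenset, b: frozenset) -> float:
    """Overlap between two shingle sets (1.0 = identical text)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_coordinated_accounts(posts, text_sim=0.6, window=timedelta(minutes=5)):
    """Flag account pairs that post near-duplicate text inside a short
    time window -- a crude signal of swarm-like coordination."""
    fps = [(acct, ts, shingle_fingerprint(txt)) for acct, ts, txt in posts]
    flagged = set()
    for i in range(len(fps)):
        for j in range(i + 1, len(fps)):
            a_acct, a_ts, a_fp = fps[i]
            b_acct, b_ts, b_fp = fps[j]
            if a_acct == b_acct:
                continue
            if abs(a_ts - b_ts) <= window and jaccard(a_fp, b_fp) >= text_sim:
                flagged.add(frozenset((a_acct, b_acct)))
    return flagged
```

Because the agents are adaptive, text similarity alone is easy to evade; real integrity teams combine timing, follower-graph, and content signals rather than relying on any single heuristic like this one.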

Democracy depends on a fragile foundation: trust. Citizens must trust that votes matter, that information is reliable, and that debate is grounded in reality. Malicious AI swarms attack all three pillars. In this sense, swarms are not just tools of disinformation. They are weapons designed to collapse the very logic of democratic society.

🚨 Our preprint “How Malicious AI Swarms Can Threaten Democracy” is now online! We show how coordinated multi-agent LLM swarms can infiltrate communities and craft synthetic grassroots “consensus,” poison future training data and fragment our shared reality, and erode institutional trust, and we outline what policymakers, labs, and platforms can do...

2️⃣ Model-side: persuasion-risk evaluations, provenance passkeys, robust watermarking.
3️⃣ System-level: a UN-backed AI Influence Observatory to provide global early-warning and incident certification.

Grateful to have co-written this policy piece with an extraordinary, truly interdisciplinary team: Daniel Thilo Schroeder, Meeyoung (Mia) Cha, Andrea Baronchelli, Nick Bostrom, Nicholas Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina... If you work on AI governance, platform integrity, or democratic resilience, we’d love your feedback.
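Of the model-side measures listed, robust watermarking is the easiest to make concrete. A toy sketch in the style of published green-list schemes: at each step a pseudo-random subset of the vocabulary is marked "green," seeded by the previous token, and a generator biased toward green tokens leaves a trace a detector can measure as a z-score. The vocabulary, `GAMMA`, and hashing choices below are illustrative assumptions, not the method of any specific paper:

```python
import hashlib
import math
import random

GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step

def green_list(prev_token: str, vocab: list) -> set:
    """Deterministically pick a green subset of the vocab,
    seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(GAMMA * len(vocab))])

def watermark_z_score(tokens: list, vocab: list) -> float:
    """Count tokens that land in their green list and return the z-score
    against the null hypothesis of unwatermarked (random) text."""
    hits, trials = 0, 0
    for prev, tok in zip(tokens, tokens[1:]):
        trials += 1
        if tok in green_list(prev, vocab):
            hits += 1
    if trials == 0:
        return 0.0
    expected = GAMMA * trials
    return (hits - expected) / math.sqrt(trials * GAMMA * (1 - GAMMA))
```

A z-score near zero is consistent with human text; a large positive score indicates watermarked output. The catch, which motivates the "robust" qualifier, is that paraphrasing or token substitution by an adversary erodes the signal.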


Reading this report has deepened my concern about the future of democracy. The emergence of malicious AI swarms—capable of infiltrating communities, fabricating consensus, and evading detection—poses risks to public trust and electoral integrity that we cannot ignore. As AI-driven disinformation becomes more sophisticated, urgent action is needed to protect our democratic institutions before these threats become entrenched.

The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics. For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails for these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater...
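Alongside detection models, a complementary guardrail is content provenance: a publisher cryptographically attests to what it produced, so tampered or unsigned copies stand out. A minimal sketch using a symmetric HMAC; a real provenance system such as C2PA uses asymmetric signatures and certificate chains, and the key and manifest fields here are hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical publisher key -- in practice this would be an asymmetric
# signing key held by the content creator or capture device.
PUBLISHER_KEY = b"demo-secret-key"

def attach_provenance(content: str, origin: str) -> dict:
    """Wrap content in a manifest carrying an HMAC attestation."""
    manifest = {"content": content, "origin": origin}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(manifest: dict) -> bool:
    """Recompute the HMAC over content + origin and compare attestations."""
    payload = json.dumps(
        {"content": manifest.get("content"), "origin": manifest.get("origin")},
        sort_keys=True,
    ).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest.get("signature", ""), expected)
```

The design choice worth noting is that provenance inverts the detection problem: instead of trying to prove content is synthetic, it lets authentic content prove itself, which scales better against adaptive adversaries.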

Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement from connection to action. Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful.


