How Malicious AI Swarms Can Threaten Democracy - Semantic Scholar

Bonisiwe Shabane

The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics. For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails for these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater...
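The first of those guardrails, detectors for generated content, is in essence a binary text classifier trained to separate human from machine writing. Below is a minimal sketch of that shape, with a linear model standing in for the neural network and a toy corpus as an obvious assumption (real detectors fine-tune large neural classifiers on large paired corpora):

```python
# Toy generated-text detector: character n-gram features + logistic
# regression on a handful of labeled examples. This only illustrates the
# train-then-score shape of such detectors, not a production system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "tbh the council meeting ran long, notes tmrw",              # human
    "ugh my train was late AGAIN, third time this week",         # human
    "As an engaged citizen, I firmly believe this policy "
    "will deliver transformative benefits for our community.",   # generated
    "It is important to note that this initiative represents "
    "a significant step toward a brighter shared future.",       # generated
]
labels = [0, 0, 1, 1]  # 0 = human, 1 = generated

detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(texts, labels)

new_post = "It is important to note that this policy will deliver benefits."
print(detector.predict_proba([new_post])[0, 1])  # P(generated), toy estimate
```

Character n-grams are a deliberately cheap stand-in here; the essay's point survives either way, since any such classifier sits in an arms race with the generators it tries to flag.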

Advances in AI portend a new era of sophisticated disinformation operations. While individual AI systems already create convincing—and at times misleading—information, an imminent development is the emergence of malicious AI swarms.

These systems can coordinate covertly, infiltrate communities, evade traditional detectors, and run continuous A/B tests, with round-the-clock persistence. The result can include fabricated grassroots consensus, fragmented shared reality, mass harassment, voter micro-suppression or mobilization, contamination of AI training data, and erosion of institutional trust. With increasing vulnerabilities in democratic processes worldwide, we urge a three-pronged response: (1) platform-side defenses—always-on swarm-detection dashboards, pre-election high-fidelity swarm-simulation stress-tests, transparency audits, and optional client-side "AI shields" for users; (2) model-side safeguards—standardized persuasion-risk...
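The first prong's coordination signal is concrete enough to sketch: a swarm betrays itself when many accounts post near-duplicate text on tightly correlated schedules. Here is a minimal, illustrative detector along those lines; the thresholds, features, and toy feed are assumptions, not an algorithm from the paper.

```python
# Toy coordination detector: flag account pairs whose posts are both
# textually near-duplicate and tightly synchronized in time. Thresholds
# and features are illustrative assumptions.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# (account, unix_timestamp, text) -- toy feed
posts = [
    ("a1", 1000, "Candidate X betrayed this town, pass it on"),
    ("a2", 1004, "Candidate X betrayed this town, pass it on!"),
    ("a3", 1009, "candidate x betrayed this town... pass it on"),
    ("b1", 5000, "Lovely weather for the farmers market today"),
]

TEXT_SIM = 0.9   # near-duplicate content threshold (assumed)
MAX_DELTA = 30   # seconds; "synchronized" posting window (assumed)

vectors = TfidfVectorizer().fit_transform(p[2] for p in posts)
sim = cosine_similarity(vectors)

suspicious_pairs = [
    (posts[i][0], posts[j][0])
    for i, j in combinations(range(len(posts)), 2)
    if sim[i, j] >= TEXT_SIM and abs(posts[i][1] - posts[j][1]) <= MAX_DELTA
]
print(suspicious_pairs)  # [('a1', 'a2'), ('a1', 'a3'), ('a2', 'a3')]
```

A real swarm-detection dashboard would add graph clustering, account-age and behavioral features, and adversarial robustness; this sketch only shows why content similarity alone is too weak a signal without the timing dimension.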

🚨 Our preprint "How Malicious AI Swarms Can Threaten Democracy" is now online! We show how coordinated multi-agent LLM swarms can: infiltrate communities and craft synthetic grassroots "consensus," poison future training data and fragment our shared reality, erode institutional trust—and what policymakers, labs, and platforms can do:

1️⃣ Platform-side: always-on swarm-detection dashboards, pre-election swarm-simulation stress-tests, transparency audits, optional client-side "AI shields."
2️⃣ Model-side: persuasion-risk evaluations, provenance passkeys, robust watermarking.
3️⃣ System-level: a UN-backed AI Influence Observatory to provide global early-warning and incident certification.

If you work on AI governance, platform integrity, or democratic resilience, we'd love your feedback. Grateful to have co-written this policy piece with an extraordinary, truly interdisciplinary team: Daniel Thilo Schroeder, Meeyoung (Mia) Cha, Andrea Baronchelli, Nick Bostrom, Nicholas Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina...
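The "robust watermarking" safeguard in 2️⃣ typically reduces to a statistical test: generation is biased toward a pseudorandom "green list" of tokens, and a verifier checks whether green tokens occur more often than chance would allow (Kirchenbauer et al.'s scheme is the canonical example). A minimal sketch of the detection side, with the hash construction and threshold as illustrative assumptions:

```python
# Toy green-list watermark detector (Kirchenbauer-style): score how far a
# text's fraction of "green" tokens departs from the ~50% expected of
# unwatermarked text. The hash and z-threshold are illustrative; real
# schemes key the green list on the model's actual tokenizer and context.
import hashlib
import math

GREEN_FRACTION = 0.5  # unwatermarked baseline: half the vocab is "green"

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by context."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # ~50% of (context, token) pairs are green

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green count vs. the chance expectation."""
    n = len(tokens) - 1  # number of (context, token) pairs
    greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

tokens = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {watermark_z_score(tokens):.2f}")  # |z| above ~4 suggests a watermark
```

The appeal for swarm defense is that detection needs no access to the generating model, only its watermark key; the open question the preprint's "robust" qualifier gestures at is surviving paraphrase and translation attacks.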

There are some days / some research that has me regularly looking for the "God mode" off switch...

Thanks Matt! This is another great example of why we need to build trusted content ecosystems that combine content provenance, verification of genuine human activity, and reputational mechanisms to create economic value for human attention and...
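The provenance leg of that ecosystem is, at bottom, a digital signature over a content manifest that any client can check against the publisher's public key (the C2PA standard takes roughly this shape). A minimal sketch using Ed25519 from the third-party cryptography package; the manifest fields are illustrative assumptions:

```python
# Toy provenance check: a publisher signs a content manifest with Ed25519;
# any client can verify the signature against the publisher's public key.
# Manifest fields are illustrative; real systems follow standards like C2PA.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: hash the content and sign a small manifest describing it.
private_key = Ed25519PrivateKey.generate()
manifest = json.dumps(
    {
        "sha256": hashlib.sha256(b"full article body goes here").hexdigest(),
        "author": "newsroom@example.org",      # hypothetical publisher ID
        "captured": "2025-01-01T12:00:00Z",
    },
    sort_keys=True,
).encode()
signature = private_key.sign(manifest)

# Client side: verify the manifest against the publisher's public key.
public_key = private_key.public_key()
try:
    public_key.verify(signature, manifest)
    print("provenance intact")
except InvalidSignature:
    print("manifest altered or signer unknown")
```

Signatures only prove who published something, not that it is true or human-made, which is why the comment pairs provenance with human-activity verification and reputation.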

Reading this report has deepened my concern about the future of democracy. The emergence of malicious AI swarms—capable of infiltrating communities, fabricating consensus, and evading detection—poses risks to public trust and electoral integrity that we cannot ignore. As AI-driven disinformation becomes more sophisticated, urgent action is needed to protect our democratic institutions before these threats become entrenched.

Read more in our paper: "How we built the Torment Nexus from the famous sci-fi novel, 'Don't Build The Torment Nexus'"
