How Malicious AI Swarms Can Threaten Democracy

Bonisiwe Shabane

Advances in AI portend a new era of sophisticated disinformation operations.

While individual AI systems already create convincing—and at times misleading—information, an imminent development is the emergence of malicious AI swarms. These systems can coordinate covertly, infiltrate communities, evade traditional detectors, and run continuous A/B tests, with round-the-clock persistence. The result can include fabricated grassroots consensus, fragmented shared reality, mass harassment, voter micro-suppression or mobilization, contamination of AI training data, and erosion of institutional trust. With increasing vulnerabilities in democratic processes worldwide, we urge a three-pronged response: (1) platform-side defenses—always-on swarm-detection dashboards, pre-election high-fidelity swarm-simulation stress tests, transparency audits, and optional client-side "AI shields" for users; (2) model-side safeguards—standardized persuasion-risk evaluations, provenance passkeys, and robust watermarking; and (3) system-level oversight—a UN-backed AI Influence Observatory providing global early warning and incident certification.
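The platform-side items in that list can be made concrete. Below is a minimal sketch, in Python, of one signal an always-on swarm-detection dashboard might track: pairs of accounts that post near-duplicate text within a short time window. The accounts, toy data, and thresholds are invented for illustration; a production system would fuse many such weak signals rather than rely on this one.

```python
# Minimal sketch of one swarm-detection signal: account pairs that post
# near-duplicate text within a short window. Toy data and thresholds.
from itertools import combinations
from datetime import datetime, timedelta

# (account, timestamp, text) -- invented examples
posts = [
    ("acct_a", datetime(2025, 1, 1, 12, 0), "Candidate X betrayed voters on the dam project"),
    ("acct_b", datetime(2025, 1, 1, 12, 2), "Candidate X betrayed the voters on the dam project!"),
    ("acct_c", datetime(2025, 1, 1, 18, 0), "Lovely weather at the rally today"),
]

def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two posts: 0 = disjoint, 1 = identical."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def flag_pairs(posts, sim_threshold=0.7, window=timedelta(minutes=10)):
    """Return account pairs posting near-duplicates within `window` of each other."""
    flagged = set()
    for (a1, t1, x1), (a2, t2, x2) in combinations(posts, 2):
        if a1 != a2 and abs(t1 - t2) <= window and jaccard(x1, x2) >= sim_threshold:
            flagged.add(tuple(sorted((a1, a2))))
    return flagged

print(flag_pairs(posts))  # {('acct_a', 'acct_b')}
```

In practice the Jaccard similarity would be replaced by embedding similarity and the window tuned per platform, but the shape of the computation, pairwise comparison over content and timing, stays the same.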

The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics. For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails for these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater...

Just a month after its introduction, ChatGPT, the generative artificial intelligence (AI) chatbot, hit 100 million monthly users, making it the fastest-growing application in history. For context, it took the video-streaming service Netflix, now a household name, three and a half years to reach one million monthly users. But unlike Netflix, the meteoric rise of ChatGPT and its potential for good or ill sparked considerable debate.

Would students be able to use, or rather misuse, the tool for research or writing? Would it put journalists and coders out of business? Would it "hijack democracy," as one New York Times op-ed put it, by enabling mass, phony inputs to perhaps influence democratic representation?1 And most fundamentally (and apocalyptically), could advances in artificial intelligence actually pose...

Sarah Kreps is the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and the director of the Tech Policy Institute at Cornell University. Doug Kriner is the Clinton Rossiter Professor in American Institutions in the Department of Government at Cornell University.

New technologies raise new questions and concerns of different magnitudes and urgency. For example, the fear that generative AI—artificial intelligence capable of producing new content—poses an existential threat is neither plausibly imminent nor necessarily plausible. Nick Bostrom's paperclip scenario, in which a machine programmed to optimize paperclips eliminates everything standing in its way of achieving that goal, is not on the verge of becoming reality.3 Whether children or university...

The employment consequences of generative AI will ultimately be difficult to adjudicate, since economies are complex and it is hard to isolate the net effect of AI-instigated job losses versus industry gains. Yet the potential consequences for democracy are immediate and severe. Generative AI threatens three central pillars of democratic governance: representation, accountability, and, ultimately, the most important currency in a political system—trust.


🚨 Our preprint "How Malicious AI Swarms Can Threaten Democracy" is now online! We show how coordinated multi-agent LLM swarms can infiltrate communities and craft synthetic grassroots "consensus," poison future training data, fragment our shared reality, and erode institutional trust—and what policymakers, labs, and platforms can do about it. Grateful to have co-written this policy piece with an extraordinary, truly interdisciplinary team: Daniel Thilo Schroeder, Meeyoung (Mia) Cha, Andrea Baronchelli, Nick Bostrom, Nicholas Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina... 1️⃣ Platform-side: swarm-detection dashboards, pre-election swarm-simulation stress tests, transparency audits, and optional client-side "AI shields." 2️⃣ Model-side: persuasion-risk evaluations, provenance passkeys, robust watermarking. 3️⃣ System-level: a UN-backed AI Influence Observatory to provide global early-warning and incident certification. If you work on AI governance, platform integrity, or democratic resilience, we'd love your feedback.
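Of those model-side safeguards, robust watermarking is the most mechanically specific. One published approach, "green-list" token watermarking (Kirchenbauer et al., 2023), biases generation toward a secretly keyed subset of tokens; a detector holding the same key then tests whether a text is improbably green-heavy. The sketch below is a simplification under that assumption, with invented key and numbers, not any lab's reference implementation.

```python
# Sketch of statistical watermark detection in the style of "green-list"
# token watermarking (Kirchenbauer et al., 2023). Assumes the detector
# shares the secret seed used at generation time; values are illustrative.
import hashlib
import math

SECRET_SEED = b"shared-secret"   # hypothetical shared key
GREEN_FRACTION = 0.5             # expected green rate in unwatermarked text

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to the green list, keyed on context."""
    digest = hashlib.sha256(SECRET_SEED + prev_token.encode() + token.encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_z_score(tokens: list[str]) -> float:
    """z-score of observed green-token count against the unwatermarked baseline."""
    n = len(tokens) - 1  # number of (context, token) pairs scored
    greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# A z-score far above ~4 would suggest watermarked (model-generated) text;
# unwatermarked text should hover near zero.
```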

There are some days / some research that has me regularly looking for the "God mode" off switch... Thanks Matt! This is another great example of why we need to build trusted content ecosystems that combine content provenance, verification of genuine human activity, and reputational mechanisms to create economic value for human attention and...
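The content-provenance piece of that ecosystem has a simple core shape: content travels with a signed manifest, and any client re-verifies it before trusting the content. Real deployments such as C2PA use public-key certificates and chains of custody; the HMAC and key below are self-contained stand-ins chosen only to keep the sketch runnable.

```python
# Toy illustration of content provenance: content ships with a signed
# manifest, and clients re-verify before trusting it. Real systems use
# public-key signatures; the HMAC here is a self-contained stand-in.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-key"  # hypothetical publisher secret

def sign_manifest(content: bytes, author: str) -> dict:
    """Produce a manifest binding `author` to a hash of `content`."""
    manifest = {"author": author, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content hash still matches."""
    claimed = {k: v for k, v in manifest.items() if k != "sig"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        manifest["sig"], hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = hashlib.sha256(content).hexdigest() == claimed["sha256"]
    return ok_sig and ok_hash

article = b"Original reporting, unaltered."
m = sign_manifest(article, "Newsroom A")
assert verify(article, m)             # intact content verifies
assert not verify(article + b"!", m)  # any tampering breaks the hash
```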

Reading this report has deepened my concern about the future of democracy. The emergence of malicious AI swarms—capable of infiltrating communities, fabricating consensus, and evading detection—poses risks to public trust and electoral integrity that we cannot ignore. As AI-driven disinformation becomes more sophisticated, urgent action is needed to protect our democratic institutions before these threats become entrenched. Read more in our paper: "How we built the Torment Nexus from the famous sci-fi novel, 'Don't Build The Torment Nexus.'"

Since the explosion of generative artificial intelligence over the past two years, the technology has demeaned or defamed opponents and – for the first time, officials and experts said – begun to have an...

Free and easy to use, AI tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not say or appearing in places they were not –... The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal. In Romania, a Russian influence operation using AI tainted the first round of last year’s presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first big election in which AI played a decisive role in the outcome. It is unlikely to be the last.

As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.

Last week's leak of the U.S. Department of Education's proposed "Compact for Academic Excellence in Higher Education" drew intense reactions across academia. Critics call it government overreach threatening free expression, while supporters see a chance for reform and renewed trust between universities and policymakers. Danielle Allen, James Bryant Conant University Professor at Harvard University and director of the Democratic Knowledge Project and the Allen Lab for Democracy Renovation, weighs in.

Amid rising illiberalism, Danielle Allen urges a new agenda to renew democracy by reorienting institutions, policymaking, and civil society around the intentional sharing of power. Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement, from connection to action. Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful. Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical "how-to's" of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we'll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively.

The phrase "malicious AI swarms" is no longer confined to research papers or speculative fiction.

It has entered our political vocabulary, our media headlines, and, increasingly, our everyday reality. Autonomous agents powered by generative artificial intelligence are learning to act not as isolated bots, but as coordinated swarms capable of overwhelming digital spaces, reshaping narratives, and destabilizing the very foundations of democracy. This cornerstone analysis examines what malicious AI swarms are, how they operate, why they target democracy, and what must be done to confront them. It is not enough to admire their danger. We must expose the illusion of control governments claim, dissect the failures of regulation, and call for radical action before democracy collapses under the weight of machine-driven chaos.

In the early 2010s, disinformation was synonymous with troll farms in St. Petersburg or click farms in Manila. The world saw coordinated human labor used to flood social media with propaganda. By the late 2010s, simple bots joined the fray: automated accounts spamming hashtags, sharing links, or amplifying conspiracies. But malicious AI swarms represent a leap. Instead of static bots, they are networks of autonomous agents capable of coordinating covertly, adapting their messaging to each community, evading detection, running continuous A/B tests, and persisting around the clock. Think of them not as robots repeating the same message, but as digital insects — an army of contextual, adaptive, and coordinated actors that overwhelm the environment.
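That continuous A/B testing is worth making mechanical, because the adaptive loop is precisely what pre-election swarm-simulation stress tests would need to reproduce. The toy epsilon-greedy sketch below, with invented engagement rates, shows how an agent converges on whichever message framing its audience rewards; this learning loop is what separates a swarm agent from a static bot repeating one message.

```python
# Toy epsilon-greedy loop showing, mechanically, what "continuous A/B
# testing" means: an agent keeps whichever message variant earns more
# engagement. Engagement here is simulated with invented probabilities.
import random

variants = ["framing_a", "framing_b", "framing_c"]
clicks = {v: 0 for v in variants}   # observed successes per variant
shown = {v: 0 for v in variants}    # impressions per variant

def rate(v: str) -> float:
    """Observed success rate for a variant (0 if never shown)."""
    return clicks[v] / shown[v] if shown[v] else 0.0

def choose(eps: float = 0.1) -> str:
    """Mostly exploit the best-performing variant, sometimes explore."""
    if random.random() < eps or not any(shown.values()):
        return random.choice(variants)
    return max(variants, key=rate)

def simulate_engagement(variant: str) -> bool:
    # Stand-in for real audience response; one framing "resonates" more.
    return random.random() < {"framing_a": 0.02, "framing_b": 0.05, "framing_c": 0.03}[variant]

for _ in range(10_000):
    v = choose()
    shown[v] += 1
    clicks[v] += simulate_engagement(v)

print(max(variants, key=rate))  # usually "framing_b", the highest-rate framing
```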
