AI and Cyber-Enabled Threats to Democracy Through Algorithmic Manipulation
The increasing integration of artificial intelligence (AI) into digital platforms has escalated threats to democratic integrity worldwide, primarily through algorithmic manipulation, generative AI technologies, and large language models (LLMs). This study comprehensively investigates how these advanced technologies are systematically leveraged by state and non-state actors to destabilise democracies. The paper scrutinises empirical cases from the United States, European Union, India, Türkiye, Argentina, and Taiwan, analysing the operational mechanisms and socio-political implications of AI-driven disinformation. Findings demonstrate how generative AI, deepfake technologies, and sophisticated behavioural targeting exacerbate polarisation, weaken institutional trust, and distort electoral processes. Despite the growing prevalence of such cyber-enabled interference, regulatory and institutional responses remain fragmented and inadequate. Consequently, this research culminates in proposing a robust strategic implementation framework, emphasising platform transparency, regulatory innovation, technological safeguards, and civic resilience measures.
Last week’s leak of the U.S. Department of Education’s proposed “Compact for Academic Excellence in Higher Education” drew intense reactions across academia. Critics call it government overreach threatening free expression, while supporters see a chance for reform and renewed trust between universities and policymakers. Danielle Allen, James Bryant Conant University Professor at Harvard University and director of the Democratic Knowledge Project and the Allen Lab for Democracy Renovation, weighs in.
Amid rising illiberalism, Danielle Allen urges a new agenda to renew democracy by reorienting institutions, policymaking, and civil society around the intentional sharing of power. Creating a healthy digital civic infrastructure ecosystem means not just deploying technology for the sake of efficiency, but thoughtfully designing tools built to enhance democratic engagement from connection to action. Public engagement has long been too time-consuming and costly for governments to sustain, but AI offers tools to make participation more systematic and impactful. Our new Reboot Democracy Workshop Series replaces lectures with hands-on sessions that teach the practical “how-to’s” of AI-enhanced engagement. Together with leading practitioners and partners at InnovateUS and the Allen Lab at Harvard, we’ll explore how AI can help institutions tap the collective intelligence of our communities more efficiently and effectively. The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics.
For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails against these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater... Just a month after its introduction, ChatGPT, the generative artificial intelligence (AI) chatbot, hit 100 million monthly users, making it the fastest-growing application in history. For context, it took the video-streaming service Netflix, now a household name, three and a half years to reach one million monthly users. But unlike Netflix’s, ChatGPT’s meteoric rise and its potential for good or ill sparked considerable debate.
Would students be able to use, or rather misuse, the tool for research or writing? Would it put journalists and coders out of business? Would it “hijack democracy,” as one New York Times op-ed put it, by enabling mass, phony inputs to perhaps influence democratic representation?1 And most fundamentally (and apocalyptically), could advances in artificial intelligence actually pose an existential threat? Sarah Kreps is the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and the director of the Tech Policy Institute at Cornell University. Doug Kriner is the Clinton Rossiter Professor in American Institutions in the Department of Government at Cornell University.
New technologies raise new questions and concerns of differing magnitude and urgency. For example, the fear that generative AI—artificial intelligence capable of producing new content—poses an existential threat is neither plausibly imminent nor necessarily plausible. Nick Bostrom’s paperclip scenario, in which a machine programmed to optimize paperclip production eliminates everything standing in the way of that goal, is not on the verge of becoming reality.3 Whether children or university... The employment consequences of generative AI will ultimately be difficult to adjudicate, since economies are complex and the net effect of AI-instigated job losses versus industry gains is hard to isolate. Yet the potential consequences for democracy are immediate and severe. Generative AI threatens three central pillars of democratic governance: representation, accountability, and, ultimately, the most important currency in a political system—trust.
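One of the guardrails mentioned above, neural networks (or simpler learned classifiers) used to identify generated content, can be made concrete with a small sketch. The snippet below is a minimal, hypothetical illustration rather than a production detector: it trains a character n-gram TF-IDF plus logistic-regression classifier on a handful of invented example sentences labeled as human-written or model-generated; real systems rely on transformer encoders and large labeled corpora.

```python
# Minimal sketch of a machine-generated-text detector: TF-IDF features
# plus logistic regression. The training texts below are invented toy
# data; production detectors use far larger labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = model-generated, 0 = human-written (hypothetical).
texts = [
    "As an AI language model, I can provide a comprehensive overview.",
    "It is important to note that there are several key considerations.",
    "In conclusion, the aforementioned factors collectively demonstrate.",
    "honestly the town hall last night was a mess, nobody had answers",
    "My neighbor has knocked on doors for this campaign since March.",
    "Traffic on Route 9 made me miss the school board meeting again.",
]
labels = [1, 1, 1, 0, 0, 0]

# Character n-grams cheaply capture stylistic regularities of model output.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

for sample in ["It is important to note that voters should consider...",
               "ugh my polling place moved AGAIN, third time this year"]:
    p = detector.predict_proba([sample])[0][1]
    print(f"{p:.2f} probability generated: {sample!r}")
```

In practice such classifiers are brittle against paraphrasing and new model releases, which is why the essay pairs them with platform self-regulation rather than treating detection alone as sufficient.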
The rise of generative artificial intelligence (AI) is challenging governance paradigms, raising concerns about public trust, disinformation, and democratic resilience. While these technologies offer unprecedented efficiency and innovation, they also risk amplifying bias, eroding transparency, and centralizing power within proprietary platforms. This paper reframes algorithmic sovereignty as the democratic capacity to regulate and audit AI systems, ensuring they align with ethical, civic, and institutional norms. Using a mixed-methods approach—content analysis, expert interviews, and comparative policy review—we explore how regulatory frameworks in the EU, China, the U.S., and other regions address these challenges. By clarifying the scope of algorithmic governance and integrating counterarguments around disinformation and AI misuse, we develop a multilayered framework for human-centered AI oversight. We also examine geopolitical tensions shaping global digital sovereignty and propose actionable strategies to strengthen trust and civic participation.
Figures highlight regional governance effectiveness, trust dynamics, and regulatory orientations. We conclude that algorithmic sovereignty must evolve as an interdisciplinary and participatory governance goal that reinforces democracy rather than undermining it.
This year, 49% of the global population in 64 countries will participate in elections. While conversations have circulated over how global policy will change based on the outcome of these elections, there is a parallel discussion taking place: how can citizens trust the outcome of elections in an... The growing threat of AI-enabled offensive cyber and information-manipulation campaigns can destabilize electoral processes in ways nations have not previously experienced, by targeting information platforms and election infrastructure. That is why it’s crucial to address some of the ways in which AI models can be maliciously repurposed to compromise electoral integrity and to propose preventative measures that government and industry can take.
While targeted information-poisoning campaigns—which inject false material or morph existing information into misleading content—and cyberattacks have been prevalent in the context of elections for many years, the poisoning of AI models serves as... As generative AI models are increasingly leveraged by bad actors for offensive cyber-operations and election interference, there is little evidence to suggest that these models are enabling the creation of novel Techniques, Tools... Instead, GenAI models are amplifying bad actors’ speed and scale in cyber-operations and, in some cases, increasing the quality of their attack vectors, especially social-engineering attacks. Below are some state-of-the-art capabilities that AI models are currently enhancing for bad actors in the threat landscape. AI models are making it easier for bad actors to carry out harmful information campaigns and cyberattacks on election systems, an issue highlighted at the International Conference on Cybersecurity in January.
Cyber-threat groups, regardless of skill level, can leverage AI to generate malware and gain strategic advantage throughout different stages of an offensive cyber-operation. For example, a bad actor in Country X might aim to target a U.S. state’s voter-registration database or the IT infrastructure used to manage elections. This adversary could, in theory, repurpose an LLM by training it on old malware, or by creating several copies of malware that share the same functionality but have different source code. Hackers on the dark web have already discovered ways to leverage ChatGPT to generate potential attack vectors. OpenAI recently announced that it discovered an account “knowingly violating [Application Programming Interfaces] (API) usage policies which disallows political campaigning, or impersonating individuals without consent”.
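On the defensive side, the kind of API-policy enforcement OpenAI describes can be illustrated with a deliberately simplified sketch. Everything below (the rule names, the regex patterns, the screen_prompt helper) is hypothetical and invented for illustration; real providers combine learned classifiers, account-level behavioral analysis, and human review rather than keyword rules alone.

```python
# Hypothetical sketch of a provider-side pre-filter that flags prompts
# resembling the policy violations described above (political campaigning,
# impersonating individuals). Patterns are invented toy examples.
import re
from dataclasses import dataclass

@dataclass
class PolicyHit:
    rule: str
    pattern: str

RULES = {
    "impersonation": re.compile(
        r"\b(pretend to be|speak as|in the voice of)\b"
        r".*\b(senator|president|candidate)\b",
        re.IGNORECASE),
    "campaigning": re.compile(
        r"\b(write|generate|draft)\b.*\bcampaign (robocall|ad|script)\b",
        re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[PolicyHit]:
    """Return any policy rules the prompt appears to trigger."""
    return [PolicyHit(name, rx.pattern)
            for name, rx in RULES.items() if rx.search(prompt)]

if __name__ == "__main__":
    demo = "Generate a campaign robocall script in the voice of the senator."
    for hit in screen_prompt(demo):
        print(f"flagged: {hit.rule}")  # flags both rules for this prompt
```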
Such AI-enabled exploits are capable of disrupting voter databases and electoral processes more broadly. Additionally, the tools required to create information-poisoning campaigns are becoming more accessible. AI-generated deepfakes have already undermined electoral processes by deterring citizens from voting or by twisting political narratives to influence votes. For instance, during the January 2024 U.S. primary elections, an AI-powered robocall impersonating President Biden targeted New Hampshire voters, urging them to stay home rather than go to the polls. A similar instance of AI-enabled election interference was detected in Slovakia, where a fake AI-generated audio interview, purporting to show a top candidate claiming to have rigged the election, went viral on social media.
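Detecting synthetic audio like the robocall described above can likewise be sketched, with heavy caveats: the code below substitutes generated numpy signals for real recordings and two crude spectral statistics for real features, purely to show the shape of a train-and-classify pipeline. Every signal property and feature choice here is a toy assumption, not a deployed method; real detectors are trained on large corpora of genuine and cloned speech.

```python
# Toy sketch of synthetic-voice detection for robocalls. Numpy signals
# stand in for audio; two spectral statistics stand in for MFCC or
# CNN features. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
SR = 16000  # sample rate (Hz)

def fake_clip(synthetic: bool) -> np.ndarray:
    """Stand-in 1-second 'audio': the synthetic class gets an overly
    regular harmonic structure; the real class gets noisier variation."""
    t = np.linspace(0, 1, SR, endpoint=False)
    f0 = 120 + rng.uniform(-5, 5)
    clip = np.sin(2 * np.pi * f0 * t)
    clip += (0.05 if synthetic else 0.4) * rng.normal(size=SR)
    return clip

def features(clip: np.ndarray) -> np.ndarray:
    """Crude spectral summary: centroid and flatness of the magnitude
    spectrum (a real system would use richer features)."""
    spectrum = np.abs(np.fft.rfft(clip))
    spectrum /= spectrum.sum()
    freqs = np.arange(spectrum.size)
    centroid = (freqs * spectrum).sum()
    flatness = np.exp(np.log(spectrum + 1e-12).mean()) / spectrum.mean()
    return np.array([centroid, flatness])

# 100 toy clips, alternating synthetic (1) and real (0).
X = [features(fake_clip(s)) for s in [True, False] * 50]
y = [1, 0] * 50
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print("P(synthetic) for a new clip:",
      clf.predict_proba([features(fake_clip(True))])[0][1].round(2))
```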