Project MUSE: How AI Threatens Democracy
The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics. For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails against these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater...
Just a month after its introduction, ChatGPT, the generative artificial intelligence (AI) chatbot, hit 100 million monthly users, making it the fastest-growing application in history. For context, it took the video-streaming service Netflix, now a household name, three-and-a-half years to reach one million monthly users. But unlike Netflix, ChatGPT sparked considerable debate with its meteoric rise and its potential for good or ill.
Would students be able to use, or rather misuse, the tool for research or writing? Would it put journalists and coders out of business? Would it “hijack democracy,” as one New York Times op-ed put it, by enabling mass, phony inputs to perhaps influence democratic representation?1 And most fundamentally (and apocalyptically), could advances in artificial intelligence actually pose...

Sarah Kreps is the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and the director of the Tech Policy Institute at Cornell University. Doug Kriner is the Clinton Rossiter Professor in American Institutions in the Department of Government at Cornell University.
New technologies raise new questions and concerns of different magnitudes and urgency. For example, the fear that generative AI—artificial intelligence capable of producing new content—poses an existential threat is neither plausibly imminent nor necessarily plausible. Nick Bostrom’s paperclip scenario, in which a machine programmed to optimize paperclips eliminates everything standing in its way of achieving that goal, is not on the verge of becoming reality.3 Whether children or university...

The employment consequences of generative AI will ultimately be difficult to adjudicate, since economies are complex and it is hard to isolate the net effect of AI-instigated job losses versus industry gains. Yet the potential consequences for democracy are immediate and severe. Generative AI threatens three central pillars of democratic governance: representation, accountability, and, ultimately, the most important currency in a political system—trust.
There is great public concern about the potential use of generative artificial intelligence (AI) for political persuasion and the resulting impacts on elections and democracy [1–6]. We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes. In the context of the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election, we assigned participants randomly to have a conversation with an AI model that advocated... We observed significant treatment effects on candidate preference that are larger than typically observed from traditional video advertisements [7–9]. We also document large persuasion effects on Massachusetts residents’ support for a ballot measure legalizing psychedelics. Examining the persuasion strategies [9] used by the models indicates that they persuade with relevant facts and evidence, rather than using sophisticated psychological persuasion techniques.
Not all facts and evidence presented, however, were accurate; across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims. Together, these findings highlight the potential for AI to influence voters and the important role it might play in future elections.
In 2024, observers worldwide braced for the electoral impact of generative artificial intelligence (AI). With those contests over, attention should shift to the longer-term risks AI poses to democracy. This essay predicts three such risks. First, AI-backed efforts to replace political communication may erode representative democracy.
Second, AI may exacerbate trends toward the concentration of wealth and power, preserving only the facade of democracy. Third, economic trends in media and technology threaten to emaciate already weakened sources of trustworthy information. Avoiding these outcomes will require policymakers to reduce their reliance on the perspectives of industry professionals.

The year 2024 opened with predictions that a surge of mis- and disinformation powered by artificial intelligence (AI) would soon be the greatest threat to global stability.1 Pundits claimed that months of "AI elections"... In fact, AI-powered influence efforts did appear. Chatbots imitated politicians online, and campaigns used cartoon avatars to rehabilitate their candidates' public image.
Female candidates became victims of nonconsensual AI-generated intimate imagery. Social media were flooded by fake newspapers filled with AI-generated "pink slime." Campaigns and political operatives unleashed AI robocallers and robotexters. Synthetic videos brought national leaders back from the dead to provide endorsements. Yet by January 2025, it was hard to say whether all this had added up to a bang or a whimper. Despite warnings of undetectable, AI-generated fake imagery designed to trick voters, events vindicated those who argued that the ability to quickly and cheaply proliferate content would not by itself significantly help propagandists.2 Generative AI... It seems likely that preexisting fears about social media and disinformation were being projected—often mistakenly or without empirical evidence—onto generative AI.
And yet, generative AI does appear to be spurring a slow but steady shift in the behind-the-scenes creation and delivery of political communications. Today's electioneers are experimenting with using AI systems to augment their data analyses and message targeting.3 This could lead to more refined targeting (for instance, of swing voters) in years to come.

Artificial intelligence is increasingly emerging as a key wedge issue — not between the major political parties, but within them. On the right, MAGA populists and influencers are warning about the potential hazards of unrestricted AI development as President Donald Trump, Vice President JD Vance and their administration have pushed for minimal regulations in... On the left, progressives are fighting against potential AI-fueled job losses and a further consolidation of financial power by Big Tech as center-left Democrats weigh the unknown downsides of technological advancement with major investments...
Potential 2028 presidential contenders — from Vance and Missouri Sen. Josh Hawley on the right, to California Gov. Gavin Newsom and New York Rep. Alexandria Ocasio-Cortez on the left — are all carving out unique lanes on the issue, creating some unusual bedfellows. Ocasio-Cortez is among the potential 2028 candidates who have highlighted growing concerns in recent weeks. Last month, she raised the potential for a market downturn fueled by what some are calling an AI bubble, warning at a congressional hearing of “2008-style threats to economic stability.”

Over the past few years, Silicon Valley has steadily released a raft of computer programs with capabilities that seem ripped from the pages of science fiction.
In the new issue of the Journal of Democracy, we bring together leading thinkers and experts to explore the challenges that artificial intelligence poses, and how democratic institutions can be marshaled to help meet...

- AI and Catastrophic Risk, by Yoshua Bengio: AI with superhuman abilities could emerge within the next few years, and there is currently no guarantee that we will be able to control them. We must act now to protect democracy, human rights, and our very existence.
- How AI Threatens Democracy, by Sarah Kreps and Doug Kriner: Generative AI can flood the media, internet, and even personal correspondence with misinformation—sowing confusion for voters and government officials alike. If we fail to act, mounting mistrust will polarize our societies and tear at our institutions.
- The Danger of Runaway AI, by Tom Davidson: Science fiction may soon become reality with the advent of AI systems that can independently pursue their own objectives. Guardrails are needed now to save us from the worst outcomes.
- The Authoritarian Data Problem, by Eddie Yang and Margaret E. Roberts: AI is destined to become another stage for geopolitical conflict. In this contest, autocracies have the advantage, as they vacuum up valuable data from democracies, while democracies inevitably incorporate data tainted by repression.