AI’s Real Dangers for Democracy (muse.jhu.edu)


In 2024, observers worldwide braced for the electoral impact of generative artificial intelligence (AI). With those contests over, attention should shift to the longer-term risks AI poses to democracy. This essay predicts three such risks. First, AI-backed efforts to replace political communication may erode representative democracy. Second, AI may exacerbate trends toward the concentration of wealth and power, preserving only the façade of democracy. Third, economic trends in media and technology threaten to emaciate already weakened sources of trustworthy information.

Avoiding these outcomes will require policymakers to reduce their reliance on the perspectives of industry professionals.

Dean Jackson is a nonresident fellow at the Atlantic Council’s Digital Forensic Research Lab and the principal of Public Circle, LLC, a research consultancy focused on democracy, technology, and media. Samuel C. Woolley is associate professor of communication and holds the William S. Dietrich II Endowed Chair in Disinformation Studies at the University of Pittsburgh. He is the author of The Reality Game: How the Next Wave of Technology Will Break the Truth (2020).



As perhaps the most consequential technology of our time, Generative Foundation Models (GFMs) present unprecedented challenges for democratic institutions. By allowing deception and de-contextualized information sharing at a previously unimaginable scale and pace, GFMs could undermine the foundations of democracy. At the same time, the investment scale required to develop the models and the race dynamics around that development threaten to enable concentrations of democratically unaccountable power (both public and private). This essay examines the twin threats of collapse and singularity occasioned by the rise of GFMs.

Danielle Allen is James Bryant Conant University Professor at Harvard University and director of the Allen Lab for Democracy Renovation at Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation. E. Glen Weyl is research lead at Plural Technology Collaboratory and Microsoft Research Special Projects and chair of the Plurality Institute.

Science fiction may soon become reality with the advent of AI systems that can independently pursue their own objectives. Guardrails are needed now to save us from the worst outcomes.


The explosive rise of generative AI is already transforming journalism, finance, and medicine, but it could also have a disruptive influence on politics. For example, asking a chatbot how to navigate a complicated bureaucracy or to help draft a letter to an elected official could bolster civic engagement. However, that same technology—with its potential to produce disinformation and misinformation at scale—threatens to interfere with democratic representation, undermine democratic accountability, and corrode social and political trust. This essay analyzes the scope of the threat in each of these spheres and discusses potential guardrails for these misuses, including neural networks used to identify generated content, self-regulation by generative-AI platforms, and greater...

Just a month after its introduction, ChatGPT, the generative artificial intelligence (AI) chatbot, hit 100 million monthly users, making it the fastest-growing application in history. For context, it took the video-streaming service Netflix, now a household name, three-and-a-half years to reach one million monthly users. But unlike with Netflix, the meteoric rise of ChatGPT and its potential for good or ill sparked considerable debate. Would students be able to use, or rather misuse, the tool for research or writing? Would it put journalists and coders out of business? Would it “hijack democracy,” as one New York Times op-ed put it, by enabling mass, phony inputs to perhaps influence democratic representation?1 And most fundamentally (and apocalyptically), could advances in artificial intelligence actually pose...

Sarah Kreps is the John L. Wetherill Professor in the Department of Government, adjunct professor of law, and the director of the Tech Policy Institute at Cornell University. Doug Kriner is the Clinton Rossiter Professor in American Institutions in the Department of Government at Cornell University.

New technologies raise new questions and concerns of different magnitudes and urgency. For example, the fear that generative AI—artificial intelligence capable of producing new content—poses an existential threat is neither plausibly imminent nor necessarily plausible. Nick Bostrom’s paperclip scenario, in which a machine programmed to maximize paperclip production eliminates everything standing in the way of that goal, is not on the verge of becoming reality.3 Whether children or university...

The employment consequences of generative AI will ultimately be difficult to adjudicate, since economies are complex and it is hard to isolate the net effect of AI-instigated job losses versus industry gains. Yet the potential consequences for democracy are immediate and severe. Generative AI threatens three central pillars of democratic governance: representation, accountability, and, ultimately, the most important currency in a political system—trust.

As artificial intelligence continues to advance at breakneck speed and world powers vie against each other in the AI arms race, democracies are searching for ways to control a technology that is transforming our... Read the following Journal of Democracy essays from leading AI experts on the dangers that lie ahead and how we might stave off a crisis.

The Real Dangers of Generative AI
Advanced AI faces twin perils: the collapse of democratic control over key state functions or the concentration of political and economic power in the hands of the few. Avoiding these risks will require new ways of governing. Danielle Allen and E. Glen Weyl

AI and Catastrophic Risk
AI systems with superhuman abilities could emerge within the next few years, and there is currently no guarantee that we will be able to control them. We must act now to protect democracy, human rights, and our very existence. Yoshua Bengio

How AI Threatens Democracy
Generative AI can flood the media, internet, and even personal correspondence, sowing confusion for voters and government officials alike. If we fail to act, mounting mistrust will polarize our societies and tear at our institutions. Sarah Kreps and Doug Kriner

Since OpenAI’s release of the very large language models ChatGPT and GPT-4, the potential dangers of AI have garnered widespread public attention. In this essay, the author reviews the threats to democracy posed by the possibility of “rogue AIs,” dangerous and powerful AIs that would execute harmful goals, irrespective of whether the outcomes are intended by... To mitigate against the risk that rogue AIs present to democracy and geopolitical stability, the author argues that research into safe and defensive AIs should be conducted by a multilateral, international network of research...

How should we think about the advent of formidable and even superhuman artificial intelligence (AI) systems? Should we embrace them for their potential to enhance and improve our lives, or fear them for their potential to disempower and possibly even drive humanity to extinction? In 2023, these once-marginal questions captured the attention of media, governments, and everyday citizens after OpenAI released ChatGPT and then GPT-4, stirring a whirlwind of controversy and leading the Future of Life Institute to... Two months later, Geoffrey Hinton and I, who together with Yann LeCun won the 2018 Turing Award for our seminal contributions to deep learning, joined CEOs of AI labs, top scientists, and many...

Yoshua Bengio is professor of computer science at the Université de Montréal, founder and scientific director of Mila–Quebec Artificial Intelligence Institute, and senior fellow and codirector of the Learning in Machines and Brains program... He won the 2018 A.M. Turing Award (with Geoffrey Hinton and Yann LeCun).

This disagreement reflects a spectrum of views among AI researchers about the potential dangers of advanced AI. What should we make of the diverging opinions? At a minimum, they signal great uncertainty. Given the high stakes, this is reason enough for ramping up research to better understand the possible risks. And while experts and stakeholders often talk about these risks in terms of trajectories, probabilities, and the potential impact on society, we must also consider some of the underlying motivations at play, such as...
