The Real Dangers of Generative AI
As perhaps the most consequential technology of our time, Generative Foundation Models (GFMs) present unprecedented challenges for democratic institutions. By allowing deception and de-contextualized information sharing at a previously unimaginable scale and pace, GFMs could undermine the foundations of democracy. At the same time, the investment scale required to develop the models and the race dynamics around that development threaten to enable concentrations of democratically unaccountable power (both public and private). This essay examines the twin threats of collapse and singularity occasioned by the rise of GFMs.
Danielle Allen is James Bryant Conant University Professor at Harvard University and director of the Allen Lab for Democracy Renovation at Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation. E. Glen Weyl is research lead at the Plural Technology Collaboratory and Microsoft Research Special Projects, and chair of the Plurality Institute.
Science fiction may soon become reality with the advent of AI systems that can independently pursue their own objectives; guardrails are needed now to save us from the worst outcomes. Along with the benefits, generative AI raises concerns about misuse and errors, and in several areas legal frameworks have not caught up with technological developments. To mitigate these risks and ensure the technology benefits society, the OECD works with governments to enable policies for the ethical and responsible use of generative AI.
When large language models, or textual generative AI, create incorrect yet convincing outputs, the result is called a hallucination. Hallucinations are unintentional and can occur when a correct answer is not present in the training data. Beyond perpetuating inaccurate information, they can interfere with a model’s ability to learn new skills and can even lead to a loss of existing skills. One practical response is to test a model’s answers for self-consistency, as in the sketch below.
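The following is a minimal sketch of that idea, assuming nothing beyond the Python standard library: sample the model several times on the same prompt and measure how much the answers agree. Low agreement is a common signal that the model is guessing rather than recalling. The `generate_answer` stub is hypothetical and merely stands in for a real text-generation call with nonzero temperature; it is not an API from this article.

```python
# Minimal self-consistency sketch for flagging possible hallucinations.
# Assumption: `generate_answer` is a hypothetical stand-in for any real
# text-generation call (an LLM sampled with temperature > 0).
from collections import Counter

def generate_answer(prompt: str, seed: int) -> str:
    # Hypothetical stub: returns canned answers so the example runs offline.
    canned = ["Paris", "Paris", "Lyon"]  # toy model outputs for demonstration
    return canned[seed % len(canned)]

def consistency_score(prompt: str, n_samples: int = 5) -> float:
    """Sample the model n_samples times and return the share of samples
    agreeing with the most common answer. Low scores suggest guessing."""
    answers = [generate_answer(prompt, seed) for seed in range(n_samples)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / n_samples

if __name__ == "__main__":
    score = consistency_score("What is the capital of France?")
    print(f"agreement = {score:.0%}")  # e.g. 80%: mostly consistent
```

A check like this does not prove an answer is correct; it only flags outputs the model itself is unstable about, which makes it one inexpensive screen among many rather than a verification method.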
While generative AI brings efficiencies to content creation, it also poses risks that must be considered carefully. One major concern is the potential for generating fake or misleading content: generative AI can be used to create realistic-looking but entirely fabricated images or videos, which can be used to spread disinformation or deceive people. This complicates the detection and verification of digital media. Generative AI also raises intellectual property rights issues, particularly whether commercial entities can legally train ML models on copyrighted material, a question contested in both Europe and the US. Several lawsuits have been filed in the US against companies that allegedly trained their models on copyrighted data without authorisation to make and later store copies of the resulting images. These decisions will set legal precedents and shape the generative AI industry, from start-ups to multinational tech companies.