U.S. Legislative Trends in AI-Generated Content: 2024 and Beyond
Standing in front of the U.S. flag and dressed as Uncle Sam, Taylor Swift proudly proclaims that you should vote for Joe Biden for president. Then, in a nearly identical image circulated by former President Trump himself, she wants you to vote for Donald Trump. Both images, and the purported sentiments, are fabricated: the output of a generative AI tool for creating and manipulating images. In fact, shortly after Donald Trump circulated his version of the image, and in response to the fear of spreading misinformation, the real Taylor Swift posted a real endorsement to her Instagram account, for... Generative AI is a powerful tool, both in elections and more generally in people’s personal, professional, and social lives.
In response, policymakers across the U.S. are exploring ways to mitigate risks associated with AI-generated content, also known as “synthetic” content. As generative AI makes it easier to create and distribute synthetic content that is indistinguishable from authentic or human-generated content, many are concerned about its potential growing use in political disinformation, scams, and abuse. Legislative proposals to address these risks often focus on disclosing the use of AI, increasing transparency around generative AI systems and content, and placing limitations on certain synthetic content. While these approaches may address some challenges with synthetic content, they also face a number of limitations and tradeoffs that policymakers should address going forward. Generally speaking, policymakers have sought to address the potential risks of synthetic content by promoting techniques for authenticating content, establishing requirements for disclosing the use of AI, and/or setting limitations on the creation and...
Authentication techniques, which involve verifying the source, history, and/or modifications to a piece of content, are intended to help people determine whether they’re interacting with an AI agent or AI-generated content, and to provide... Authentication often includes requiring the option to embed, attach, or track certain information in relation to content, giving others more insight into where the content came from. A number of bills require or encourage the use of techniques like watermarking, provenance tracking, and metadata recording. Most notably, California AB 3211 regarding “Digital Content Provenance Standards,” which was proposed in 2024 but did not pass, sought to require generative AI providers to embed provenance information in synthetic content and provide... At the federal level, a bipartisan bill, the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act, has been introduced that would direct the National Institute of Standards and Technology (NIST)... If passed, the COPIED Act would build on NIST’s existing efforts to provide guidelines on synthetic content transparency techniques, as required by the White House Executive Order (EO) on the Safe, Secure, and Trustworthy...
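The provenance-tracking idea described above can be illustrated with a minimal sketch. This is not AB 3211’s or the COPIED Act’s actual scheme, nor any real standard’s manifest format; the `make_manifest` and `verify_manifest` helpers and all field names are hypothetical, showing only the core pattern of binding a tamper-evident record to a piece of content:

```python
import hashlib

# Hypothetical sketch of provenance tracking: the record layout, field
# names, and helper functions are illustrative assumptions, not any
# bill's or standard's actual schema.

def make_manifest(content: bytes, generator: str, actions: list[str]) -> dict:
    """Bind a provenance record to content via a cryptographic hash."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,   # e.g., the AI tool that produced the content
        "actions": actions,       # edit history: "created", "cropped", ...
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content is unchanged since the manifest was made."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

image = b"\x89PNG...synthetic image bytes..."
manifest = make_manifest(image, generator="example-image-model", actions=["created"])

assert verify_manifest(image, manifest)             # untouched content verifies
assert not verify_manifest(image + b"!", manifest)  # any edit is detectable
```

Real provenance systems go further: they cryptographically sign the manifest so the record itself cannot be forged, and embed it in the file’s metadata rather than carrying it alongside the content.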
Relatedly, policymakers are also exploring ways to improve transparency regarding synthetic content through labeling, disclosures, and detection. Some legislation, such as the recently enacted Colorado AI Act and the pending federal AI Labeling Act of 2023, requires individuals or entities to label AI-generated content (labeling) or disclose the use of AI in... Other legislation focuses on synthetic content detection tools, which analyze content to determine whether it’s synthetic, and to provide further insight into the content. Detection tools can include those that evaluate the likelihood a given piece of content is AI-generated, as well as tools that can read watermarks, metadata, or provenance data to inform people about the content’s background. For example, the recently enacted California AI Transparency Act requires, among other things, that generative AI system providers make an AI detection tool available to their users. Separately, the Federal Communications Commission (FCC) is exploring creating rules around the use of technologies that analyze the content of private phone conversations to alert users that the voice on the other end of...
A first-in-the-nation Colorado law aims to protect consumers from risks posed by AI systems used in education, employment, financial or lending services, government, health care, housing and more. Lawmakers in 2024 proposed a variety of artificial intelligence-related bills in most U.S. states, Puerto Rico, the Virgin Islands and Washington, D.C. NCSL tracked over 450 bills in 23 different AI-related categories and found three legislative trends rising to the top: consumer protection, deepfakes and government use of AI. A couple of states passed first-in-the-nation AI legislation focused on consumer protections.
At least half the states addressed deepfakes through new laws targeting the technology’s use in elections and sexually explicit materials. Finally, most states considered or enacted bills related to government use of AI tools. Lawmakers considered over 100 bills in the two categories of private sector and responsible use. Three states passed the first U.S. laws focused on safety and protections for consumers when using AI products. Colorado passed the nation’s first comprehensive AI law (SB 205).
The new law applies to AI systems that can impact consumers in consequential ways in education, employment, financial or lending services, government, health care, housing and more. The law says AI developers and deployers must avoid algorithmic discrimination, defined as any use of AI that results in unlawful differential treatment or that disfavors a group of individuals protected under current state... Entities must also meet certain requirements that focus on consumer protections, risk management and transparency. Another new law (H 1468) created an AI task force to recommend changes to SB 205 prior to its implementation in 2026.

Published online by Cambridge University Press: 18 March 2024

The rapid emergence of artificial intelligence (AI) technology and its application by businesses has created a potential need for governmental regulation.
While the federal government of the United States has largely sidestepped the issue of crafting law dictating limitations and expectations regarding the use of AI technology, US state legislatures have begun to take the... Nonetheless, we know very little about how state legislatures have approached the design, pursuit, and adoption of AI policy and whether traditional political fault lines have manifested themselves in the AI issue area. Here, we gather data on the state-level adoption of AI policy, as well as roll call voting on AI bills (classified on the basis of consumer protection versus economic development), by state legislatures and... We find that rising unemployment and inflation are negatively associated with a state’s AI policymaking. With respect to individual legislator support, we find that liberal lawmakers and Democrats are more likely to support bills establishing consumer protection requirements on AI usage. The results suggest that economic concerns loom large with AI and that traditional political fault lines may be establishing themselves in this area.
The presence and increasing utilization of artificial intelligence (AI) technology has the potential to transform the global economy, systems of international security, and even person-to-person interaction. However, in doing so, it undoubtedly creates challenges for governance. Concerns about how to regulate the use of AI technology by businesses, as well as how to manage the implications of the growth of AI on employment, will no doubt occupy the attention of... As one of the world’s largest economies and the home to a key wellspring of AI innovation in Silicon Valley, the United States will arguably be a testing ground for emerging ideas about... Given the potential for AI to influence international trade and security, much of the attention on AI policymaking in the United States, particularly in the future, will focus on the federal government. Particularly given its clear influence on interstate commerce, it is uncontroversial to assume that the federal government will bring its vast resources to bear on developing and adopting a comprehensive AI regulatory strategy in...
At present, however, no such plan—or even dominant set of ideas—exists. As one major law firm, Alston and Bird, puts it, “there is no comprehensive federal legislation on AI in the United States” (Footnote 1); this point is reiterated in the New York Times: “... rules.” (Footnote 2) Despite federal inaction, US state governments have stepped into this void and developed their own AI policy agendas over the better part of the last five years. State-level AI policymaking has come from regionally and ideologically heterogeneous source states (both California and Mississippi, for example, have pursued AI policymaking attempts), and may form the foundation upon which future federal-level AI regulatory policy... More broadly, as recent research has underscored, there is new evidence to suggest that the US states indeed function today as “laboratories of democracy,” in generating policy agendas for pursuit at the federal level,...
U.S. policymakers have continued to express interest in legislation to regulate artificial intelligence (“AI”), particularly at the state level. Although comprehensive AI bills and frameworks in Congress have received substantial attention, state legislatures also have been moving forward with their own efforts to regulate AI. This blog post summarizes key themes in state AI bills introduced in the past year. Now that new state legislative sessions have commenced, we expect to see even more activity in the months ahead. We will continue to monitor these and related developments across our blogs.
Yaron Dori has over 25 years of experience advising technology, telecommunications, media, life sciences, and other types of companies on their most pressing business challenges. He is a former chair of the firm’s technology, communications and media practices and currently serves on the firm’s eight-person Management Committee. Yaron’s practice advises clients on strategic planning, policy development, transactions, investigations and enforcement, and regulatory compliance. In a Senate Committee on Commerce, Science, and Transportation executive session on July 31, 2024, a group of senators considered and passed on a bipartisan basis a slate of ten legislative measures on key...
The agenda included key bipartisan legislation addressing a range of concerns and priorities on AI such as regulation, standards and accountability, innovation and research and development promotion, public education, and protecting people from becoming... Advancing US leadership in the context of competition with foreign adversaries such as China was a recurring theme for many of the senators. The markup followed the bipartisan AI Insight Forums in fall 2023 and the subsequent roadmap for AI policy in the US Senate, which provided recommendations on areas to legislate on AI. The markup came just days before the Senate was set to adjourn for its August recess and did not include the American Privacy Rights Act, which was the subject of a July 11, 2024... Tech industry leaders and AI safety advocates alike have been eagerly awaiting action on many of the initiatives that the committee considered. The committee pulled some of the bills from a scheduled markup in May 2024, including the Future of AI Innovation Act and the CREATE AI Act.
Given the narrow legislative window with the November elections looming, there will be very limited time to pass these bills on the Senate floor, but some of the bills’ provisions adopted by the committee... The future of American AI governance is at a crossroads as states and the federal government wrestle over who should regulate AI and the appropriate scope of such regulation. Currently, most AI regulation in the United States occurs in the states, with thousands of AI laws introduced in state legislatures and hundreds enacted this year alone. This has created a patchwork of laws and regulations that many see as harmful to the country’s national competitiveness and ability to remain on the technological frontier. After a failed attempt at a federal moratorium on state AI lawmaking earlier this year, policymakers in Congress have sought to mitigate or fully preempt the state law patchwork and give themselves more time—and... Some members of Congress have proposed regulatory flexibility for companies caught in the state patchwork.
The White House outlined federal priorities in its AI Action Plan, and a draft executive order recently proposed potential litigation against state AI laws. The Bipartisan Policy Center is dedicated to contributing resources and scaling ongoing efforts to educate policymakers and the public on AI, including its foundations, emerging use cases, and policy landscape. Through the AI 101 initiative launched in 2024, BPC has briefed and engaged hundreds of Hill staffers to build stronger AI literacy foundations. Earlier this year, BPC submitted recommendations to the White House and examined the administration’s AI Action Plan release in its “From Vision to Action” series. Below, we offer eight lessons for the future of AI governance to inform continued efforts by state and federal policymakers.

Policy Considerations Shaping the Future of AI Governance
Although the AI moratorium did not become law, it demonstrated that preempting state authority without a substantive federal replacement is a hard sell: states will continue to legislate, and businesses and the public will... As Congress considers the future of AI governance, the core challenge will be designing bipartisan approaches that provide national assurances on privacy, accountability, and transparency while preserving space for states to test, refine, and... BPC will continue to monitor this space and is committed to advancing policies that support a coherent, adaptive, and future-ready approach to AI.
In 2024, artificial intelligence (AI) took center stage in legislative discussions across the United States, reflecting the rapid growth of this transformative technology. With every state introducing some form of AI-related legislation, the past year demonstrated the urgency among lawmakers to establish frameworks addressing AI’s societal and economic impacts. From ensuring transparency in AI-generated content to regulating advanced applications like deepfakes and digital replicas, state governments sought to tackle the opportunities and challenges posed by this technology.