Regulating Deepfakes: Legal Approaches to Combating Synthetic Media

Bonisiwe Shabane

The 2020s mark the emergence of deepfakes in general media discourse. The rise of deepfake technology reflects a simple yet concerning fact: it is now possible to create convincing imitations of anyone using AI tools that can generate audio in any person's voice. The proliferation of deepfake content poses serious challenges to the functioning of democracies, especially because such material can deprive the public of the accurate information it needs to make informed decisions in elections. Deepfakes are synthetically generated content created using artificial intelligence (AI), which combines several technologies to produce convincing fabricated media.

This technology relies on machine-learning algorithms that create hyper-realistic video from a person's face, voice or likeness. The progression of deepfake technology holds vast potential, both benign and malicious. On the benign side, in 2019 the NGO Malaria No More used deepfake technology to sync David Beckham's lip movements with voices in nine languages, amplifying its anti-malaria message. Deepfakes have a dark side too: they have been used to spread false information, manipulate public opinion, and damage reputations, and they can harm mental health and have significant social impacts.
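The machine-learning approach behind many face-swap deepfakes is an autoencoder with one shared encoder and a separate decoder per identity; swapping means decoding one person's frame with another person's decoder. The following is a deliberately tiny illustrative sketch, with toy dimensions and untrained random weights standing in for learned ones; none of it comes from the article itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: flattened 8x8 grayscale "faces" and a small latent space.
FACE_DIM, LATENT_DIM = 64, 8

# A shared encoder compresses any face into a latent code capturing pose
# and expression; each identity gets its own decoder that renders faces
# in that identity's appearance. (Random weights here; real systems train
# these on thousands of frames of each person.)
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.1
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1  # decoder for person A
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1  # decoder for person B

def encode(face):
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    return W_dec @ latent

# The "swap": encode a frame of person A, then decode it with B's decoder,
# producing B's appearance driven by A's expression and pose.
frame_of_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(frame_of_a), W_dec_b)
```

Because the encoder is shared, whatever it learns about pose and expression transfers across identities, which is what makes the swap work once the decoders are trained.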

The ease of creating deepfakes makes it difficult to verify media authenticity, eroding trust in journalism and creating confusion about what is true and what is not. Their potential to cause harm has made it necessary to consider legal and regulatory approaches. India presently lacks a specific law dealing with deepfakes, but existing legal provisions offer some safeguards against the resulting harms. In the United States, by contrast, deepfake legislation is advancing swiftly to combat the rising risks associated with synthetic media, addressing critical areas such as cybersecurity, privacy, election integrity, and intellectual property.

Federal and state lawmakers are enacting and refining laws to curb the misuse of deepfake technology, focusing on issues like fraud, defamation, election manipulation, and non-consensual explicit content. These evolving regulations aim to safeguard individuals, institutions, and democratic processes against the challenges posed by this rapidly advancing technology. Given the pace of that advancement, it is crucial to establish legislation ensuring that these powerful tools are used responsibly: clear guidelines are necessary to maximize their benefits while minimizing risks, protecting against malicious use, and safeguarding societal well-being. Which federal laws address deepfakes or artificial intelligence (AI)? One example is the National Defense Authorization Act (NDAA).

Deepfake technology presents a profound challenge to data protection, privacy and regulatory frameworks worldwide. By exploiting biometric data without consent, deepfakes pose severe threats to privacy regimes such as the European Union's (EU) General Data Protection Regulation (GDPR) and India's Digital Personal Data Protection Act 2023 (DPDPA). The ability to manipulate digital content using artificial intelligence (AI) raises concerns over identity theft, misinformation and biometric data security. This paper examines regulatory gaps, emerging AI-driven detection strategies and the need for privacy-preserving technological solutions. Through a comparative legal analysis, we identify gaps in existing regulations and propose a privacy-centric framework for mitigating deepfake risks. We further examine AI-driven solutions for authentication and the policy interventions necessary for global regulatory alignment.

Our findings suggest a multi-tiered regulatory response integrating technology, governance and privacy laws to counter deepfake threats while protecting individual rights. This article is also included in The Business & Management Collection, which can be accessed at https://hstalks.com/business/. The full article is available to subscribers to the journal.

Sanya Darakhshan Kishwar is an Assistant Professor at Jindal Global Law School. She is pursuing her Ph.D. at National Law University, Delhi, and holds a Master's in Law (LL.M.) from the University of Leeds, U.K. and The Pennsylvania State University, USA. Her areas of research include feminist theories and international human rights law. Anjali Tripathi is a fourth-year law student at Jindal Global Law School whose work as lead researcher on multiple academic projects reflects strong scholarship in IP and technology law alongside an interest in... She has also pursued a short-term programme on IP law at the University of Oxford. Her publications include a notable mention in the prestigious Shamnad Basheer Essay Competition. Sadqua Khatoon is the recipient of the Sir Syed Global Scholar Award 2025.

She served as Project Manager for State of Youth (NGO) under the KidsRights Foundation, Netherlands. Sadqua also held the role of Joint Secretary of the International Law and Diplomacy Society at the Faculty of Law, Aligarh Muslim University.

From manipulated political videos to AI-generated intimate images, deepfakes—realistic media produced by artificial intelligence—are reshaping the legal landscape. As generative AI tools grow in accessibility and sophistication, states are racing to enact laws that require disclosure of synthetic content. These “deepfake disclosure laws” represent a distinct regulatory strategy: rather than banning the technology outright, they aim to mitigate harm through transparency. At the time of this writing, over a dozen states have enacted disclosure statutes targeting synthetic media, particularly in political speech and commercial impersonation.

This article surveys the current state-level frameworks, explores their constitutional underpinnings, and considers the evolving role of disclosure in synthetic media regulation. Despite several proposed federal measures—such as the DEEPFAKES Accountability Act and the REAL Political Advertisements Act—Congress has yet to enact comprehensive AI disclosure legislation. These bills typically require watermarks, disclaimers, or metadata tagging for synthetic media, especially in election contexts. However, none have cleared both chambers. Notably, the TAKE IT DOWN Act, enacted in 2025, addresses the spread of nonconsensual AI-generated intimate images. While it mandates takedown mechanisms on major platforms, it stops short of requiring content labeling at the source.

This legislative void has left states with considerable latitude to experiment with their own disclosure schemes.
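To make the disclosure idea concrete: one way source-level labeling could work is a sidecar record that binds an AI-generation statement to a media file through a cryptographic hash, so any later edit to the file invalidates the label. This is a hypothetical sketch of the general technique, not a mechanism any of the statutes above prescribes; the field and generator names are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone

def disclosure_record(media_bytes: bytes, generator: str) -> dict:
    """Build a sidecar disclosure label that binds a synthetic-media
    file to a statement of its AI origin via a content hash."""
    return {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

def matches(media_bytes: bytes, record: dict) -> bool:
    # Any edit to the media changes its hash, so a stale or
    # copied-over label is detectable.
    return record["sha256"] == hashlib.sha256(media_bytes).hexdigest()

video = b"...synthetic video bytes..."
label = disclosure_record(video, generator="example-model-v1")
print(json.dumps(label, indent=2))
```

A production scheme would additionally sign the record so the label itself cannot be forged or stripped undetectably, which is the approach taken by C2PA-style content credentials.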
