Artificial Intelligence Regulation Prompts Digital Platforms to Enforce Authenticity
=================================================================================
The No Fakes Act (the Nurture Originals, Foster Art, and Keep Entertainment Safe Act), a federal bill introduced in 2023 by a bipartisan group of U.S. senators, is currently under consideration. It aims to prevent the unauthorized use of a person's voice, face, or likeness in AI-generated content, particularly deepfakes and digital replicas.
The bill sets clear legal boundaries for how someone's voice, image, and likeness can be used. It gives individuals stronger protection and places more responsibility on the platforms, studios, and agencies that host or publish AI-generated content, introducing a proactive framework for defining digital consent and preventing misuse.
However, the bill has undergone revisions that have drawn significant criticism over their impact on innovation and internet speech.
Expanded Scope and Censorship Concerns: The updated bill has expanded from focusing solely on unauthorized digital replicas to also targeting the tools used to create them. Developers, marketers, and hosts of AI tools capable of generating unauthorized likenesses could face liability, effectively giving rights-holders a veto over innovation in AI generation.
Mandatory Filters and Takedown Requirements: The bill mandates a broad notice-and-takedown system similar to the DMCA's but with fewer safeguards. Platforms would be required not only to remove infringing content but also to implement proactive filtering to prevent its re-upload, raising concerns about overbroad censorship and imposing significant technical burdens on platforms.
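To make that compliance burden concrete, here is a minimal sketch of the kind of "staydown" filter such a mandate implies. It is illustrative only and assumes the simplest possible design: matching exact SHA-256 fingerprints of previously removed files. The function names and the in-memory blocklist are hypothetical; production systems rely on perceptual hashing or audio/video fingerprinting so that re-encoded copies still match.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of fingerprints from prior takedowns.
# A real platform would persist this in a database and use
# perceptual hashes rather than exact digests, since merely
# re-encoding a video changes its raw bytes.
TAKEDOWN_FINGERPRINTS: set[str] = set()

def fingerprint(path: Path) -> str:
    """Exact-match fingerprint: SHA-256 over the file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_takedown(path: Path) -> None:
    """Called after content is removed under a takedown notice."""
    TAKEDOWN_FINGERPRINTS.add(fingerprint(path))

def blocked_on_upload(path: Path) -> bool:
    """Screen each new upload against prior takedowns."""
    return fingerprint(path) in TAKEDOWN_FINGERPRINTS
```

Even this toy version hints at the cost critics cite: exact hashing is trivially defeated by re-encoding, so a genuinely compliant platform would need fuzzy matching over every upload at scale.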
Industry Pushback and Calls for Amendments: Groups such as the Library Copyright Alliance have requested amendments to clarify exemptions, for example for educational uses, reflecting ongoing debate and friction around the bill's provisions.
Comparison with International Efforts: Meanwhile, countries like Denmark are advancing AI laws that specifically protect individuals’ control over their digital likeness and voice, requiring explicit consent before such data can be used in AI-generated content. Denmark’s approach includes clear consumer rights and obligations for platforms to remove unauthorized deepfake content under penalty, aiming to balance protection with clarity for AI developers.
If enacted as currently drafted, the No Fakes Act could impose heavy compliance requirements on tech platforms, forcing them to deploy broad content filtering and takedown systems. The entertainment industry, which often uses digital likenesses for creative and commercial purposes, might face restrictions or increased liability risks for AI-generated content using performers' likenesses without consent. These requirements could hinder innovation in AI content creation and distribution rather than fostering responsible use.
Enforcement of AI-generated content remains uneven. TikTok's moderation teams, for example, cannot always catch synthetic content before it spreads. Spotify removed AI-generated songs that mimicked the voices of major artists such as Drake and The Weeknd in 2023. Meta plans to extend its labeling of AI-generated content to video and audio across its platforms, while YouTube has introduced policies requiring creators to label videos that include altered or AI-generated content.
Talent agencies like Creative Artists Agency (CAA) are helping clients manage digital-likeness risks alongside traditional career support. TikTok has made some progress by labeling AI-generated content using embedded metadata and by joining the Coalition for Content Provenance and Authenticity (C2PA).
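As a rough illustration of what metadata-based labeling involves, the sketch below scans a JPEG for a C2PA manifest. It assumes, per the C2PA specification, that manifests are embedded in JUMBF boxes carried in JPEG APP11 (0xFFEB) segments and that the manifest store's box label contains the ASCII string "c2pa"; the heuristic simply looks for that signature. This is a sketch, not a validator: it neither parses the JUMBF structure nor verifies the manifest's signature, which is what the official C2PA SDKs are for.

```python
import struct

def has_c2pa_manifest(jpeg_path: str) -> bool:
    """Heuristic check for a C2PA manifest in a JPEG file.

    Walks the JPEG segment list and looks for an APP11 (0xFFEB)
    segment whose payload contains b"c2pa". Does not validate the
    manifest or its signature chain.
    """
    with open(jpeg_path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # missing SOI: not a JPEG
        return False
    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:
            break                           # lost sync; give up
        marker = data[pos + 1]
        if marker == 0xFF:                  # fill byte; resync
            pos += 1
            continue
        if marker == 0xDA:                  # SOS: entropy data follows
            break
        if 0xD0 <= marker <= 0xD9 or marker == 0x01:
            pos += 2                        # standalone marker, no length
            continue
        (seg_len,) = struct.unpack(">H", data[pos + 2:pos + 4])
        payload = data[pos + 4:pos + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in payload:
            return True
        pos += 2 + seg_len
    return False
```

Detecting the manifest is the easy part; a label is only trustworthy once the manifest's cryptographic signature chain has been validated, which is the job the C2PA SDKs actually perform.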
The Human Artistry Campaign focuses on ensuring AI tools are used in ways that support artists rather than replace or exploit them. The campaign promotes seven core principles, including obtaining permission before using someone's voice or image, crediting original creators, and ensuring artists are paid fairly.
In summary, the No Fakes Act is still under legislative consideration and has become more stringent and controversial, with significant concerns about its potential chilling effects on AI innovation, online speech, and tech platform operations. Meanwhile, international models like Denmark’s AI deepfake law provide a contrasting, more consent-focused framework that could influence future U.S. policymaking. At present, the act’s precise legal and industry impacts remain uncertain, pending further debate and possible amendments.
References:
- Electronic Frontier Foundation
- Denmark's AI law
- Library Copyright Alliance
- The Washington Post