Artificial Intelligence Regulation: Driving Digital Platforms Towards Authenticity Enforcement
In the digital age, the line between reality and artificial intelligence (AI) is becoming increasingly blurred. This is particularly true when it comes to content creation, where AI-generated images, videos, and audio are becoming more prevalent. To address this issue, the U.S. has introduced the No Fakes Act, a legislative bill aimed at regulating unauthorized AI-generated content that impersonates individuals' voices, faces, and likenesses.
The No Fakes Act, introduced in 2023 by a bipartisan group of U.S. senators, targets tools that generate unauthorized AI images or replicas of individuals. Anyone who creates, markets, or hosts such tools could be held liable if the tools are primarily designed to produce unauthorized images. The Act also mandates a notice and takedown system, similar to the DMCA's but with fewer safeguards: service providers must not only remove targeted materials but also implement broad upload filters to prevent future unauthorized content from appearing.
Critics argue that these provisions grant rights-holders veto power over innovation and could impose heavy burdens on developers and platforms based on mere allegations of unauthorized use. There is active legislative and public debate, with groups such as the Library Copyright Alliance requesting amendments to protect certain uses, like education, from liability under the Act.
On an international scale, Denmark is advancing its own law recognizing individuals' likenesses as intellectual property. Denmark's law would make it illegal to use someone's voice, face, or body in AI-generated content without permission. It explicitly exempts satire and parody, provided they are not misleading or harmful, and expects platforms to cooperate or face fines and other action from authorities.
For platforms, both the No Fakes Act and laws like Denmark’s effectively require implementing detection and filtering technologies to prevent unauthorized AI-generated impersonations, responding quickly to takedown requests, and proactively policing content. Failure to comply could result in liability if unauthorized content spreads through their services or if they fail to cooperate with authorities.
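The notice-and-takedown mechanics described above can be illustrated with a minimal sketch. Everything here is hypothetical: the class name, the exact-hash matching, and the flow are illustrative assumptions, not any platform's actual system. Real platforms would use perceptual hashing or ML-based likeness detection rather than exact byte hashes, which re-encoding trivially evades.

```python
import hashlib


class TakedownFilter:
    """Hypothetical sketch of a notice-and-takedown pipeline with an
    upload filter, loosely modeled on the obligations described above.
    Exact SHA-256 fingerprints stand in for the perceptual matching a
    real platform would need."""

    def __init__(self):
        self.blocklist = set()   # fingerprints of content under takedown
        self.hosted = {}         # content_id -> fingerprint of live content

    @staticmethod
    def _fingerprint(data: bytes) -> str:
        # Stand-in for a perceptual hash or likeness classifier.
        return hashlib.sha256(data).hexdigest()

    def upload(self, content_id: str, data: bytes) -> bool:
        """The 'upload filter': reject content matching a prior notice."""
        fp = self._fingerprint(data)
        if fp in self.blocklist:
            return False
        self.hosted[content_id] = fp
        return True

    def takedown_notice(self, data: bytes) -> list:
        """Remove matching hosted content and block future re-uploads."""
        fp = self._fingerprint(data)
        self.blocklist.add(fp)
        removed = [cid for cid, h in self.hosted.items() if h == fp]
        for cid in removed:
            del self.hosted[cid]
        return removed
```

The critics' concern maps directly onto this sketch: the filter acts on a fingerprint supplied in a notice, before any adjudication of whether the use was actually unauthorized.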
Major platforms and media companies such as YouTube, Google, and Disney have voiced support for the No Fakes Act. Meta (Facebook and Instagram) has introduced "Imagined with AI" labels on some images created with generative tools and plans to expand this labeling to video and audio. TikTok has made some progress by labeling AI-generated content using embedded metadata and joining the Coalition for Content Provenance and Authenticity (C2PA). However, TikTok's labeling is often limited to content created with in-app tools, and many videos created outside the app are uploaded without any form of disclosure.
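The metadata-based labeling TikTok relies on works because C2PA provenance data is embedded in the media file itself, inside JUMBF boxes labeled "c2pa". A crude sketch of how a platform might flag such files follows; this is only a byte-scan heuristic of my own, not the C2PA verification procedure, which requires parsing the manifest and validating its cryptographic signatures (e.g., with the official c2pa SDK).

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Heuristic check for an embedded C2PA manifest.

    C2PA stores provenance in JUMBF boxes whose label contains 'c2pa'.
    This sketch merely scans for that byte string, so it can produce
    false positives and says nothing about whether the manifest is
    authentic; real validation must verify the manifest's signatures.
    """
    return b"c2pa" in data


def suggest_label(data: bytes) -> str:
    """Hypothetical platform policy: label files carrying provenance
    metadata, pass everything else through unlabeled."""
    return "AI-generated (provenance found)" if has_c2pa_marker(data) else "unlabeled"
```

This also illustrates the gap noted above: content created outside the app, or simply re-encoded, carries no marker and sails through unlabeled.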
Spotify has taken a firm stance against impersonation, removing AI-generated songs that copied the voices of major artists like Drake and The Weeknd. However, Spotify offers little transparency to artists or listeners: users are rarely told when a song is AI-generated. User awareness remains low on Meta's platforms as well, with many people scrolling past AI-manipulated posts without realizing they have been altered.
The Human Artistry Campaign, supported by organizations like the RIAA, SAG-AFTRA, and Universal Music Group, is promoting seven key principles to ensure AI tools are used in ways that support artists. The campaign urges companies to build detection tools for unauthorized use, update policies to reflect emerging AI risks, and foster transparent, creator-focused environments.
Talent agencies and record labels are expanding their roles to better protect the artists they represent in a landscape reshaped by AI. Platforms that wait for perfect detection tools or public pressure before acting risk losing credibility, as fake and altered content continues to strain legal, cultural, and commercial systems.
In conclusion, the No Fakes Act and similar international laws represent a significant shift toward tightening control over AI-generated content involving personal likenesses. While these laws protect individual rights, they also raise concerns about overbroad censorship and innovation constraints in AI development and online speech. As AI continues to evolve, it is crucial for platforms, policymakers, and the public to engage in ongoing discussions about the ethical and legal implications of AI-generated content.