A New Age of Publicity: The NO FAKES Act and Federal Regulation of AI Replicas

In an age dominated by artificial intelligence (AI), the entertainment industry has evolved to reflect the capacity of new technologies to alter, enhance, or replicate human performances. As actors earn Oscars for AI-assisted performances and bands win Grammys for AI-assisted music, questions have arisen concerning the future of art as human creation. The past several years have seen lawsuits filed, contracts revised, and restrictions imposed in an attempt to regulate AI usage in the industry. One such effort, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024, was recently introduced in both the Senate and the House of Representatives in an endeavor to protect actors’ and performers’ likenesses from the misuse of deepfake technology. [1] While existing state laws and agencies like the U.S. Copyright Office support the creation of a national standard for restrictions on AI, the proposed legislation, as outlined in the current version of the NO FAKES Act, would encroach upon First Amendment protections for freedom of speech in digital spaces as interpreted and enforced by the current administration. [2]

The NO FAKES Act aims to guarantee the first federal right of publicity to individuals whose likenesses or identities may be at risk of AI or deepfake replication. The right of publicity, which primarily works to protect actors and performers from unauthorized third-party usage of their likeness or identity for commercial gain, is a historically divisive legal topic in the United States. [3] The idea traces back to landmark cases such as Lugosi v. Universal Pictures (1979) and Midler v. Ford Motor Co. (1988), which paved the way for understanding and enforcing protections of individuals’ likenesses. Midler recognized the intentional replication or imitation of an individual’s likeness for commercial purposes as unlawful under the right of publicity, while Lugosi determined that this right was not descendible to one’s heirs. [4] Later, several states enacted legislation creating an inheritable right of publicity for celebrities, one that extends past the individual’s lifetime and may be passed on to an estate upon death, likening a celebrity’s persona, including their name and image, to transferable property due to its pecuniary value. Technological advancements, however, have only further complicated the issue by raising questions about whether unauthorized third-party digital replication of an actor’s or performer’s likeness in media constitutes an intrusion upon this right of publicity.

Recent and ongoing cases surrounding conflicts of this nature offer little definitive precedent upon which future decisions may be made, demonstrating a need for more substantial legislation as right-of-publicity disputes over AI usage become increasingly prevalent. In Main Sequence v. Dudesy (2024), Main Sequence, representing the estate of late comedian George Carlin, claimed that the podcast Dudesy had encroached upon Carlin’s postmortem right of publicity by featuring an unauthorized digital replication of his voice, created using deepfake technology, in its 2024 YouTube special, “George Carlin: I’m Glad I’m Dead.” [5] The settlement reached between the two parties, in which the defendant agreed to remove the special from social media and to refrain from using Carlin’s likeness in future content without permission, suggests an acknowledgement of AI misuse on the defendant’s part, though the order notably does not bind third parties. [6] Even so, the outcome will likely steer third parties away from replicating or imitating actors’ or performers’ likenesses in future media without first obtaining consent from the individuals or their estates. Similarly, Lehrman v. Lovo is an ongoing legal battle between voice actors Paul Lehrman and Linnea Sage and the startup Lovo concerning the defendant’s alleged use of the plaintiffs’ voices in a generative AI-based text-to-speech tool. [7] In both cases, the plaintiffs argued that the defendants’ unauthorized use of their voice, image, or otherwise recognizable likeness violated their right of publicity. While the court has yet to decide whether this form of AI usage is permitted under current law, the outcome of Lehrman will be vital to understanding how courts handle the threat of unauthorized digital replicas.

In light of this unresolved discourse, agencies such as the United States Copyright Office have asserted an urgent need for federal legislation addressing the widespread use of AI and deepfake technologies. However, an attempt to establish a federal right of publicity preventing unauthorized digital replicas, as proposed in the NO FAKES Act, will most likely be struck down if courts determine that such legislation threatens citizens’ free speech rights. Content created for online posts, commercials, and similar purposes, the kind at issue in the cases discussed above, is largely protected as free speech, with few exceptions. Furthermore, previous attempts to impose similar restrictions on media to protect individuals against harmful exploitation have been criticized on First Amendment grounds. In Reno v. ACLU (1997), the Supreme Court struck down provisions of the Communications Decency Act of 1996, which sought to ban certain forms of offensive and explicit content from the Internet in an effort to protect minors, on the basis that such sweeping regulation would constitute a “content-based blanket restriction of free speech” that encroached upon Internet users’ First Amendment rights. [8] Similarly, in Ashcroft v. ACLU (2004), the Supreme Court upheld an injunction against the Child Online Protection Act’s requirements restricting minors’ access to online pornographic material, finding that those requirements could likewise infringe upon First Amendment rights. [9] Both cases exemplify the Supreme Court’s repeated tendency to err on the side of preserving First Amendment rights on the Internet, striking down as unconstitutional regulations that might interfere with freedom of speech rather than enforcing the protections those regulations intended.

The NO FAKES Act, as outlined in the bill introduced in the Senate on July 31, 2024, addresses the threat that unlicensed digital replicas, which it defines as computer-generated electronic recreations of or alterations to an individual’s voice or visual likeness, pose to that individual’s intellectual property rights. The bill provides not only that individuals who knowingly publicize a digital replica of another person without consent should be held liable for any resulting harm, but also that online services should be required to remove all user-uploaded material that “is claimed to be an unauthorized digital replica.” Exceptions would be made for actions that fall under either the exclusions explicitly listed in the legislation or the fair use doctrine, which permits the unauthorized use of copyrighted materials for certain functions, including criticism, commentary, and news reporting. [10] Content featuring AI-generated replicas created for these purposes would therefore theoretically avoid scrutiny under the proposed NO FAKES Act. However, certain types of lawful speech, such as citizen reporting or news broadcasting, may be interpreted as unprotected under the NO FAKES Act due to the vagueness of the bill’s language. [11] Unless the text is altered to explicitly preserve citizens’ freedom of speech, the NO FAKES Act will likely be viewed as an overly broad generalization of AI-generated replicas as a subset of media that is inherently harmful. Following the previous administration’s endeavor to combat the spread of “misinformation,” “disinformation,” and “malinformation” on social media, the current administration asserted in a presidential action issued on January 20, 2025 that it would be taking action to “restor[e] freedom of speech and end federal censorship.” [12] Although the NO FAKES Act was successfully introduced in the Senate as of July 31, 2024, the current administration’s stance on enforcement of the First Amendment, as expressed in that action, signals that the bill will most likely fail to receive presidential approval. Based on the precedent established in the Supreme Court rulings in Reno v. ACLU and Ashcroft v. ACLU discussed above, the NO FAKES Act, as currently written, would likely be considered a blanket restriction on First Amendment rights and thus be struck down in court.

With so much uncertainty surrounding the future of this issue, the decisions in ongoing cases like Lehrman v. Lovo will determine the scope of the need for legislation called for in documents like the Report of the Register of Copyrights. Efforts to preserve intellectual property rights and First Amendment rights will remain at odds as the question of ownership with regard to the likeness of any individual, especially an actor or performer, continues to be debated. Considering the volume of today’s media generated or enhanced using AI technology, legislators must soon determine whether AI-based replicas should be protected under free speech or the fair use doctrine in order to prevent further harm. However, the NO FAKES Act, as currently proposed, will likely be interpreted as an obstruction of the First Amendment right to freedom of personal, artistic, and commercial speech. While the NO FAKES Act will likely fail for this reason, future attempts to implement more specific legislation of this nature should not be abandoned. Rather, a stricter right of publicity will be imperative in preserving actors’ and performers’ personal ownership of their own likenesses in this age of AI, so long as the First Amendment is protected in all cases.

Edited by Andrew Chung 

[1] NO FAKES Act of 2024, S. 4875, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/senate-bill/4875/text.

[2] “A Report of the Register of Copyrights: Copyright and Artificial Intelligence” (U.S. Copyright Office, 2024), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-1-Digital-Replicas-Report.pdf. 

[3] “Select Right of Publicity Cases,” RightofPublicity.com, https://rightofpublicity.com/notable-cases.

[4] Lugosi v. Universal Pictures, 25 Cal. 3d 813 (1979); Midler v. Ford Motor Co., 849 F.2d 460 (9th Cir. 1988).

[5] Main Sequence, Ltd. v. Dudesy, LLC, No. 2:24-cv-00711 (C.D. Cal. 2024).

[6] Main Sequence, Ltd. v. Dudesy, LLC, No. 2:24-cv-00711 (C.D. Cal. 2024).

[7] Lehrman v. Lovo, Inc., No. 1:24-cv-03770 (S.D.N.Y. 2025).

[8] Reno v. American Civil Liberties Union, 521 U.S. 844 (1997).

[9] Ashcroft v. American Civil Liberties Union, 542 U.S. 656 (2004).

[10] “U.S. Copyright Office Fair Use Index,” U.S. Copyright Office, https://www.copyright.gov/fair-use/.

[11] “NO FAKES Act,” Electronic Frontier Foundation, https://www.eff.org/files/2024/09/12/2024.11_no_fakes_one_pager.pdf.

[12] “Restoring Freedom of Speech and Ending Federal Censorship” (The White House, 2025), https://www.whitehouse.gov/presidential-actions/2025/01/restoring-freedom-of-speech-and-ending-federal-censorship/.