AI vs. Identity: Defending Artists' Voices & Likeness

[Thoughtful synthetic music fades in and plays for 15 seconds then fades to a low volume]

Ashley Pelham: Hi everyone. Welcome or welcome back to Law of the Land. I'm Ashley, a senior at Columbia University, and I'm so excited to be here today to explore how emerging technologies like AI are challenging the effectiveness of the right of publicity for creatives in the entertainment industry. Today, we're going to take a closer look at the right of publicity, which gives people the right to control the commercial use of their name, image and likeness. Name, image and likeness, or NIL, play a huge role in how recording artists, actors and all creatives control the use of their identity. However, that landscape is quickly changing in light of new technologies. In 2024, a bill entitled the NO FAKES Act was introduced that attempts to protect the rights of creatives in the age of artificial intelligence. The bill itself has significant implications for NIL, since AI has unlocked an entirely new set of ways that a creative's identity can be used without their permission, such as unauthorized voice clones or deepfakes. However, it is highly debated whether this bill, or any other, can adequately protect the rights of creatives amidst the uncertainty that AI brings.

[Thoughtful synthetic music plays for 5 seconds]

Pelham: Fortunately, I am joined by a special guest who will help us home in on the legal intricacies of this topic so that we can have a fuller understanding of how these rights work in the context of entertainment. Our guest, Mr. Mark Lee, is a partner at Rimon PC. He teaches Entertainment, Television and Digital Media Law at the USC Gould School of Law, and he is the author of Entertainment and Intellectual Property. Of most relevance to today's discussion, he has counseled or litigated concerning the rights of publicity of Elvis Presley, Frank Sinatra, Miles Davis, Tiger Woods, Jim Brown, Doris Day and many others. We're so excited to have you here today. Mr. Lee, welcome to the podcast.

[music fades out]

Mark Lee, USC Gould School of Law: Thank you. Good to be here.

Pelham: Awesome. So taking a bit of a step back, I'd love for us to start by clarifying what these ideas of name, image and likeness actually are. I mean, what do they mean, and how do they practically impact creatives?

Lee: Yeah, what's called the right of publicity, in my view, really could be the most intuitive of the intellectual property rights. If copyright law protects what you create, and trademark law protects what you symbolize, the right of publicity protects who you are. It allows you to prevent others from commercially exploiting your identity without your permission. Name, image and likeness is a term that's been thrown around a lot. It's actually a fairly recent term. The NCAA used it to describe a policy it enacted in 2021 in response to a class action lawsuit brought on behalf of collegiate athletes throughout the country. Name, image and likeness is one subset of the right of publicity, and that policy specifically allows college athletes to prevent others from engaging in the unauthorized use of their name, image and likeness, and also gives them the right to commercially exploit that name, image and likeness while in college.

Pelham: Okay, that makes sense. It's really interesting to see how that evolution from where it started to where it is now plays out in the entertainment landscape. But just because I'm a little bit curious, under what circumstances would you say it is usually invoked, whether that be with athletes or, you know, with creatives, musicians, etc.?

Lee: Yeah, everyone owns a right of publicity under the laws of most states, but the people who tend to invoke it the most are, as you would expect, celebrities. Fame has value. The right of publicity allows everyone to protect that value, but celebrities have the most to protect, because other people want to use their celebrity to sell goods, products and services. And so most often, you'll find that someone you know very well, or the estate of someone you know very well, will file suit or take other action to prevent others from commercially exploiting that value.

Pelham: Okay, that makes sense. I definitely can see why celebrities would be the quickest to sort of invoke that right, given what they have to lose and sort of the stakes associated with their stance. So definitely makes sense on that point. From your perspective, do you think that the right of publicity, sort of as it currently stands within the law, is effective for protecting someone's right to control their NIL?

Lee: In many circumstances, yes. If you're talking about the typical right of publicity case, which would, for example, involve use of someone's name or image or likeness in an advertisement without permission, then yes, current right of publicity law gives effective legal rights to people whose identities are exploited, and allows them to seek judicial relief to stop that sort of unauthorized exploitation. In other circumstances, I would have to say, probably not. Those other circumstances involve situations in which some aspect of one's identity is exploited, but perhaps not exploited in advertising or in merchandise. For example, one can presently envision a video in which someone makes it appear that a person said or did something that they didn't really say or do. If the statement is defamatory, one might have rights under defamation law. Often it may not be, but it may be misleading and may give the wrong impression about what someone said or did. And in many circumstances, that can be very harmful, but current right of publicity law does not effectively prevent that in most states. You mentioned the NO FAKES Act; that is a federal law that's been proposed, although not enacted. Several states do have prohibitions now against digital impersonators. How effective those laws will be is a little unclear, because they haven't been judicially tested yet. And the big problem is the First Amendment. Everyone has a First Amendment right to tell stories, and to tell stories about others, so long as they're not defamatory. And that's fine. That's what much news reporting is. But when you make it appear as if someone really said or did something they didn't do in a way that could be harmful to the person, I'm afraid right of publicity law does not effectively prevent that right now.

Pelham: That’s interesting, yeah, sort of seeing that tension between, you know, First Amendment freedoms to tell stories and tell them how you want to but also the right of publicity and potential defamation. That's an interesting sort of crossroads that people find themselves in. So thank you for that context, I guess, sort of pivoting a little bit to artificial intelligence, as we sort of began to allude towards with the discussion of the NO FAKES Act, in your opinion, how has AI impacted the efficacy of the right of publicity for creatives, specifically when, as you brought up, elements of their identity are being imitated without their permission, using new technologies that, to be honest, people don't necessarily fully understand at this point, right?

Lee: The answer is, it's made it much easier for people or companies to utilize someone else's identity in a way that the person might not wish. As I mentioned before, everyone has a First Amendment right to tell stories about others. Many actors have portrayed celebrities in motion pictures, and certainly the First Amendment permits that. But with AI, one could make it appear as if the actual person was rendering a performance he or she did not actually render, and I'm afraid current right of publicity law doesn't necessarily prevent that, at least not effectively under the laws of most states. And so it would be very difficult for someone to object to being portrayed that way and made to appear that they rendered a performance when they hadn't. It makes it much easier for what I would view as a right of publicity violation to occur.

Pelham: Right. That makes sense. It's sort of a situation in which, you know, if a video or something comes out where it looks like someone else is saying something they didn't, it puts you in an interesting position, because, you know, people have already seen it, been exposed to it, and that line of exposure has already been crossed, which is complicated by these technologies. So it definitely makes sense. But in a perfect world, obviously we don't live in one, but if we were to how do you think that the right of publicity should work in light of the rise of AI, given that, like you said, there are so many circumstances where you know our senses can deceive us in terms of what is true and what isn't.

Lee: That's a very interesting question. The fact is, there's always been a tension between the right of publicity and the right to free expression under the First Amendment. Courts have grappled with that issue for literally decades. Various courts around the country have arrived at different standards, tests and guidelines that I don't think are necessarily consistent. The Supreme Court actually addressed the issue once, in a case brought by a human cannonball named Hugo Zacchini, who objected to his entire act being depicted on the evening news without his permission. The Supreme Court, in that case, interestingly, held that he did have a right of publicity claim for the unauthorized taking of his entire act to show on the evening news without compensation to him. But the Supreme Court, unfortunately, did not articulate a legal standard in that decision by which right of publicity claims involving other uses of one's identity could be evaluated under the First Amendment. And so right now, with all due respect to the courts who have addressed the issue, I just don't think there is a set guideline. Personally, and no one has asked me before what my view is, but personally, I would argue that if one simply hires an actor to render a performance, to portray an individual, that should be perfectly fine under the First Amendment. There have been several recent motion pictures which have done it, and that certainly doesn't violate the right of publicity, in my view. But when one uses AI or similar technology to make it appear that someone really is rendering a performance, or really is saying something, or really is doing something that they didn't actually say or do, I believe that should be a right of publicity violation, especially if the use is made to sell a product in an advertisement or as an item of merchandise. But even in the entertainment space, I would argue that should not be permitted.
That's a bridge too far, because you really are literally taking virtually all recognizable indicia of their identity and making it seem they did things they didn't do.

Pelham: Yeah, that makes sense. I mean, you're intentionally misleading people by misconstruing someone else's identity in order to portray a story or a narrative that you did not consent to and might not have wanted to portray to the world. So that definitely adds in an interesting layer there that is something to contend with.

Lee: Yeah, and one can envision scenarios in which all manner of uses could be made, not just in a major Hollywood production. Let's say, hypothetically, someone took your image and made it appear that you were unfaithful in a romantic relationship, right? Well, that would be horrifying to most people. Or let's say they purport to show you accepting a bribe when you didn't accept a bribe. Imagine a politician is shown as taking a position that is actually the opposite of the position he or she really took, in a political ad by an advocacy organization that is trying to promote a certain view. One can envision all manner of uses that I think most of us would find objectionable, if not horrifying, and that current right of publicity law, I would submit, does not effectively prevent.

Pelham: I mean, that makes sense. The implications obviously stretch beyond just the realm of entertainment. There are many circumstances, as you mentioned, in which someone doesn't want artificial intelligence or any other technology telling a narrative that they didn't consent to. So it's sort of at an interesting point where, legally speaking, this needs to be addressed, because artificial intelligence and various other technologies are going to continue to proliferate and open the door for many other opportunities for that version of exploitation. So very interesting.

Lee: Yeah, you're absolutely right. The reality is it's going to happen. People will be able to do it technologically, and it will become easier and easier. Anyone who has a grudge against anyone else could use it against that person. So I personally really do believe there is a need for some sort of effective legislation or legal remedy to stop that, or it's going to happen and make it harder and harder for everyone to tell what is true and what isn't.

Pelham: There is a lot of good that can be associated with, you know, further opportunities for creativity that are unlocked by platforms like AI, but there have to be legal safeguards in place in order to make sure that it doesn't cross that line of compromising, you know, people's right to control how they are perceived and how they come across to the world, and not having artificial intelligence and technology sort of twist their words or twist their stances or perceptions in various contexts.

[music fades in, plays for ten seconds, and fades out]

Pelham: You mentioned, you know, the possibility of there being a form of legislation or policy that would be able to sort of act as that type of safeguard. We mentioned a little bit about the NO FAKES Act, and I'd like to talk about that a bit more, but first we'd love to give a little bit of context. So the NO FAKES Act, or the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, perfectly titled, is a bipartisan bill, as I have read, introduced in the Senate in 2024 that would protect individuals against unauthorized AI-generated replicas that use their voice and likeness, obviously, in this case, without their consent. I also know, as you mentioned, that if this were to be officially passed, it would create the first federally recognized right of publicity. What are your thoughts regarding whether or not this act would be enough to protect people from the exploitative risks of AI that we've mentioned? You know, do you think this could make a sizable impact, or would you say that its scope would be a little bit too limited to have the kind of impact that we need to establish those safeguards?

Lee: I would say it would be an important first step, and it would be a great thing if it were enacted. I, candidly, am not optimistic it will be enacted. I do see a significant need, but you should know, to give you some background, there have been efforts made to enact a federal right of publicity for over 20 years.

Pelham: Oh, wow.

Lee: The American Bar Association proposed a right of publicity bill, literally, in the late 90s and early 2000s, that went nowhere. It was not introduced in Congress, and there was very little interest in doing so. The NO FAKES Act itself was introduced, as you pointed out, I believe, in the summer of 2024, amidst growing public concern over the risks posed by AI, but it went nowhere. It did not reach committee and effectively died at the end of the 2024 session, although it has been reintroduced as of a couple of weeks ago, so there is still some hope for it. Is the bill perfect? I'm not sure. It's rather limited, to be candid, but it does attempt to prohibit the sort of digital imitation that I think is at the heart of the public's understandable concern about the risks of AI. Whether it will pass, I don't know. And if it does pass, whether it will survive judicial scrutiny is also unclear to me. You know, I mentioned before, several states do have legislation that seeks to prohibit the same sort of misconduct that the NO FAKES Act attempts to prohibit. California, specifically, has a right of publicity law, and it also has a provision specifically prohibiting use of AI in the political advertising space.

Pelham: Oh, interesting.

Lee: But just last October, a federal court in the Eastern District of California held that the California law against AI in political advertising was unconstitutional because it restricted speech. That was the Kohls v. Bonta case. With all due respect to that court, I really don't think it analyzed the issue correctly, but nobody's asking me, and I don't represent anybody in the dispute. That is the only decision on the books that I know of so far around the country evaluating these sorts of anti-AI statutes. And so it will be interesting to see, if either the NO FAKES Act is enacted or if there is litigation concerning other state statutes that attempt to prohibit similar things, whether those will pass constitutional muster. I think there is a way to do it-- and I know what I would argue if I were representing a party attempting to enforce those laws-- but the ability to do so is not crystal clear to me at present.

Pelham: I mean, that makes sense. You do have this overarching precedent of the First Amendment protecting people's ability to speak freely, but then at the same time you have that tension that the right of publicity, specifically in regards to technology, presents. So yeah, it'll be interesting to see what precedent the courts eventually set in regards to how this operates in future court settings. So, very interesting context. Really appreciate you providing that. But just to move away from policy for a moment, I'd like to dive in a little bit further in terms of how the courts factor into this discussion. I know we mentioned this a little bit briefly, but is there a reason you think that we haven't seen many cases in the courts pertaining to the right of publicity in general?

Lee: I actually think there have been few of them for a couple of reasons. One, as I said, these cases tend to be brought only by people who have what they view as a valuable asset, and who have the financial resources to protect those rights. For example, for several years, I worked on right of publicity cases on behalf of Elvis Presley Enterprises. The estate of Elvis Presley, from my humble perspective, did a magnificent job of really almost creating the right of publicity, or at least making it a significant legal right. And it did so because of the really remarkable fame of Elvis Presley, and because, from the estate's perspective, that was a very important way to protect Elvis Presley's legacy, right, and to ensure that it was honored going forward. So they had the financial resources, they had the interest, they had the desire, and they did that. Unfortunately, for most of us-- I mean, I hope this would never happen, Ashley, but if, heaven forbid, someone used your image in an ad without your permission, you might be upset about it, especially if you didn't like the product, right? But your ability to file suit to recover for the violation of your right of publicity would be limited. Litigation is expensive.

Pelham: Definitely.

Lee: And it would be hard, though not impossible, for you to get effective legal relief. So, as a result, there have not been a lot of cases about that. Also, I would say, within the greater intellectual property community, many intellectual property owners are zealous in protecting their rights, but the right of publicity is one in which one group of creators can be opposed to another group of creators. Which is to say, traditionally, the Motion Picture Association, which represents the major motion picture studios, has opposed many right of publicity laws, and, I believe, came out against the NO FAKES Act, for example. Why? Because movie producers would rather be able to use a celebrity's identity for free. They would rather not have to pay for it, and so they'll lobby against laws like that and try to discourage litigation by people to protect those rights. Now, the Screen Actors Guild is an important guild on behalf of actors that works very hard to protect and promote their members’ rights of publicity to the extent possible. So there are competing interests involved, but that's part of the reason why, for many people, unfortunately, even if their rights are exploited, they just may not easily have the financial or legal resources to take action.

Pelham: Right. I mean, that makes sense. As we've mentioned, going to court and having a trial requires a lot of time, effort and money, and a lot of emotional investment as well. And that's not necessarily something that is super accessible for the masses. So even if people do find themselves in a position where, hopefully not, their name, image or likeness is exploited, the remedies aren't necessarily easy to take advantage of for those that don't fall under celebrity or high wealth status. So definitely something to contend with in the future, for sure. Another thing that you said that I honestly hadn't really thought of that much, though, is the tension within the entertainment industry, where you have people within, you know, film production companies opposed to, say, the NO FAKES Act, but also people specifically within, say, the music industry, that are very much for it and very much want to push it forward. So I mean, if you have an entertainment industry that is divided, would you say that that is inherently going to slow down the progress that an act such as this one can make, or would we need to find a unified act that entertainment as a whole can sort of agree on in order to find success?

Lee: Certainly it would make passing legislation more difficult, because you will have stakeholders advocating on both sides of the issue, and it's always easier to obtain legislative relief if everybody, or at least everyone on one side of an issue-- all creators, for example-- takes the same position. When there are policy disputes amongst stakeholders, for understandable reasons, policy makers and legislators may be more reluctant to move legislation forward, even though, in the case of the NO FAKES Act or other similar legislation, that legislation would benefit everyone-- every human being, every person-- and would afford legal protection to anyone from unauthorized use of AI, deep fakes or similar technology to exploit them without their permission. But since every person is not lobbying Congress, sometimes those broader, more diffuse interests don't have the same political clout on Capitol Hill as a lobbying organization might.

Pelham: Right. Yeah, the lobbying organization has, you know, sort of a one-track mind in terms of furthering their specific stance. So I can see how something that might be in everyone's best interest might not necessarily have the same sort of force and power as a specified interest. That makes sense; whether that's good or bad can be, you know, debatable, but that definitely makes sense. Okay, so since there is a great deal of uncertainty here in terms of what the future will hold in regards to how this will play out in the courts, and how legislation will begin to form around the right of publicity and AI, do you think it would be useful for people, specifically creatives, to begin exploring other methods of protecting their NIL? And if so, what would those be? I know you talked about, you know, defamation, but are there circumstances under which trademark or anything else would also be useful for people?

Lee: Well, one can consider various legal disciplines, and in a particular circumstance, any one might apply, right? The problem is that many very foreseeable uses of AI deep fakes won't qualify as defamatory-- defamation being a separate body of law that has many restrictions itself to prevent violation of free speech rights-- and there might be circumstances in which an AI-generated image could violate trademark law, for example, or constitute what's called false designation of origin. But the truth is, many circumstances will not, yet might still involve what we would recognize as a violation of name, image and likeness rights. For example, let's say someone ran an ad without Lady Gaga's permission that says, “Lady Gaga endorses our product.”

Pelham: Oh, that's awesome.

Lee: Well, that's a classic Lanham Act violation-- unfair competition, false designation of origin-- if, in fact, Lady Gaga had not endorsed the product. Well, let's suppose the ad said something else. What if the ad said, “Lady Gaga does not endorse our product, but she should.” Does that violate trademark law? No, it doesn't, because anyone reading it would recognize that Lady Gaga does not endorse the product. But that use would still violate Lady Gaga’s right of publicity, because someone is using Lady Gaga’s name to evoke her identity in the minds of readers and using that identity to attract attention to the product they are trying to sell. So that is a classic right of publicity violation. But the reality is, all the laws individuals now use to protect themselves from unauthorized commercial exploitation can work sometimes in the AI setting, but in many others may not. The only other thing I know is that certain groups are trying-- I mentioned before the Screen Actors Guild, right? There's also the Writers Guild and various other guilds for creative individuals.

Pelham: Of course.

Lee: They are all negotiating, or have negotiated, and are continuing to negotiate over AI rights in motion pictures and music and screenplays and various other things, and they are attempting to negotiate for themselves protections against various kinds of unauthorized uses of their identities or their work product through those negotiations. They've achieved limited success-- I would call it very limited-- but they've made some progress in those efforts, and they're going to continue to do so-- I'm positive-- going forward. The problem is that they probably won't be able to negotiate rights that they would consider adequate-- number one. Even worse, most of us are not members of those guilds.

Pelham: Right.

Lee: So we won't get the benefit of whatever they're able to negotiate for their members.

Pelham: Definitely, that makes sense.

Lee: Which means to really get effective relief, we probably really do need legislation, and it probably really should be federal legislation to give everyone in the United States the rights needed to protect themselves. And as I said before, I personally don't think the NO FAKES Act is perfect, but I would view it as an important first step that would benefit everyone in protecting themselves against a future in which otherwise anyone will be able to make it look like they said or did anything in circumstances that they might not want.

Pelham: Right. Yeah, I can definitely see how having, you know, the opportunity for sort of a technological free for all could result in lots of issues in the future. I mean, as we've seen many cases today where things have already started to sort of get out of hand. So, definitely agree that hopefully, whether that be the NO FAKES Act or something that more people can agree on, we start to see some sort of legislation or policy that can begin to sort of start putting in those guardrails to protect people.

[music fades in and plays for fifteen seconds, then fades to low]

Pelham: I think that is a great place for us to end our conversation. Thank you so much, Mr. Lee, for joining us today on the podcast. We learned so much from you about the future landscape of the right of publicity as it relates to entertainment, AI, and, you know, the First Amendment and other topics. So we greatly appreciate you taking time out of your day to join us.

Lee: Oh, you're very welcome.

[music plays at full volume for ten seconds and fades out]

Edited by Jack McCormick

Ashley Pelham