Original or Stolen? The Battle Between AI Image Generators and Visual Artists

Generative AI has become an increasingly popular tool among artists, museums, and even corporations seeking to create more appealing advertisements. It is a powerful tool that lets creatives maximize efficiency and translate their artistic vision into a final product beyond the reach of the human hand. AI-produced art has undoubtedly found its place in major art institutions: in late 2022, for example, the Museum of Modern Art (MoMA) in New York City held an exhibition called Unsupervised featuring a machine-learning model. Artist Refik Anadol trained the model on all of MoMA’s publicly available works to create its own interpretation of the museum’s history, providing an immersive experience for the audience. [1] Generative AI is also used for marketing: Coca-Cola became the subject of controversy last holiday season for its AI-produced Christmas advertisement, and it is one of several companies, including BMW and Heinz, that rely on the technology. Generative AI has undeniable benefits when our imaginations stretch beyond human capability, but we fail to consider its unethical training process and the artists left most vulnerable by its popularity. Its widespread use harms those whose work is exploited and replicated by these models without their consent and without protection. This sinister side of generative AI is central to the ongoing case Andersen v. Stability AI in the Northern District of California.

Andersen v. Stability AI has its early origins in a 2022 New York Times essay by cartoonist Sarah Andersen. As her drawings gained popularity early in her career, people began redrawing her work and distorting it into right-wing political messaging. Then AI entered the picture: Andersen learned that typing her name into an image generator yielded work that replicated her distinctive style. [2] In January 2023, Andersen and nine other visual artists filed a class action lawsuit against the AI companies Stability AI, Midjourney, and DeviantArt for their use of Stable Diffusion. Stable Diffusion is a generative AI program trained on the LAION dataset, and the plaintiffs alleged that LAION includes their artwork among the roughly five billion images it has scraped from the internet. According to Andersen, LAION is registered as a nonprofit that aids academic research, which legalizes its possession of all these images found online. [3] In the lawsuit, the artists accused the corporations of direct and indirect copyright infringement because they used copyrighted work without a license and “passed those works off as original works by the artist whose name was used in the prompt.” [4] They also alleged violations of the Digital Millennium Copyright Act (DMCA) of 1998, specifically the alteration or removal of “copyright management information (CMI),” which includes any information that identifies the artists as the copyright owners. [5] The artists argued that, as a result, they are prevented from knowing when their work is reproduced by AI, making it impossible for them to prove that the generated art is derivative of their own.

In August 2024, Judge William Orrick dismissed all claims against Stability AI, Midjourney, and DeviantArt except for direct copyright infringement against Stability AI. He explained that the plaintiffs’ model-infringement theory, which holds that the Stable Diffusion model itself infringes on their artwork because it “represents a transformation of the artwork,” and their distribution theory, which holds that distributing Stable Diffusion has the same effect as distributing their artwork, both depend on how Stable Diffusion processes the artwork. [6] If the algorithmic representation of the original artwork within Stable Diffusion is determined to be the same work in a different medium (rather than a separate entity altogether), this would imply infringement by Stable Diffusion. The judge also allowed a claim of inducement of copyright infringement against Stability AI to go forward, on the theory that Stability enables others to violate copyright. The plaintiffs argued that the program intentionally reproduces training images, citing the CEO’s statement that Stability can “recreate” any training image as well as similar arguments made in academic articles. Regarding the DMCA claims, Judge Orrick found that because Stable Diffusion does not produce work identical to the training images, the defendants could not have “altered” or “removed” CMI, a finding consistent with precedent from the same district. [7]

Judge Orrick clarified that previous copyright cases and generative AI cases will be of little use as the case moves into the discovery stage. He stated that “run of the mill” cases that assess the similarity between two works are not applicable when an AI model is both trained on copyrighted work and “invokes” that work. The outcome of each case is also specific to the AI model in question because each one functions differently. For example, one defendant cites Kadrey v. Meta Platforms, Inc., which considered copyright infringement by a large language model (Meta’s LLaMA) trained on written texts to produce text outputs. The ruling did not find Meta liable, but text generators fundamentally differ from the image generators in question. According to Judge Orrick, infringement depends on “what the evidence shows concerning how [image generators] operate and, presumably, whether and what the products can produce substantially similar outputs….” [8] He reinforced that the outcome of Andersen v. Stability hinges on the unique mechanics of the models themselves. AI’s ability to replicate and expand on the distinctive styles of visual artists is not enough to prove infringement; without technical information about how the models process art, the plaintiffs’ claims are unsubstantiated. AI image generators therefore constitute new territory that tests the limits of artists’ copyright protection.

Though several of the plaintiffs’ claims against the defendants were dismissed, there is consensus in the art world that the most recent ruling represents a positive step for visual artists left vulnerable by generative AI. As the plaintiffs’ complaint points out, their careers suffer from the exploitation of their work in AI models. They write, “The value of Plaintiffs’ name recognition–and thus the value of their art itself–is diluted in a market flooded with AI-generated copies associated with Plaintiffs’ names and artistic styles.” [9] We must keep in mind that the output of generative AI is a blend of individual people’s art, meaning artists are forced to compete, in some sense, with versions of their own work. The specific process by which AI models break down artwork, and the possibility that they do not produce anything “substantially similar” to the training images, should not outweigh the fact that they replicate the unique styles on which artists have built their careers. The questions raised by Andersen v. Stability also have consequences for the future of art. If, for example, the datasets that train AI image generators have unrestricted access to images online, the reproduction of those images will overwhelm new, truly authentic artwork. Further, if this cycle continues, AI-generated work will begin to train new datasets, diminishing creativity as a whole. While AI can be a useful tool, it should not become a replacement for the thought and dedication that go into creating meaningful art: art that an audience can connect with. Depending on the outcome of Andersen v. Stability and similar cases concerning image generators, however, the future of art may fall under the control of large corporations, including the defendants. [10]

A ruling that finds the defendants liable for copyright infringement could have implications for the millions of third parties who have taken advantage of models like Stable Diffusion, even if they had no role in the training process. It would also raise questions about the future of the hundreds or even thousands of image generators circulating today that rely on protected art. Training new AI models could become significantly more time-consuming and expensive without access to copyrighted work; however, copyright law may provide some pathways forward. As Judge Orrick’s August ruling demonstrates, an important first step for AI corporations would be transparency, specifically about which artworks the models use and how those works are deconstructed into pixels and then reassembled in the output. Artists could also grant licenses that allow them to be compensated when their work is used. Another solution is developing a watermark specific to copyrighted works that internet scrapers can recognize, so that the work can be left out of the dataset entirely if the artist desires. [11] Stability AI has in fact given artists the option to opt out of inclusion in future generators, but as a Harvard Business Review article points out, this move puts the burden on the artists to protect their work. [12] Copyrighted art should not be used for training by default; artists should choose whether they want their work included in the first place.

Andersen v. Stability AI plays a crucial role in raising awareness of the threats that AI image generators pose to visual artists. As the case moves into discovery, the validity of the artists’ claims will rest on the technical aspects of these models, specifically whether their method of storing the training images constitutes a new work of art. However, focusing primarily on such details threatens to overshadow the dangers image generators pose to artists. Their work is taken and copied in some manner without their consent, and the priority should therefore be protecting their careers. Whatever the outcome, the case will be formative in determining what further steps artists can and cannot take to safeguard their work, and whether they must look beyond copyright law.

Edited by Begum Gokmen

[1] “Refik Anadol,” MoMA, https://www.moma.org/calendar/exhibitions/5535 

[2] Sarah Andersen, “The Alt-Right Manipulated My Comic. Then A.I. Claimed It,” New York Times, December 31, 2022, https://www.nytimes.com/2022/12/31/opinion/sarah-andersen-how-algorithim-took-my-work.html

[3] Andersen, “The Alt-Right Manipulated My Comic. Then A.I. Claimed It.”

[4] Compl., Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal.), https://storage.courtlistener.com/recap/gov.uscourts.cand.407208/gov.uscourts.cand.407208.1.0.pdf

[5] Digital Millennium Copyright Act, H.R.2281, 105th Cong. (1998).

[6] Andersen v. Stability AI Ltd., 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024), https://perma.cc/U9VG-XRPV

[7] Compl., Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal.), https://storage.courtlistener.com/recap/gov.uscourts.cand.407208/gov.uscourts.cand.407208.1.0.pdf; Andersen v. Stability AI Ltd., 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024), https://perma.cc/U9VG-XRPV

[8] Andersen v. Stability AI Ltd., 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024), https://perma.cc/U9VG-XRPV

[9] Compl., Andersen v. Stability AI, 3:23-cv-00201, (N.D.C.A., 2024-08-12), https://storage.courtlistener.com/recap/gov.uscourts.cand.407208/gov.uscourts.cand.407208.1.0.pdf 

[10] Matt Corrall, “The Harm and Hypocrisy of AI Art,” Corrall Design, 2024, https://www.corralldesign.com/writing/ai-harm-hypocrisy

[11] Matthew Lindberg, “Applying Current Copyright Law to Artificial Intelligence Image Generators in the Context of Anderson v. Stability AI, Ltd.,” Cybaris 15, no. 1 (2024): 59, https://open.mitchellhamline.edu/cgi/viewcontent.cgi?article=1115&context=cybaris

[12] Gil Appel, Juliana Neelbauer, and David A. Schweidel, “Generative AI Has an Intellectual Property Problem,” Harvard Business Review, April 7, 2023, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.

Aina Puri