Original or Stolen? The Battle Between AI Image Generators and Visual Artists

Generative AI has become an increasingly popular tool among artists, museums, and even corporations seeking to create more appealing advertisements. It offers creatives a powerful way to maximize efficiency and to translate their artistic vision into a final product that surpasses the abilities of the human hand. Art produced by AI has undoubtedly found its place in major art institutions: in late 2022, for example, the Museum of Modern Art in New York City (MoMA) held an exhibition called Unsupervised featuring a machine-learning model. Artist Refik Anadol trained the model on all of MoMA's publicly available works to create its own interpretation of the museum's history, providing an immersive experience for the audience. [1] Generative AI is also used for marketing: Coca-Cola became the subject of controversy during the 2024 holiday season for its AI-produced Christmas advertisement, joining companies such as BMW and Heinz that rely on the technology. Generative AI has undeniable benefits when our imaginations stretch beyond human capability, but its popularity obscures an unethical training process and the artists it leaves most vulnerable. Its widespread use harms those whose work is exploited and replicated by these machines without their consent or protection. This sinister side of generative AI is central to the ongoing case Andersen v. Stability AI in the Northern District of California.

Andersen v. Stability AI traces its origins to a 2022 New York Times essay by cartoonist Sarah Andersen. As her drawings gained popularity early in her career, people began redrawing her work and distorting it into right-wing political messaging. Then AI entered the picture: Andersen learned that typing her name into image-generation programs yielded work that replicated her distinctive style. [2] In January 2023, Andersen, alongside nine other visual artists, filed a class action lawsuit against the AI companies Stability AI, Midjourney, and DeviantArt for their creation of and reliance on Stable Diffusion, a generative AI model trained on an expansive dataset of existing art. The dataset was developed by LAION, a nonprofit that conducts academic research and therefore had lawful access to the five billion images it scraped from the internet, including the plaintiffs' copyrighted work. [3] In an Amended Complaint filed in late 2023, the artists added Runway AI as a defendant and accused the corporations of direct and indirect copyright infringement for using copies of registered artwork to train their models. [4] The defendants were also accused of violating the Digital Millennium Copyright Act (DMCA) of 1998 by altering or removing "copyright management information" (CMI), which includes any information that identifies the artists as the copyright owners. [5] Stripping CMI prevents artists from knowing when their work is reproduced by AI and makes it more difficult for them to prove that generated art is derivative of their own.

In August 2024, Judge William Orrick dismissed many of the claims against the defendants but found a few key allegations plausible, including direct copyright infringement. He explained that the plaintiffs' model infringement theory, which argues that the Stable Diffusion model "represents a transformation of [their] artwork," and their distribution theory, which argues that distributing Stable Diffusion has the same effect as distributing their artwork, both depend on how the artwork is contained in Stable Diffusion. If the model holds copies of the original works, even in algorithmic or mathematical form, it could itself be infringing. [6] To support their claims, the plaintiffs cited a statement by Stability's CEO that Stable Diffusion uses compressed versions of the training images and can "recreate" those images. They also referred to academic papers corroborating the CEO's statement and demonstrated that the model produces outputs strikingly similar to their artworks when their names are used in prompts. Based on this evidence, Judge Orrick also allowed a claim for inducement of copyright infringement to go forward, meaning Stability may be liable for encouraging users to infringe. [7] Regarding the DMCA claim, the judge found that because Stable Diffusion has not been shown to produce work identical to the training images, the defendants cannot be at fault for altering or removing CMI, a conclusion consistent with precedent from the same district. [8]

Judge Orrick clarified that previous copyright cases, including earlier generative AI cases, will be of limited use as Andersen v. Stability AI moves into the discovery stage. He stated that "run of the mill" cases that determine the similarity between two works are not applicable when an AI model is both trained on copyrighted work and "invokes" that work. [9] The outcome of each case is also specific to the AI model in question because each one functions differently. For example, one defendant cites Kadrey v. Meta Platforms, Inc., which considered copyright infringement claims against Meta's LLaMA large language models, which are trained on written texts to produce text outputs. That ruling went against the plaintiffs because their allegations were insufficient, but text generators fundamentally differ from the image generators at issue here. According to Judge Orrick, infringement depends on "what the evidence shows concerning how [image generators] operate and, presumably, whether and what the products can produce substantially similar outputs as a result of 'overtraining' on specific images or by design." [10] He reinforced that the outcome of Andersen v. Stability AI hinges on the unique mechanics of the models themselves. AI's ability to replicate and build upon the styles of visual artists is not enough to prove infringement; without technical information about how the models process art, the plaintiffs' claims remain unsubstantiated. AI image generators therefore constitute new legal territory that tests the limits of artists' copyright protection.

Though several of the plaintiffs' claims were dismissed, there is consensus in the art world that the most recent ruling represents a positive step for visual artists left vulnerable by generative AI. As the plaintiffs' Complaint points out, their careers suffer from the exploitation of their work in AI models. They write, "The value of Plaintiffs' name recognition – and thus the value of their art itself – is diluted in a market flooded with AI-generated copies associated with Plaintiffs' names and artistic styles." [11] We must keep in mind that generative AI outputs consist of elements of individual people's art, meaning artists are forced to compete with, in some sense, versions of their own work. The specific process by which AI models break down artwork, and the possibility that they do not produce anything "substantially similar" to the training images, should not obscure the fact that they can replicate the unique styles artists have built their careers on. Andersen v. Stability AI may also have consequences for the future of the art world. If, for example, AI generators retain unrestricted access to images online and their outputs can incorporate those images, AI art could overwhelm and discourage the creation of new, authentic work. Further, if this cycle continues, AI-generated art will begin to train new models, diminishing creativity as a whole. [12] While AI can be a useful tool, it should not become a replacement for the thought and dedication that go into creating meaningful art: art that an audience can connect with. Depending on the outcome of Andersen v. Stability AI and similar cases concerning image generators, however, the future of art may fall under the control of large corporations, including the defendants. [13]

A ruling that finds the defendants liable for copyright infringement could have implications for the millions of third parties who have taken advantage of models like Stable Diffusion, even if they had no role in the training process. It would also raise questions about the future of the hundreds or even thousands of image generators circulating today that rely on protected art. Training new AI models could become significantly more time-consuming and expensive without access to copyrighted work; however, copyright law may provide some pathways forward. As Judge Orrick's August ruling demonstrates, an important first step for AI corporations would be transparency, specifically about which artworks the models use and how those works are broken down into pixels and then reassembled in the output. The artists could also license their work so that they are compensated for its use. Another solution is a watermark specific to copyrighted works that internet scrapers can recognize, so that a work can be left out of a dataset entirely if its artist desires; a sketch of how such a machine-readable opt-out might work appears below. [14] Stability AI has in fact given artists the option to opt out of having their work included in future generators, but as a Harvard Business Review article points out, this move puts the burden on the artists to protect their work. [15] Copyrighted art should not be used for training by default; artists should choose whether they want their work included to begin with.
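To make the opt-out idea concrete, what follows is a minimal illustrative sketch, in Python, of how a dataset-building scraper could honor a machine-readable reservation flag. The "ai-training" metadata key and the record format are assumptions invented for this illustration; they are not part of any existing standard, nor of Stability AI's or LAION's actual pipelines, and a real system would read such a flag from embedded image metadata or from headers supplied by the hosting site.

    from dataclasses import dataclass, field

    @dataclass
    class ImageRecord:
        # A scraped image and whatever metadata accompanied it.
        url: str
        metadata: dict = field(default_factory=dict)

    def allowed_for_training(record: ImageRecord) -> bool:
        # Hypothetical "ai-training" flag, loosely modeled on robots.txt
        # conventions. Under the opt-in default the article argues for,
        # a missing flag means the work is excluded.
        return record.metadata.get("ai-training", "").lower() == "allow"

    def build_dataset(records: list) -> list:
        # Keep only works whose rights holders have affirmatively opted in.
        return [r for r in records if allowed_for_training(r)]

    if __name__ == "__main__":
        candidates = [
            ImageRecord("https://example.com/a.png", {"ai-training": "allow"}),
            ImageRecord("https://example.com/b.png", {"ai-training": "disallow"}),
            ImageRecord("https://example.com/c.png"),  # no flag: excluded by default
        ]
        for record in build_dataset(candidates):
            print(record.url)  # prints only https://example.com/a.png

The design choice worth noting is the default: an opt-in filter like this one leaves copyrighted work out of the dataset unless the artist affirmatively permits its use, whereas Stability AI's current opt-out approach does the reverse and places the burden on artists.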

Andersen v. Stability AI plays a crucial role in raising awareness of the threats that AI image generators pose to visual artists. As the case moves into discovery, the validity of the artists' claims will rest on the technical aspects of these models, especially the format in which the training images are stored and how they are employed. Focusing primarily on such details, however, could overshadow the larger consequences for artists: their work is taken and copied in some manner without their consent, and the priority should be protecting their careers. Whatever the outcome of the case, it will be formative in determining what further steps artists can and cannot take to safeguard their work and what protections copyright law affords them.

Edited by Begum Gokmen

[1] “Refik Anadol,” MoMA, https://www.moma.org/calendar/exhibitions/5535.

[2] Sarah Andersen, “The Alt-Right Manipulated My Comic. Then A.I. Claimed It,” New York Times, December 31, 2022, https://www.nytimes.com/2022/12/31/opinion/sarah-andersen-how-algorithim-took-my-work.html; Zach Schor, “Andersen v. Stability AI: The Landmark Case Unpacking the Copyright Risks of AI Image Generators,” NYU Journal of Intellectual Property and Entertainment Law, December 2, 2024, https://jipel.law.nyu.edu/andersen-v-stability-ai-the-landmark-case-unpacking-the-copyright-risks-of-ai-image-generators/.

[3] Compl., Andersen v. Stability AI Ltd., No. 3:23-cv-00201-WHO (N.D. Cal. Jan. 13, 2023), https://storage.courtlistener.com/recap/gov.uscourts.cand.407208/gov.uscourts.cand.407208.1.0.pdf; Andersen, “The Alt-Right Manipulated.”

[4] Amended Compl., Andersen v. Stability AI Ltd., No. 3:23-cv-00201-WHO (N.D. Cal. Nov. 29, 2023), https://www.courtlistener.com/docket/66732129/129/andersen-v-stability-ai-ltd/.

[5] Digital Millennium Copyright Act, H.R.2281, 105th Cong. (1998).

[6] Order on Mot. to Dismiss, 16, Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024), https://www.courtlistener.com/docket/66732129/223/andersen-v-stability-ai-ltd/; Schor, “Andersen v. Stability AI.”

[7] Order on Mot. to Dismiss, 9, 17, Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024), https://www.courtlistener.com/docket/66732129/223/andersen-v-stability-ai-ltd/; Schor, “Andersen v. Stability AI.”

[8] Order on Mot. to Dismiss, 13, Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024), https://www.courtlistener.com/docket/66732129/223/andersen-v-stability-ai-ltd/; Kevin Madigan, “Top Takeaways from Order in the Andersen v. Stability AI Copyright Case,” Copyright Alliance, August 29, 2024, https://copyrightalliance.org/andersen-v-stability-ai-copyright-case/.

[9] Order on Mot. to Dismiss, 17, Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024), https://www.courtlistener.com/docket/66732129/223/andersen-v-stability-ai-ltd/; Madigan, “Top Takeaways from Order.”

[10] Order on Mot. to Dismiss, 18, Andersen v. Stability AI Ltd., No. 23-cv-00201-WHO (N.D. Cal. Aug. 12, 2024), https://www.courtlistener.com/docket/66732129/223/andersen-v-stability-ai-ltd/; Madigan, “Top Takeaways from Order.”

[11] Compl. ¶ 212, Andersen v. Stability AI Ltd., No. 3:23-cv-00201-WHO (N.D. Cal. Jan. 13, 2023), https://storage.courtlistener.com/recap/gov.uscourts.cand.407208/gov.uscourts.cand.407208.1.0.pdf.

[12] Matt Corrall, “The Harm and Hypocrisy of AI Art,” Corrall Design, 2024, https://www.corralldesign.com/writing/ai-harm-hypocrisy.

[13] Corrall, “The Harm and Hypocrisy.” 

[14] Matthew Lindberg, “Applying Current Copyright Law to Artificial Intelligence Image Generators in the Context of Anderson v. Stability AI, Ltd.,” Cybaris 15, no. 1 (2024): 58-59, https://open.mitchellhamline.edu/cgi/viewcontent.cgi?article=1115&context=cybaris.

[15] Gil Appel, Juliana Neelbauer, and David A. Schweidel, “Generative AI Has an Intellectual Property Problem,” Harvard Business Review, April 7, 2023, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.

Aina Puri