Required Deepfake Labeling: Contradicting or Supporting the First Amendment?

The First Amendment was ratified roughly 235 years ago; advanced AI has existed for only about two percent of that span, yet its implications for privacy, defamation, and fraud are already profound. In these few short years, the rise of synthetic media and "deepfakes" has already prompted both the United States Congress and numerous state legislatures to enact laws aimed at limiting their potential harms. These laws, however, face a constitutional obstacle: the First Amendment protects speech, including content that is false, manipulative, or offensive, unless that content can be tied to a legally recognized, demonstrable harm. Deepfake regulation must therefore target demonstrable harms such as privacy invasion, defamation, and fraud; doing so is essential to upholding First Amendment principles. Broad or content-based bans risk being struck down as unconstitutional, but narrowly tailored rules, such as factual labeling requirements paired with harm-focused enforcement, can reduce real-world harms without restricting protected speech.

Many proposed deepfake laws fall into the broad, content-based category because they single out specific types of media, such as depictions of individuals engaging in fabricated acts or inauthentic political speech. The First Amendment generally forbids laws that regulate speech based solely on its content. In Reed v. Town of Gilbert (2015), the Supreme Court reaffirmed that when the government restricts speech purely because of what it depicts or expresses, the law is deemed "content-based" and is therefore subject to strict scrutiny, the highest level of judicial review. In United States v. Alvarez (2012), the Supreme Court struck down the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have received military decorations. [1] The four-Justice plurality emphasized "that falsity alone may not suffice to bring the speech outside the First Amendment," and, because the Act was a content-based restriction, it subjected the law to the most exacting scrutiny. (Congress later passed a narrower Stolen Valor Act in 2013, signed by President Barack Obama, which reaches only false claims made to obtain money or other tangible benefits.) Like the defendant in Alvarez, a deepfake creator may produce false content, but unless that content causes a legally recognized harm such as fraud or defamation, it remains protected under the First Amendment. Similarly, in Reno v. ACLU (1997), the Court invalidated the Communications Decency Act's broad restrictions on "indecent" online content, enacted to shield minors, finding them unconstitutionally overbroad. [2] Together, Alvarez and Reno indicate that categorical bans on deepfakes would likely fail strict scrutiny because they regulate expression based on its message rather than its effects.

Unlike categorical bans, disclosure requirements such as labels and watermarks, when factual and non-ideological, may pass constitutional muster. The Supreme Court's decision in Zauderer v. Office of Disciplinary Counsel (1985) established that the government may require purely factual, uncontroversial disclosures so long as they are reasonably related to preventing consumer deception. [3] Under this reasoning, modest labeling requirements for material generated by artificial intelligence could increase transparency without sacrificing constitutionally protected expression. Fact-based disclosures differ sharply from compelled ideological speech, as illustrated by 303 Creative LLC v. Elenis (2023), in which the Court held that a state may not force a business owner, there a website designer who objected to creating websites celebrating same-sex weddings, to produce a message that violates her sincerely held beliefs. [4] Unlike ideology-based mandates, which force individuals to express viewpoints they may disagree with, fact-based markers simply provide neutral and truthful information. Labeling deepfake content does not compel a person to endorse a predetermined message; it only ensures that audiences receive truthful context in a media landscape increasingly vulnerable to manipulation. In this sense, rules requiring the disclosure of AI-generated material may reinforce, rather than contradict, the First Amendment's core principles.

Existing legal categories such as privacy, defamation, and fraud already capture the harms that deepfake laws seek to address. In New York Times Co. v. Sullivan (1964), the Supreme Court held that public officials cannot prevail in a defamation suit unless they prove that a false statement was made with "actual malice," that is, with knowledge of its falsity or reckless disregard for the truth. [5] Sullivan underscores that the First Amendment does not permit the state to suppress false or misleading expression simply because it is incorrect or offensive; constitutional limits require the government to tie regulation to provable harm. This complicates modern proposals aimed at deepfakes, which often seek to regulate falsity itself. Courts have already developed the terms and tests that capture intent to harm, underscoring that speech regulation must be linked to demonstrable injury rather than mere falsehood. Deepfakes that invade privacy, defraud individuals or groups, or damage reputations fall well within these established exceptions; rather than creating entirely new speech regulations to combat AI-driven harm, the legal system can adapt existing doctrine to hold malicious creators of deepfake content responsible. Sweeping content-based bans and vague prohibitions risk suppressing protected speech, while harm-based regulation already aligns with constitutional precedent and established enforcement.

Emerging deepfake technology challenges our traditional understanding of expression and truth, but the First Amendment has proven resilient in the face of new mediums of communication. Courts have consistently required the government to justify speech restrictions by pointing to demonstrable harms. The path forward for regulating deepfake content is therefore not broad, extensive prohibition, but the application of established, narrowly tailored measures rooted in legal categories such as privacy, defamation, and fraud. Required factual disclosures, when true and non-ideological, can complement rather than contradict the First Amendment; the Constitution is not a barrier to addressing media deception, but the key to preserving freedom while preventing genuine harm.

Edited by Jaci Walker and Cara Wreen

[1] United States v. Alvarez, 567 U.S. 709 (2012)

[2] Reno v. ACLU, 521 U.S. 844 (1997)

[3] Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985)

[4] 303 Creative LLC v. Elenis, 600 U.S. 570 (2023)

[5] New York Times Co. v. Sullivan, 376 U.S. 254 (1964)