When AI Talks Back: A New Liability Approach for Chatbot-Induced Mental Health Crises

Ever since their inception, large language models (LLMs) have been a source of both optimism and opposition, fueling debates over how they should be governed. While supporters frame them as the next step in a long line of new technologies, critics warn of their far-reaching influence and insufficient oversight. The privately run companies that create and operate LLMs, such as OpenAI, DeepSeek, and Anthropic (the maker of Claude), have proven remarkably difficult to regulate, raising concerns among parents, educators, and policymakers alike. To understand why, it’s necessary to look at the legal framework that has long governed online platforms, starting with Section 230.

Since the establishment of the modern internet, one policy has largely governed the online sphere. Section 230, a provision of the 1996 Communications Decency Act, was created to protect online platforms from liability for content posted by their users. It draws a line between online spaces, like social media platforms, and traditional publishers. Under this framework, newspapers are publishers and can be held accountable for the information they produce and distribute. Social media platforms, on the other hand, are treated as neutral hosts and are not held liable for user-generated content.

While Section 230 has historically shielded internet platforms from liability over user-generated content, chatbots like ChatGPT occupy a new gray area: they generate content rather than merely host it. Because of this, courts and policymakers should not automatically extend Section 230 protections to generative AI companies. Instead, a new regulatory framework is needed—one that differentiates between hosting and autonomous creation—to ensure accountability in cases where AI outputs contribute to harm and, in the most devastating outcomes, to suicide.

For AI chatbots, unlike newspapers or social media platforms, the path toward Section 230 immunity is already narrowing, with growing agreement that they will not fall under its protection. Several measures have been taken to make this explicit: in 2023, Senator Josh Hawley introduced a bill to exclude generative AI systems from Section 230 protections, while the statute’s authors, Senator Ron Wyden and former Representative Chris Cox, affirmed that the law does not extend to AI-generated content.

However, despite this confirmation from Section 230’s authors, the provision has yet to be officially revised. Consequently, several cases have emerged regarding the applicability of Section 230 immunity in the context of AI-generated content, usually hinging on how much control or authorship the platform exercises over its output. One such case is Gonzalez v. Google, in which the family of an ISIS terrorist attack victim alleged that YouTube’s recommendation algorithms helped promote extremist content. Although the case focused on algorithmic amplification, its implications extend to generative AI: it raises questions about when technology companies become active contributors to content, rather than passive intermediaries.

During oral arguments, members of the U.S. Supreme Court suggested that generative AI systems might count as “information content providers.” Justice Gorsuch noted:

“In a post-algorithm world, artificial intelligence can generate some forms of content, even according to neutral rules. I mean, artificial intelligence generates poetry, it generates polemics today. That—that would be content that goes beyond picking, choosing, analyzing, or digesting content.”

If this interpretation holds, AI-generated outputs could be definitively excluded from Section 230 protections due to the statute’s limitation of immunity to entities that merely host third-party material, rather than generate the very speech that gives rise to liability. Taken together, these signals suggest that courts may soon treat generative AI not as a neutral platform, but as a creator of speech––placing it squarely outside Section 230’s protective scope.

The uncertainty surrounding Section 230’s application becomes even more pressing when AI interacts directly with users in sensitive contexts, such as mental health. This concern is magnified by the reality that many young people turn to the internet to seek information about suicide. In fact, in the earliest days of the internet, bulletin boards featured suicide discussion groups, still available via archive today. Google, like Facebook, Instagram, or TikTok, can display this content because of the immunity Section 230 provides. The real-world harms, namely suicide and self-harm, that have followed from the use of generative AI chatbots have fueled a more urgent movement against extending Section 230 protections to LLMs and the companies that develop them.

A multitude of lawsuits are beginning to challenge the legal ambiguity around chatbots, exploring whether companies behind generative AI systems can be held liable when their models produce harmful or dangerous outputs. In several high-profile cases, families have alleged that chatbot interactions contributed to self-harm or suicide by providing encouragement, means, or detailed methods. These claims challenge the long-standing assumption that technology companies are merely intermediaries, arguing instead that generative AI functions as an active participant in the exchange.

Most recently, in September 2025, the Social Media Victims Law Center filed three lawsuits on behalf of minors who either died by suicide or were sexually exploited after interactions with the AI platform Character.AI. The complaints were brought on behalf of 13-year-old Juliana Peralta of Thornton, Colorado, who died by suicide in November 2023; a 15-year-old identified as “Nina” from Saratoga County, New York; and a 13-year-old identified as “T.S.” from Larimer County, Colorado. The suits allege that Character.AI’s chatbots are “defective and dangerous by design,” programmed to mimic human behavior through emojis, typos, and emotionally charged language that fosters dependency and emotional manipulation. Moreover, the suits claim that the platform exploited children’s trust and curiosity by using familiar characters from franchises like Harry Potter, Marvel, and popular anime to expose minors to sexually explicit content, encourage isolation from loved ones, and, in some cases, contribute to self-harm.

Although these suits are still pending, on November 6, 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven new lawsuits in California state courts, accusing OpenAI’s ChatGPT of “emotional manipulation, supercharging AI delusions, and acting as a ‘suicide coach.’” California lawmakers have made attempts to remedy this issue; in early October, the legislature passed a bill imposing stringent safety restrictions on AI chatbots used by children. Governor Gavin Newsom ultimately vetoed the measure, expressing concern that the restrictions were too broadly written. He did, however, sign a separate law requiring platforms to display a pop-up notification every three hours reminding minors that they are interacting with a chatbot rather than a human.

Together, these cases and legislative efforts underscore the growing concern that generative AI systems are not passive tools but active conversational agents capable of influencing vulnerable users in profound ways. As these lawsuits unfold, the public will see how liability is assigned when AI outputs contribute to mental health crises or suicide; for now, the legal landscape remains unsettled, revealing a pressing need for clearer standards of accountability for autonomous, human-like technologies.

Section 230, as we have seen, fails to account for the unique risks posed by generative AI. The solution is for lawmakers to construct new frameworks that preserve technological innovation while ensuring accountability. Because chatbots generate responses rather than merely host user content, they must be placed in a separate legal category.

These LLMs draw on vast datasets to simulate empathy or companionship, and it is that simulated intimacy that can lead to real-world harm, including the encouragement of self-harm or suicide. The current absence of regulatory clarity leaves victims’ families without recourse or closure. A reasonable solution lies in a middle ground: adapting elements of product liability law to AI. Just as car manufacturers are responsible for faulty designs, AI developers could be required to demonstrate “reasonable safety” through transparent training data, moderation protocols, and human-in-the-loop review systems. Policymakers might also establish an “AI duty of care,” mandating that companies proactively prevent foreseeable harm. Just as toasters and microwaves undergo rigorous testing and new automobile models must meet exacting safety standards, LLMs should be held to a clear standard as well. Building those standards now is the only way to ensure that AI advances responsibly, protecting innovation without sacrificing human lives.

Dasha Smirnova