Design as Conduct: Reframing Platform Architecture in Contemporary Tort Law
In October 2025, the City of New York, the New York City School District, and New York City Health + Hospitals filed a sweeping complaint alleging that major social media platforms deliberately engineered features that induce compulsive use among children and adolescents, thereby driving a citywide youth mental health crisis. The complaint describes platform architectures such as endless algorithmic feeds, intermittent variable rewards, persistent notifications, and weak age verification as design features purposefully calibrated to maximize engagement and advertising revenue. It seeks injunctive relief, abatement, and damages to address the public burdens these design choices have created. [1] These allegations reflect a central doctrinal question for tort law: when harm stems from a platform’s design rather than user-generated content, what avenues of liability remain available to plaintiffs? This article argues that contemporary litigation is increasingly reframing platform design itself as actionable conduct, prompting courts to reconsider how traditional tort doctrines apply to algorithmically engineered harms.
The NYC complaint grounds its primary claim in New York public nuisance doctrine. It frames the harms as collective and institutional: schools, hospitals, and public mental health systems have been compelled to expend resources to address the surge in youth behavioral and psychiatric needs attributed to platform design. Accordingly, the complaint seeks more than individual compensation; it demands abatement and institutional remediation on the theory that the platforms’ architectures have created a “public nuisance” that materially impairs public welfare. This framing appears throughout the complaint’s introduction and causes of action and is explicit in the plaintiffs’ demand that defendants “bear the burden of remedying their wrongs” rather than having the costs fall on New Yorkers. [2] By rooting liability in public nuisance, the complaint does more than invoke a familiar mass-tort framework; it repositions design itself as a source of collective harm that burdens public systems. This move is central to the emerging trend in design-based platform litigation because it allows plaintiffs to treat algorithmic architecture not as a conduit for speech, but as conduct capable of generating widespread, system-level harms that traditional content-based theories cannot reach.
Two key consequences follow from this posture. First, public nuisance theory permits municipal plaintiffs to seek systemic remedies such as injunctions, abatement orders, and damages for public costs that individual plaintiffs typically cannot obtain. Second, by situating the harm within public institutions, these complaints reframe what otherwise appear to be private psychological injuries as collective harms to public services. This shift makes traditional nuisance and negligence doctrines far more operationally relevant. The posture also reveals the strategic role that municipal plaintiffs increasingly play in design-based platform litigation: by drawing on institutional data and framing harms at the level of public systems, cities can articulate claims and seek remedies beyond the procedural and evidentiary reach of individual users. In this respect, the NYC filing aligns with a broader national trend in which public entities have become central actors in efforts to regulate engagement-driven platform design through tort law.
A key factual predicate of the complaint is its sustained emphasis on design rather than isolated user content. The filing repeatedly attributes harm to platform “design features,” including an algorithmically generated endless feed, intermittent variable rewards, trophies, social metrics that exploit social comparison, and inadequate parental controls, and it describes these elements as intentionally engineered to increase engagement and drive advertising revenue. For example, the complaint alleges that platforms “deliberately embedded in their platforms an array of design features aimed at maximizing youth engagement to drive advertising revenue,” listing specific engagement-optimizing features that purportedly create obstacles to discontinuing use. [3] In this framing, “design” encompasses not only discrete interface elements but also the broader platform architecture that structures user interaction and embeds incentive systems, underscoring the complaint’s claim that these features are deliberate and systematically harmful.
By framing harms as the product of platforms’ own design choices rather than user-generated content, these claims situate themselves squarely within an ongoing doctrinal debate over the scope of Section 230’s immunity. Section 230 of the Communications Decency Act has long been interpreted to shield platforms from liability for harms arising from user-generated content. Contemporary litigation, however, increasingly argues that this immunity does not extend to harms produced by a platform’s own architecture. [4] The distinction is meaningful: while courts have traditionally treated editorial decisions—whether to curate, distribute, or remove third-party content—as protected, plaintiffs in design-based suits contend that recommendation algorithms and engagement-optimizing interfaces are engineered product features with foreseeable risks, not editorial judgments. [5]
This tension became particularly visible in Gonzalez v. Google LLC, where petitioners argued that YouTube’s recommendation system went beyond mere publication and materially contributed to harm by amplifying extremist content. [6] Although the Supreme Court ultimately avoided issuing a definitive ruling, the briefing and oral arguments exposed a fault line: whether algorithmic curation should be treated as conduct distinct from hosting content. In the absence of a clear doctrinal resolution, lower courts have diverged in their analyses. Some treat algorithmic recommendations as categorically protected under Section 230 because they rely on third-party material; [7] others have been more willing to consider recommendation systems as product-like conduct outside the statutory immunity. [8]
Design-based litigation such as the NYC case implicitly tests the boundaries of Section 230 by framing engagement-maximizing architecture as a distinct source of harm. If courts accept that design, rather than content, is the operative cause, Section 230’s publisher-immunity framework becomes far less relevant. This helps explain why plaintiffs nationwide increasingly combine nuisance or negligence theories with design-focused factual allegations: they seek to craft causes of action that survive Section 230’s expansive preemption by shifting the legal focus from speech to architecture. The resulting doctrinal split matters because it will determine whether courts interpret algorithmic design as protected editorial judgment or as actionable conduct, highlighting either the potential for a doctrinal shift that narrows Section 230 immunity or the persistence of disagreement that keeps the statute broadly protective.
A parallel, though conceptually distinct, pathway arises from product liability. Traditional product liability doctrine encompasses manufacturing defects, design defects, and failure to warn; historically, however, these categories have applied to tangible consumer goods, not digital interfaces. [9] Yet as platforms increasingly resemble engineered products that shape user behavior, plaintiffs argue that engagement-optimizing features constitute “design defects” that render platforms unreasonably dangerous for foreseeable users, particularly minors. Applying this framework raises significant doctrinal stakes: courts must decide how to define the relevant “product,” whether intangible software constitutes a tangible good for purposes of liability, and how traditional risk-utility balancing and foreseeability analyses translate to algorithmic design. These issues carry significant practical implications, as recognizing digital interfaces as products could expose platforms to expansive legal risk and reshape incentives for software design and content moderation.
This theory has gained some traction in the ongoing multidistrict litigation. In In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation (MDL No. 3047), plaintiffs allege that features such as infinite scroll, autoplay, algorithmic recommendation, and gamified feedback loops exploit vulnerabilities in adolescent neurobiology. They contend that these features are not incidental but represent deliberate engineering choices that materially increase the risk of compulsive use, sleep disruption, self-harm, or exposure to harmful challenges and content. [10] Unlike content-based harms, which stem from posts, videos, or messages created by other users, these alleged harms originate from the platforms’ own design decisions. This distinction strengthens plaintiffs’ claims by framing the platform itself as an actor whose choices foreseeably create risk, potentially circumventing Section 230 protections for third-party content.
Courts have been divided in their responses. Some have dismissed claims on the ground that plaintiffs failed to identify a cognizable “product” or articulate a specific “defect” within the meaning of traditional product liability doctrine. In Bogard v. TikTok, for instance, the court declined to treat a social media platform as a product, reasoning that it is an intangible service intertwined with user expression. [11] But other courts within the MDL have permitted limited discovery, recognizing that modern platform architecture, characterized by engineered behavioral effects, may not fit neatly within legacy distinctions between products and services. [12] These divergent rulings reveal a deeper doctrinal uncertainty: courts struggle to define what constitutes a “product” in digital environments, how risk-utility analysis should apply to software interfaces, and whether intangible, behaviorally engineered features can trigger traditional product liability. The absence of a coherent framework highlights both the conceptual strain and the transformative potential of applying traditional product liability to algorithmic design.
What emerges is an unsettled but evolving doctrinal landscape. Product liability offers a conceptual tool for addressing harmful design, but courts must grapple with defining the relevant product, identifying the defect, and determining whether risk-utility balancing applies to digital environments. Although the NYC complaint is not a product liability action, it reinforces this trajectory by characterizing design choices as intentional, profit-driven, and foreseeably harmful. These are precisely the types of considerations that underlie design-defect analysis.
Taken together, these doctrinal pathways—public nuisance, design-focused challenges to Section 230 immunity, and product liability claims—create converging pressures on courts to reconceptualize platform architecture as actionable conduct rather than a mere conduit for speech. Public nuisance emphasizes systemic harms to institutions; Section 230 litigation interrogates the boundary between content and design; and product liability frames behavioral engineering as a design choice subject to risk-utility analysis. Collectively, these approaches push courts toward recognizing deliberate engagement-maximizing design as a legally cognizable source of harm, even though existing doctrines were not developed with digital environments in mind.
Across these doctrinal strands, courts and litigants increasingly converge on a shared conceptual insight: algorithmic harm arises not from isolated pieces of content but from the structure and incentives embedded in platform design. This recognition enables plaintiffs to leverage doctrines traditionally applied to physical or environmental harms and adapt them to digital contexts. Public nuisance frames algorithmic design as a harm to public systems; Section 230 debates center on the distinction between content and design; and product liability seeks to treat design choices as engineering decisions accountable to safety expectations.
Yet a persistent challenge across all three doctrines is establishing causation. Courts have grappled with the evidentiary difficulty of linking platform design to specific medical or psychological outcomes, especially in adolescents whose mental health is shaped by overlapping social, environmental, and developmental influences. Behavioral harms arise through complex interactions over time, mediated by individual choices, peer dynamics, and differential susceptibility. These complexities shape plaintiffs’ reliance on epidemiological and institutional data and contribute to judicial caution in expanding liability.
Taken together, these doctrines allow plaintiffs to construct a multi-pronged liability strategy: government plaintiffs can pursue nuisance and negligence claims for public harms; individual plaintiffs can seek recovery through design-defect theories; and both can attempt to circumvent Section 230 by grounding liability in platform architecture rather than speech. The NYC complaint exemplifies this convergence by anchoring liability squarely in design and situating itself within a growing body of cases that treat algorithmic systems as engineered environments with legally cognizable risks.
Despite the momentum behind these theories, significant doctrinal challenges remain. Causation requires plaintiffs to show that specific design choices, rather than broader social, psychological, or environmental factors, produced the alleged harms. Courts may be wary of attributing complex mental health outcomes to platform architecture in the absence of clear scientific consensus. Duty questions persist, complicated by long-standing assumptions about user autonomy and parental responsibility. Expanding public nuisance doctrine may raise concerns about doctrinal overreach, echoing criticisms seen in opioid and lead paint litigation. Narrowing Section 230 through judicial reasoning may provoke First Amendment concerns, while broadening product liability to digital services risks erasing distinctions that have historically structured tort law. Each doctrine has the potential to reshape an entire field, and courts may be reluctant to establish transformative precedent through tort litigation alone.
Algorithmic design litigation represents one of the most consequential tests for tort law in the digital era. By reframing social media platforms as engineered environments with foreseeable behavioral consequences, plaintiffs seek to adapt public nuisance, product liability, and Section 230 jurisprudence in novel ways. The NYC complaint exemplifies how a design-centric approach may allow plaintiffs to bypass traditional content-based obstacles and instead target the profit-driven architectures that structure user behavior. Whether courts can adapt these doctrines without overextending their boundaries will shape the next decade of digital platform regulation and determine whether the law can meaningfully respond to the public health challenges posed by contemporary platform design.
Edited by Christina Park
[1] City of New York v. Meta Platforms, Inc., No. 1:25-cv-08282 (S.D.N.Y. Oct. 17, 2025).
[2] Ibid.
[3] Ibid.
[4] 47 U.S.C. § 230(c)(1).
[5] Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997); Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
[6] Gonzalez v. Google LLC, 598 U.S. 617 (2023).
[7] Dyroff v. Ultimate Software Group, Inc., 934 F.3d 1093 (9th Cir. 2019).
[8] Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021).
[9] Restatement (Third) of Torts: Products Liability §§ 1–2 (1998).
[10] In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, MDL No. 3047 (N.D. Cal. Oct. 15, 2024).
[11] Bogard v. TikTok Inc., 725 F. Supp. 3d 897 (S.D. Ind. 2024).
[12] In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, MDL No. 3047 (N.D. Cal. Oct. 15, 2024).