If It Bleeds, It Leads… to Liability Concerns? Gonzalez v. Google, Online Terror, and the Need for Section 230 Reform

On November 13, 2015, terrorists attacked a stadium, a concert hall, and restaurants across Paris, killing 130 people and wounding 494 more. Among the dead was Nohemi Gonzalez, an American student spending a semester at the Strate School of Design in Sèvres. The Antiterrorism Act (ATA) gives American citizens the right to sue for damages caused by acts of international terrorism, and Gonzalez’s family exercised that right, though their suit takes a novel approach to this form of liability.

Gonzalez has argued that YouTube bears liability for radicalizing the Paris attackers through the platform’s recommendation algorithm, but the Ninth Circuit disagreed. On June 22, 2021, the court ruled that under the protections of the Communications Decency Act (CDA), Google, YouTube’s parent company, could not be held liable for speech on its platform, and it affirmed the dismissal of Gonzalez’s claims. The Ninth Circuit was specifically asked to review Section 230(c)(1) of the CDA, which reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This section, dubbed the “Good Samaritan” clause (and colloquially “the twenty-six words that created the internet”), was part of the original act passed by Congress in 1996, and its text has not been revised since. Gonzalez’s case brings to light the drastic changes within the internet industry over the past 27 years and the acute inadequacy of laws written for a different internet in a different time. The Supreme Court’s ruling in this case will have enormous ramifications for the protections that internet platforms like YouTube receive. But even if the Supreme Court sides with Gonzalez, that alone will not be enough; legislative action is required to fully respond to the crisis we face online.

When Section 230 was originally passed, the internet faced a much different crisis than the one the Act has fostered today. In 1996, Congress was pushed to provide protection for fledgling internet platforms by the New York Supreme Court’s ruling in Stratton Oakmont v. Prodigy Services Co. The case found that because Prodigy had the “technology and manpower” to delete offensive and distasteful posts, it could be considered the publisher of that content – more akin to a newspaper than a neutral host. Congress took note and realized that either the industry would fail as platforms expended enormous resources curating every post, or the platforms would become cesspools of the worst that the unedited internet had to offer. Thus, Congress pressed forward with the Section 230 protections, granting internet platforms a special status in which they could curate content posted by third parties without the liability that would follow from classification as a publisher.

Fast forward to 2022: Gonzalez has argued that the liability shield Congress raised nearly three decades ago has been surmounted by the targeted recommendations platforms use to boost the amount of time users spend on their sites. The Ninth Circuit ruled against this aspect of Gonzalez’s argument in particular, reasoning that if Google treated third-party content posted by terrorist propagandists as it would any other third-party content, then the platform’s neutrality was maintained and it was not acting as a publisher. That reasoning, however, is undermined by the actual treatment of content on the platform. A look at the supposedly neutral algorithm’s treatment of other, similarly motivated hate speech (e.g., alt-right creators) makes clear that neutrality cannot serve as a deciding factor in the Court’s ultimate decision. While the idea of the so-called “YouTube rabbit hole” has been largely disproven, there remains the risk that those who hold beliefs outside the Overton window can have their views hardened by consistent recommendations of radical content. The goal of YouTube’s algorithm is to keep people on the website as long as possible, and outrageous and inflammatory content, of course, does exactly that. Even though YouTube has taken steps since 2015 to address grossly offensive content and hate speech on its website, the changes have been applied sporadically and with little success. As recently as 2018, a video describing the Parkland shooting victims as crisis actors went viral and wound up on YouTube’s trending page. The algorithm is not neutral; it advances the goals of its parent company, namely profit through users’ continued time on the website. The Supreme Court would therefore do well to treat the algorithm itself as the publisher in this case. To account for the potentially terrible consequences of its bias toward profit, the Court should accept the arguments put forth by Gonzalez and restrict the ability of tech behemoths to hide behind their own algorithms while claiming to be neutral.

The issue should not stop with this Court’s ruling. The companies employing these algorithms are no longer the garage start-ups that Congress sought to protect in 1996. They are among the biggest companies in the world, due in large part to the enormous protection provided by Section 230. A bipartisan array of legislators has called for reform of Section 230 on those grounds. The current bill in Congress, the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act, would provide much-needed first steps toward reforming the CDA for good. It would give cases like Gonzalez’s ample legal standing for injunctive relief based on wrongful deaths caused by a platform’s failure to restrict content that could cause “irreparable harm.” The Department of Justice has offered further recommendations for reforming Section 230, several of which echo the cyberstalking, child-abuse, and wrongful-death provisions of the SAFE TECH Act now before Congress. Beyond pressuring companies to act on content moderation, the DOJ also advocates making the open-ended wording of Section 230 more explicit to rein in accusations of censorship by big tech. It proposes replacing the catch-all “otherwise objectionable” language in Section 230(c)(2) with “unlawful” and “promotes terrorism,” which would specify precisely what content companies are permitted to censor, leaving no ambiguity in a platform’s ability to restrict the radicalizing messages central to the Gonzalez case. The actions put forward by both Congress and the DOJ demonstrate that this issue is reaching its breaking point and must be addressed. While the Supreme Court should rule in favor of Gonzalez based on the algorithm’s behavior as a publisher, targeted legislative reform is necessary to achieve any permanent justice on this issue.

Edited by: Adam Kinder

Ryan Bolin