The Supreme Court and the Future of the Internet

Upcoming Supreme Court decisions may reshape the legal risks facing tech companies.

This term, the U.S. Supreme Court heard arguments in two cases that have the potential to remake the internet as we know it. Those cases, Gonzalez v. Google and Twitter v. Taamneh, address whether internet platforms can be held liable because their algorithms recommended content or because they provided a means of communication or recruiting connected to acts of international terrorism.

These cases will define the scope of platform immunity and responsibility for harms from speech on their platforms. And because platforms serve billions of users, the project of content moderation is vast; meanwhile, speech online—such as disinformation, hate speech, and efforts to destabilize democracy—now presents widespread and profound harms. Because the Court’s decisions may rejigger the legal parameters of tech responsibility, Gonzalez and Taamneh have the potential to remake the incentives, business models, and design possibilities that will structure the future of speech online.

This essay lays out the stakes of those cases, their central arguments, and their broader implications. It also attempts to read some tea leaves, based on the over five hours of oral argument held in the cases, as to how the Court may, or may not, alter the future of the internet.

The first case, Gonzalez, involves the scope of Section 230 of the Communications Decency Act. Section 230 provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In plain language, Section 230 makes internet platforms—such as social media and search engines—immune from liability that would otherwise spring from their publication of others’ content. This includes liability for state law torts such as defamation, products liability, and negligence claims.

Passed by Congress in 1996, Section 230 is often described as “the twenty-six words that created the internet.” At its inception, Section 230 immunity shielded fledgling internet companies—a protection that was arguably critical to the flourishing of the internet and online speech that have defined the information age. But nearly 30 years after its passage, the provision has come under intense criticism. The internet’s nascent phase is long past. And critics argue that Section 230 immunity is no longer needed to protect the giant companies such as Google and Facebook that now dominate the internet. Rescinding or limiting Section 230, the critics maintain, would provide much needed incentives for platforms to better deal with harmful speech online.

Gonzalez was brought by the family of Nohemi Gonzalez, a 23-year-old American student who was killed in an ISIS attack in Paris while she was studying abroad. The plaintiffs argue that Google, the parent company of YouTube, aided and abetted the death of their daughter, in violation of Section 2333 of the Antiterrorism Act. Specifically, the plaintiffs allege that Google helped facilitate ISIS recruitment by recommending ISIS videos to YouTube users. Google contends that Section 230 shields it from that lawsuit.

The critical question in Gonzalez is whether Section 230 extends to a platform’s algorithmic organization, prioritization, and suggestion of content. For example, if you watch a cooking video on YouTube, it might offer you thumbnails and links to other cooking videos its algorithm predicts you might like, based on your past views and other data about you. Similarly, if you do a Google search, some links will be on the first page, and others on the 100th or 1,000th—increasing the likelihood that you click on the former rather than the latter. The Gonzalez family argues that these sorts of suggestions, targeted prioritizations, and efforts at amplification are created by the platforms, not by third-party content creators, and so liability arising from them is not barred by Section 230.

The second case, Taamneh, was brought by the family of a Jordanian citizen killed in an ISIS attack in Istanbul. The family aims to hold Twitter, Facebook, and Google liable for aiding and abetting international terrorism under the federal Antiterrorism Act due to ISIS’s use of the companies’ platforms. Even though this case focuses on terrorism, it bears on the platforms’ potential liability for harms stemming from use of their platforms more broadly.

When the Supreme Court agreed to hear Gonzalez, many legal experts assumed that the Court planned to narrow Section 230 immunity. Federal district and appellate courts had been relatively consistent in treating Section 230 as a broad bar to liability for algorithmically mediated content. And strong cross-ideological criticism of the breadth of immunity and power enjoyed by the major tech platforms was gaining traction. Justice Clarence Thomas has even explicitly called on the Court to rein in Section 230 immunity.

Nevertheless, the oral arguments in Gonzalez and Taamneh suggest that the predicted narrowing of Section 230 immunity is far less likely. The attorneys for the plaintiffs in both cases received a cool reception. Across ideological lines, the justices appeared skeptical of the argument that YouTube’s algorithmic ordering, prioritization, and suggestion of videos falls outside of Section 230 immunity.

The justices’ questions suggest two core concerns. First, they doubted whether it is possible to draw a line between the ways that YouTube’s algorithm and design choices suggest ISIS videos and every other way that a platform—by practice and necessity—organizes content. In Gonzalez, the justices repeatedly pressed the plaintiffs’ attorney for a way out of that line-drawing problem, but he provided none. Without such a distinction, a ruling for the plaintiffs would make Section 230 a dead letter and dramatically alter the economic underpinnings of the internet.

With respect to this line-drawing concern, multiple justices also sought a way to differentiate between, on the one hand, an algorithm that intentionally prioritizes ISIS content or discriminates on the basis of race in job placement and, on the other, individualized targeting based on interest and data, which aims to increase profits and usage and is the hallmark of today’s attention-based economy. There appeared to be majority support among the justices for distinguishing between an intentionally pro-ISIS algorithm and a customer-interest-targeting one that has the effect of prioritizing ISIS content.

Second, several justices seemed to be exploring alternatives to deciding Gonzalez. They appeared skeptical that altering the scope of Section 230 was the Court’s job, rather than Congress’s. “I think that that’s my concern,” Justice Elena Kagan said to Gonzalez’s counsel. Her elaboration included one of the most memorable lines of this year’s arguments:

I can imagine a world where you’re right that none of this stuff gets protection. And, you know, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? A little bit unclear. On the other hand, I mean, we’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet … Maybe Congress should want that system, but isn’t that something for Congress to do, not the Court?

Suggesting another way out, Justice Amy Coney Barrett inquired if the Court could avoid the Section 230 issue if it ruled against the plaintiffs in Taamneh. But this potential off-ramp seemed less viable after the Taamneh arguments the following day, which appeared only to increase confusion and division among the justices, rather than to clarify their views.

In some sense, Taamneh is about the related question of the scope of proximate cause in the digital age—something of a modern Palsgraf v. Long Island Railroad, a prominent case addressing how foreseeable a plaintiff’s injury must be, in a causal chain, for a defendant to be liable for causing it. Taamneh, in comparison, asks what level of involvement in enabling an act of international terrorism is required for a platform to be held liable. The ultimate question is what should limit secondary liability, especially “in a world in which we’re all and everything is interconnected, all acts touch on one another, there’s some butterfly effect anywhere,” as Justice Neil Gorsuch observed. As in Palsgraf, there exists some point at which the chain of causation, and so culpability, is too attenuated, and the hard question before the justices is how to assess the new sorts of links that platforms provide.

The argument in Taamneh centered on two issues—knowledge and substantiality of support.

The first issue is about the granularity of knowledge the platforms would need to have to be liable: Is it sufficient for a firm’s managers to know broadly that ISIS uses the firm’s platform and that ISIS is a group engaged in international terrorism? Or does a platform company need to know specifically that certain accounts or particular people were planning a terrorist attack?

The second issue relates to the extent and effect of the platforms’ involvement: How substantially do a platform’s services need to aid the terrorist group or act? And, to amount to aiding and abetting, does the aid have to further the terrorist act that harmed the plaintiff?

Mapping out these issues and others, the Court canvassed a wide range of possible ways to define the scope of liability, including some so broad as to sweep in a platform that was generally aware that ISIS used its services and that ISIS was a terrorist group.

After the close of both the Gonzalez and Taamneh arguments, it appears more likely that the Court might adopt a broad reading of Section 230 immunity and explicitly punt the issue to Congress. Even Justice Thomas, who had previously called on the Court to narrow the scope of Section 230, appeared dubious that any line could be drawn between an algorithm suggesting pilaf cooking videos and one suggesting ISIS videos. He expressed serious doubt that “something that is standard on YouTube for virtually anything that you have an interest in suddenly amounts to aiding and abetting.”

Fascinatingly, the oral arguments in the two cases contained next to no discussion of speech or the First Amendment. Only Lisa Blatt—the attorney for Google—and Justice Barrett mentioned online speech in any meaningful way. Blatt made a strong case that Section 230 immunity was essential to the flourishing of free speech online and that limiting it would break the internet as we know it, but her focus was not on the breaking of the modern public square or on the importance of the case to the freedom of speech. Only Justice Barrett asked about the free speech implications of a narrow reading of Section 230 immunity for platform users, rather than for companies: If a user, she asked, retweeted or liked an ISIS video, would that be creating new content, such that the user would not be protected by Section 230?

Despite the lack of discussion of free speech or the First Amendment, the outcome of these cases—as well as the NetChoice suits coming down the pike—could dramatically alter the future of speech online. A narrow reading of Section 230 immunity in Gonzalez, especially if paired with a broad understanding of aiding and abetting liability in Taamneh, would require the platforms to moderate harmful conduct much more than they currently do. And implementing more extensive moderation would force platforms to internalize more of the costs of the harms of the content they prioritize, suggest, and amplify.

Many smaller tech companies do not have the resources to do that and would likely fold, which could curtail both platform innovation and the range of speech online. The tech behemoths, meanwhile, would likely become less profitable and ubiquitous—perhaps not a bad thing, at least from an antitrust perspective. But to avoid liability and make larger-scale content moderation less expensive, they might also purge content with blunter and wider tools that take down more—perhaps much more—of the content that users post online. Blatt, arguing on behalf of Google, suggested that this would turn the internet into a place filled with “only anodyne, … cartoon-like stuff that’s very happy talk,” not a robust forum for public discourse and wide-ranging ideas.

Leaving Section 230 immunity where it stands, however, also has unappealing implications—namely, all the harms and challenges associated with online speech that society faces today. These challenges include threats to democracy, election and health disinformation, hate speech, trolling, stalking, and harassment, as well as the economic and arguably speech-homogenizing forces of the very few platforms that now dominate online tech. Given the Court’s questions during argument, it seems most likely that those issues will be problems for the political branches to resolve—not the courts.

Gonzalez and Taamneh are the beginning, not the end, of these issues. Two equally important cases are in the pipeline and will likely be heard by the Court next year. Both are challenges filed by NetChoice, a coalition of social media companies and trade associations. They challenge state laws—one from Florida and the other from Texas—that aim to fight the major platforms’ alleged censoring of conservative viewpoints. The two cases raise several fascinating and important issues, including questions about the constitutionality of the sorts of disclosures those laws impose.

A few of those issues are worth noting briefly here.

First, as I observe in a forthcoming paper, conservative support for limiting Section 230 immunity and for the laws being challenged in the NetChoice cases marks an important turn in the conservative legal movement away from the sort of property-protective libertarianism that has long held sway. My colleague Elizabeth Pollman has written a fantastic related paper questioning whether a pro-deregulation court is in fact pro-business.

Second, both challenged laws impose a must-carry provision that requires platforms to carry content regardless of its viewpoint. To avoid liability under such provisions, companies would have to moderate content far less than they currently do. This runs directly counter to the plaintiffs’ arguments in Gonzalez and Taamneh, which, if successful, would significantly limit Section 230 immunity and in turn require the platforms to engage in far more content moderation to avoid liability.

Finally, the challenged laws in the NetChoice cases would essentially forbid the platforms from taking down the same content that limiting Section 230 immunity would require them to moderate more aggressively. A set of rulings that prevented most content moderation would deluge platforms with filth and invective, making them essentially unusable. That might sink major social media companies or cause them to pull out of must-carry states, neither of which is likely to promote broader flourishing of speech online.

Perhaps the tension between Gonzalez and Taamneh and the NetChoice lawsuits explains why Justices Thomas and Samuel Alito seemed more open to Section 230 immunity than many anticipated. Might they be saving their big guns for the NetChoice cases? Or does their hesitation instead suggest a Court more uniformly wary of radically remaking the internet—and perhaps also still attracted to libertarian ideals? We will have to wait and see.

Regardless, these cases are just the tip of the iceberg when it comes to the legal and regulatory issues confronting tech platforms, online speech, and their harms. Justice Gorsuch alluded to that fact—presumably in relation to the debut of ChatGPT and Microsoft’s Bing—noting that “in a post-algorithm world, artificial intelligence can generate some forms of content, even according to neutral rules. I mean, artificial intelligence generates poetry, it generates polemics today.”

Evolving technology and its social implications will only raise more, and more important, speech questions for the courts, the political branches, and all of us to face.

Amanda Shanor is an assistant professor at the Wharton School of the University of Pennsylvania.