Punting Social Media Company Liability to Congress

The Supreme Court affirms social media companies’ business models, dodging for now the issue of liability for harmful speech.

When the U.S. Supreme Court granted certiorari last year in two cases involving the liability of social media companies for harmful speech on their platforms, it looked as if the Court planned to alter, perhaps radically, the future of the internet. Although both cases involved harms connected to acts of international terrorism, they held the potential to restructure the incentives, business models, and design possibilities that shape online speech more broadly.

Those cases—Twitter v. Taamneh and Gonzalez v. Google—were recently decided by the Court. To the surprise of some observers, the Court declined to reshape the legal risks for tech companies. Instead, the Court punted the key issues presented to Congress, leaving the law and economic incentives around harmful speech on the internet approximately where they had been.

This is not to say that the cases or their resolution are unimportant for the law, free speech, or other pending cases. On the contrary, how the Court dodged the key questions in these cases is consequential—as is what the decisions may mean for the future.

Gonzalez and Twitter addressed distinct but intertwined questions. The case with larger potential implications for the future of the internet and its business models, Gonzalez, involved the scope of Section 230 of the Communications Decency Act. That provision extends immunity to internet platforms for liability that might otherwise arise from their publication of others’ content, not only under the Antiterrorism Act but under other state and federal laws as well. Section 230 is often dubbed “the 26 words that created the internet” due to its pivotal role in shielding fledgling internet companies from liability, allowing them, and the attention-based business models on which they rely, to flourish. For this reason, Gonzalez had the potential to fundamentally rejigger the legal parameters of tech responsibility.

Twitter raised the narrower but critical issue of whether Twitter, Facebook, and Google—the latter of which owns YouTube—could be held liable for aiding and abetting international terrorism under the federal Antiterrorism Act because their algorithms recommended ISIS-related content or because they provided a means of communication or recruiting connected to acts of international terrorism.

After argument, it appeared that the justices were hesitant to upend the status quo. Writing for a unanimous Supreme Court in Twitter, Justice Clarence Thomas, who had earlier called on the justices to rein in Section 230 immunity, found an off-ramp that allowed the Court to avoid addressing that provision at all.

The Court in Twitter held that companies are not liable for aiding and abetting terrorism merely because they offer social media platforms to the public at large, even if bad actors such as ISIS use those platforms for nefarious ends. The Court also held that those companies’ use of engagement-based algorithms to recommend content to users does not meet the standard for aiding and abetting liability, even if those algorithms promote ISIS content to interested parties. For that reason, the Court declined in Gonzalez to rule on whether a platform would be immune from liability under Section 230.

One of the most interesting parts of the opinion in Twitter is the Court’s embrace of engagement-promoting algorithms as a neutral and necessary component of a service offered to the public at large, much like cell phones. This conclusion reflected a core line of the justices’ questioning at oral argument. Many expressed concern about whether it is possible to draw a line between the ways that YouTube’s algorithm and design choices suggest ISIS videos to certain users and every other way that a platform, by practice and necessity, organizes content. That line-drawing issue is critical because, if organizing content alone constitutes a platform’s speech, rather than a third-party user’s speech, Section 230 would become a dead letter.

Avoiding that result, the opinion drew a distinction between an intentionally pro-ISIS algorithm—or other intentionally pro-ISIS action taken by a platform—and a customer-interest-targeting one, which aims to increase profits and usage and defines the attention-based economy, even if it has the effect of prioritizing ISIS content. Offering the public a communications platform that promotes content based on customers’ interest and data on their engagement is too attenuated for aiding and abetting liability to attach, the Court concluded.

Justice Ketanji Brown Jackson also wrote a short concurring opinion stressing that the decision was tied to the specific facts alleged and that “other cases presenting different allegations and different records may lead to different conclusions.”

The Court notably took a similar approach last month in United States v. Hansen, a case involving a free speech challenge to a federal law that criminalizes “encouraging or inducing” an immigrant to come or remain in the United States unlawfully. As in Gonzalez and Twitter, the Court used the law of secondary liability to avoid major questions involving the effects of harmful speech.

At the same time, a ruling that narrowed Section 230 immunity in Gonzalez, especially paired with a decision permitting liability in Twitter, would have created a roadblock for a later court interested in upholding the Texas and Florida laws that aim to fight the platforms’ alleged censoring of conservative viewpoints. Those laws are now being challenged on First Amendment grounds in the NetChoice cases, which are likely to come before the Supreme Court next term. Creating more liability for the platforms in Gonzalez and Twitter would have required the companies to increase their content moderation, while the Texas and Florida laws, which include must-carry provisions, require the platforms to moderate less, setting up a possible tension between state and federal law. The Court’s dodge in Gonzalez and Twitter allows it to write on a relatively blank slate in the NetChoice cases.

Are the justices pulling their punches in anticipation of the NetChoice cases, which could financially sink social media companies or force them out of certain states? Does the Court’s hesitation to make big speech-altering moves in Gonzalez and Twitter, as well as in Hansen, instead suggest that the Court has become more cautious about big moves generally, perhaps on account of increased scrutiny and legitimacy concerns? Or has the Court simply started to grapple with the inherent tensions between property rights and speech libertarianism as it begins to chart a new, perhaps less speech-protective, course? We will have to wait and see.

What is certain is that Twitter and Gonzalez are the beginning, not the end, of the brewing conflicts among harmful speech, attention-based business models, free speech principles, Congress, and the courts.

Amanda Shanor is an assistant professor at the Wharton School of the University of Pennsylvania.

This essay is a part of a nine-part series entitled The Supreme Court’s 2022-2023 Regulatory Term.