Reconsidering the Internet’s Liability Shield

Section 230 of the Communications Decency Act faces criticism from across the political spectrum.

The law that “created the internet,” Section 230 of the Communications Decency Act, protects online platforms from suits based on harmful content posted to their sites by third parties. Courts have taken a broad view of what this protection covers, establishing an immunity that shields some of the biggest companies in the world from lawsuits arising from hate speech and other extremist content.

Since Congress enacted the law in 1996, Section 230 has allowed major technology companies to grow with a reduced risk of lawsuits arising from content hosted on their platforms. In recent years, Section 230 barred a defamation claim against Yelp over a negative business review and stopped a lawsuit against Facebook over content that allegedly inspired terrorist attacks by the Palestinian militant organization Hamas.

President Donald J. Trump pushed Section 230 into the public spotlight last year when he called for Congress to repeal the law, the latest episode in a debate that has embroiled both Republicans and Democrats for years.

Criticism of Section 230 spans the political divide, although the critique varies by party. Democratic critics tend to argue that the law “allows tech companies to get away with not moderating content enough,” while Republican critics contend that Section 230 lets companies “moderate too much.” Proponents of Section 230, conversely, defend the law as a key protection for freedom of expression online.

Although President Trump no longer holds office, the future of Section 230 is far from secure. Before taking office, Joe Biden criticized the law, suggesting that it should be “revoked, immediately.” Calls to modify or repeal Section 230 also re-emerged after major tech platforms banned President Trump.

As the U.S. Naval Academy’s Jeff Kosseff has cautioned, however, “there is a good chance that even small adjustments could have big impacts, so any changes to this important law must be deliberate and informed.”

This week’s Saturday Seminar focuses on Section 230 of the Communications Decency Act and how Congress should regulate liability for speech on the internet.

  • In a recent article in the Case Western Reserve Journal of International Law, David Sloss of the Santa Clara University School of Law argues that Congress should modify Section 230 to permit civil lawsuits against social media companies for complicity in mass atrocities such as genocide or crimes against humanity. A duty to remove content that could incite crimes against humanity may prompt social media companies to police that content on their platforms, Sloss claims. In making this argument, Sloss notes several limitations, including difficulties defining which companies the exception would cover, challenges related to the subjective nature of identifying posts for removal, and risks to First Amendment free speech protections.
  • Section 230 affords technology companies “more substantive and procedural benefits than the First Amendment does,” Eric Goldman of the Santa Clara University School of Law argues. In an article in the Notre Dame Law Review Reflection, he claims that defendants have invoked Section 230 in cases where First Amendment defenses may have little impact, such as privacy disputes, intentional infliction of emotional distress claims, and negligence allegations. Arguments suggesting that the two doctrines substitute for each other are misguided because Section 230 protects more types of speech and expression than the First Amendment does, Goldman contends.
  • Danielle Citron of the University of Virginia School of Law and Benjamin Wittes of the Brookings Institution argue that Section 230’s “immunity is too sweeping.” In an article in the Georgetown Law Technology Review, the scholars criticize the categorical immunity that Section 230 offers to otherwise illegal activities “merely because they happen online.” To avoid this injustice, Citron and Wittes encourage Congress to amend the statute to condition immunity “on a service provider taking reasonable steps to prevent or address unlawful third-party content that it knows about.” Such an amendment, they contend, would protect online abuse victims without undermining free speech protections on internet platforms.
  • Section 230’s approach to immunity is not practical for the modern internet, according to the authors of a Fordham Law Review article. Madeline Byrd of Alston & Bird and Katherine Strandburg of the New York University School of Law explain that smart internet services, such as ad targeting systems that rely on “data-driven personalized models of user behavior,” may violate anti-discrimination laws by targeting specific groups. In those cases, Byrd and Strandburg argue, Section 230 effectively gives internet providers immunity for conduct attributable to the company, because the company fully controls platform design. They propose a series of small amendments to Section 230 that would limit service provider immunity in such cases.
  • Congress should embrace the original intent behind enacting Section 230 instead of the “21st century narrative” that the law creates an unfair protection for technology companies, former U.S. Representative Christopher Cox (R-Calif.) claims in a Richmond Journal of Law and Technology blog post. Cox, a coauthor of Section 230, argues that the drafters designed the statute to allow user-generated content to flourish on the internet. The text of the law, however, does not shield internet platforms from liability when they are involved in creating illegal content, he suggests. Policymakers should consider Section 230’s “fundamental principles and purposes” when applying the law to the modern internet, according to Cox.
  • Section 230 may prevent internet companies from finding ethical solutions to long-term problems, Pennsylvania State University’s Benjamin W. Cramer argues in a Journal of Information Policy article. Although Section 230 offers “useful legal protection” to social media platforms, the law also gives these companies an excuse “to avoid any discussion of their own ethical responsibilities,” according to Cramer. He claims that internet companies should prioritize corporate responsibility and citizenship by adopting proactive policies that define the types of behavior barred from their platforms and promote more aggressive moderation of user-generated content. This approach, Cramer proposes, could increase industry competition and reduce the need for further regulation.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.