Correcting a Persistent Myth About the Law that Created the Internet

Scholar argues that section 230 of the Communications Decency Act applies to internet platforms regardless of their “neutrality.”

Section 230 of the Communications Decency Act provides online platforms with strong protection from lawsuits arising from third-party content. The law, quietly passed in 1996, is responsible for the business models of social media providers, consumer review sites, Wikipedia, and so many other websites that are central to everyday life.

Section 230’s very survival is at risk, in part due to years of arrogance and missteps from some of the largest online services. Also driving criticism of section 230 is a fundamental misunderstanding of its purpose: the claim that section 230’s immunity only applies to “neutral” platforms.

I spent two years writing a book about section 230. Understanding section 230’s history is essential to informing the current debate about the law. And that history tells us that one of the main reasons for enacting section 230 was to encourage online services to moderate content.

In the early 1990s, online services such as CompuServe and Prodigy were sued for defamatory content that was created by third parties. A state court ruling from 1995 suggested that these services would receive more protection under the First Amendment if they took a hands-off approach to user content.

The prospect of disparate First Amendment protection concerned some members of Congress. At the time, the media was hyping the dangers of pornography that was freely available to children online. To address this problem, the U.S. Senate proposed the Communications Decency Act, which penalized the online transmission of indecent content.

The U.S. House of Representatives wanted to go another route: encourage online services to figure out how best to moderate content, while at the same time shielding the nascent internet industry from regulation and litigation. The solution, drafted by then-Representatives Chris Cox (R-Calif.) and Ron Wyden (D-Ore.) and later codified as section 230, had two main components.

The first component contains 26 words that are at the heart of section 230’s immunity: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The second component, which has received less focus, states that online services shall not be held liable due to “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

By immunizing platforms not only from claims arising from user content, but also from platforms’ good-faith efforts to block access to a wide range of content, including that which is “otherwise objectionable,” Congress made clear that platforms—and their users—rather than courts or regulators, were better suited to determine whether material should stay online.

When the Cox-Wyden proposal came up for a vote in August 1995, as an amendment to the massive telecommunications overhaul bill, there was little criticism.  Those who spoke on the House floor that day made clear that they expected the bill to result in more discretion and moderation by online services.

“We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see,” Cox told his colleagues.

Texas Republican Joe Barton praised the proposal at the time, calling it “a reasonable way to provide these providers of the information to help them self-regulate themselves without penalty of law.” Virginia Republican Robert Goodlatte stated that the legal rules then in place created “a tremendous disincentive for online service providers to create family friendly services by detecting and removing objectionable content.”

The House passed the amendment in a 420–4 vote. Both section 230 and the Senate’s indecency proposal were included in the final Telecommunications Act, signed into law in February 1996. The Conference Report accompanying the bill stated that Congress added section 230 to overturn earlier court decisions that exposed online services to liability “because they have restricted access to objectionable material.” The next year, the Supreme Court struck down the indecency provisions as unconstitutional. Section 230 is all that remains of the Communications Decency Act.

In the more than two decades since section 230’s enactment, courts have read its 26 words quite broadly, allowing multi-billion-dollar businesses to develop around user-generated content. Imagine if Yelp were responsible for every user review, or if Facebook were responsible for every post. Their business models simply could not exist in their current forms.

Whether you think that would be good or bad is an entirely different—and valid—question. I am far from an absolutist when it comes to section 230, particularly after having spent years reviewing devastating cases in which people were harmed, in part due to irresponsible or even malicious platforms. I am conflicted about a number of aspects of section 230, but one thing is clear: Section 230’s applicability never hinged on a platform being “neutral.”

Section 230’s “findings” state that the internet offers “a forum for a true diversity of political discourse.” Nothing in section 230’s history, however, suggests that this goal requires platforms to be “neutral.” Indeed, section 230 allows platforms to develop different content standards, and customers ultimately can determine whether those standards meet their expectations.

There also is a practical problem with expecting platforms to be “neutral.” What would neutrality look like in practice? I do not know. There is a wide swath of speech that some might consider legitimate political discourse, even though others might consider that same speech hateful. As Kate Klonick, Sarah Roberts, Tarleton Gillespie, and other scholars have persuasively demonstrated, moderation is incredibly difficult from both a technological and a human-labor perspective. Adding a requirement that platforms be “neutral” would not make this task any easier.

Of course, the current system is not without its flaws. Until the past few years, platforms were not nearly transparent enough about precisely how they made important judgment calls to remove content, and I believe they should be more forthcoming. Their decisions effectively determine whether a person can speak freely online.

Moreover, such market-based determinations were easier in the early days of the internet, when platforms did not have the same scale and reach that they do today. It was easier for a user to walk away from CompuServe in 1996 than it is to walk away from Facebook or Twitter today.

My biggest concern with the current system is not too much moderation, but too little. Harmful content continues to proliferate, and despite their stepped-up efforts at both machine learning and human content moderation, platforms will never be able to catch everything.

I doubt many people are entirely happy with the current state of the internet and content moderation. But they must ask what the internet would look like without section 230. Under such a regime, the two most likely options for a risk-averse platform would be either to refrain from proactive moderation entirely and take down content only upon receiving a complaint, or to prohibit user-generated content altogether.

Neither approach would constitute an improvement over the current state of the internet under section 230. Under the first option, vile, hateful, and harmful content would be pervasive, as websites would fear the increased liability that would result from their moderation practices. The second option would render a website more like a traditional newspaper or television station, rather than the public squares that so many users have come to expect.

The next few years likely will determine the fate of section 230, which will shape what the internet looks like for decades to come. Congress could make any of a number of moves. Whatever step it takes, Congress must choose how to proceed only after great deliberation and with a grounding in the history of the law that created the internet that we know today.

Jeff Kosseff

Jeff Kosseff is an assistant professor in the United States Naval Academy’s Cyber Science Department.

The views expressed belong only to Jeff Kosseff and do not represent the Naval Academy, Department of Navy, or Department of Defense.