Understand the Internet’s Most Important Law Before Changing It

Before deciding whether to modify liability protections for internet companies, policymakers need more information.

When I started writing a book about an arcane internet law more than three years ago, I never could have predicted the controversy that I would encounter.

My book, The Twenty-Six Words That Created the Internet, tells the history of Section 230 of the Communications Decency Act, a 1996 law that protects online platforms from liability for many types of user content. The most important 26 words of Section 230 state: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Those words have fostered the revolutionary business models of Facebook, Wikipedia, Twitter, and many other vehicles for free speech. In some cases, the 26 words have also protected platforms that turned a blind eye toward, and even enabled, defamation and other serious harms.

Once my publisher announced my book, still months before its release in April, I received emails, tweets, direct messages, and even phone calls from people who were furious about the book’s title and what they assumed to be my stance on Section 230. Wait to read the book, I suggested. Suffice it to say, many of my critics did not welcome that suggestion.

The debate has become only more inflamed over the past year, as platforms have drawn mounting criticism for both over-moderation and under-moderation. My experience over the past year has taught me that Section 230 is no longer an obscure technology policy, but a hotly debated law that some believe is essential to online free speech, and others believe allows some of the nation’s most prosperous companies to recklessly endanger their customers.

Many people hold strong views about Section 230’s future, yet these views are not always supported by solid facts. That needs to change. As online platforms play an increasingly central role in daily life, policymakers need to understand better how and why the platforms moderate harmful content. They also need to understand the role that federal law can play in making the internet safer while maintaining the free speech that has defined the modern internet since its infancy.

The criticisms of Section 230 vary, and, in some cases, they contradict one another. Some critics argue that dominant platforms such as Twitter and YouTube are biased against particular political viewpoints and censor the people who hold them.

A second group claims the online giants are not moderating enough. They point to the widespread social media dissemination of a video of the Christchurch, New Zealand, shooting, foreign propaganda intended to influence U.S. elections, and other harmful content.

A third camp—the Section 230 absolutists—argues that the status quo is fine, and that even minor changes to Section 230 will cause the internet to collapse.

All of these groups may have some valid points. We do not know with certainty, because we have so little information about how platforms actually are moderating and what else they could be doing. Internet companies have long been secretive about their user content operations, though fortunately they have started to provide more public data about their practices as they face more scrutiny. And until recently, policymakers had not devoted much attention to how these moderation decisions affect users. Perhaps most importantly, moderation and Section 230 are complicated and not easily explained in soundbites.

I do not think that changing Section 230 would necessarily upend the internet as we know it. But there is a good chance that even small adjustments could have big impacts, so any changes to this important law must be deliberate and informed. It is hard to be informed without much information.

To inform the debate, Congress should create a commission with broad authority to gather facts about platforms and moderation. Congressional commissions have informed the debates about national security, financial sector reform, cybersecurity, and many other crucial issues of the day. Internet companies play such a central role in our daily lives that Congress should take a similarly thoughtful approach when evaluating the legal playing field.

A new commission should attempt to answer many important questions, including: How do platforms develop their moderation policies? Who reviews decisions to block particular users? How effective is artificial intelligence-based moderation? What could platforms do to improve their moderation? How does moderation differ across companies?

The members of a new commission should be experts in the complex legal and technological issues that surround Section 230 and platform moderation. They should represent all views and stakeholders, such as victims’ advocates, state law enforcement, the technology sector, and civil liberties groups. Most importantly, the members should arrive with open minds and a desire to inform the debate with facts.

Once it has a better grasp on what platforms are—and are not—already doing to fairly moderate objectionable content, the commission could then recommend what changes, if any, Congress should make to Section 230 and other laws that affect online speech. Section 230 is too important for its future to be determined by hyperbole and anecdotes.

Section 230 created the internet as we know it today. Now we must decide how we want the internet to look for the next 20 years. A levelheaded examination of the complex technological and legal issues is essential for that decision.

Jeff Kosseff

Jeff Kosseff is an assistant professor in the U.S. Naval Academy’s Cyber Science Department. 

The views expressed belong only to Jeff Kosseff and do not represent the Naval Academy, the Defense Department, or the Department of the Navy.