A New Approach to Understanding Content Moderation

Scholar proposes an alternative First Amendment framework for online platform regulations.

In the wake of Moody v. NetChoice, a 2024 U.S. Supreme Court decision that declined to decide whether laws restricting online platforms’ discretion over which user-generated content to host violate the First Amendment, lower courts continue to struggle with the question.

In a recent article, Kyle Langvardt, a professor at the University of Nebraska–Lincoln College of Law, and Alan Rozenshtein, a professor at the University of Minnesota Law School, argue that the First Amendment's "editorial discretion" doctrine distorts discussions of whether and how policymakers can regulate online platforms, and they propose a new analytical framework.

The editorial discretion doctrine holds that the First Amendment protects media owners' control over what content to carry on their platforms. It follows that if an online platform's decision about whether to host certain user-generated content can be characterized as editorial activity, that decision is constitutionally protected.

Moody arose from a circuit split between the U.S. Courts of Appeals for the Fifth and Eleventh Circuits over whether to treat online platforms as "editors" when they engage in content moderation.

The Fifth Circuit case involved a Texas statute that introduced a neutrality requirement prohibiting online platforms from moderating content based on its viewpoint. Categorizing private platforms’ content moderation practices as “censorship,” the Fifth Circuit held that the Texas statute did not violate the First Amendment because “censorship” is not constitutionally protected speech.

Conversely, reasoning that content moderation constitutes an editorial activity, the Eleventh Circuit held that a Florida statute mandating that platforms carry speech from certain speakers, such as journalists and political candidates, violated the First Amendment.

The Moody decision, however, did not resolve the circuit split.

Rather than addressing the constitutional question of whether platforms act as editors when managing user-generated content, the Supreme Court returned the cases to the lower courts to conduct a more comprehensive analysis of the laws' impact.

The opinions in Moody suggest that the Justices disagree on this issue. Justice Elena Kagan, writing for the majority, reasoned that content moderation practices are editorial choices. In contrast, Justice Samuel Alito questioned in a concurring opinion whether all content moderation practices should be protected by the First Amendment, particularly those implemented by artificial intelligence.

Langvardt and Rozenshtein suggest that lower courts should move beyond the editorial analogy for both doctrinal and normative reasons.

Doctrinally, Langvardt and Rozenshtein explain, the only binding part of Moody is its decision to return the cases to the lower courts. The majority opinion's conclusion that content moderation practices are editorial decisions is mere dicta, nonbinding language in a judicial opinion that does not control lower courts, they underscore.

Normatively, Langvardt and Rozenshtein argue, the complexity of content moderation makes the "editor" analogy an ill-suited framework. They note that content moderation involves a wide variety of mechanisms, such as removing content, deplatforming users, and demoting or promoting certain content, that cannot be characterized as a single expressive activity. This diversity of approaches resists a binary framework that treats platforms as either editors or non-editors, they explain.

Rather than categorically asking whether platforms are "editors," Langvardt and Rozenshtein propose that courts apply a case-by-case, means-ends analysis.

Under this approach, courts would examine both the burden a given regulation places on expression and the government's justification for it, Langvardt and Rozenshtein explain. In particular, they note that courts should balance three distinct sets of First Amendment interests: those of the platforms that host content, the users who create it, and the general public that consumes it.

In terms of platforms' interests, Langvardt and Rozenshtein argue that whether requiring a platform to host certain content would infringe its First Amendment rights depends on factors such as the platform's size and whether it presents itself as a neutral or a curated space. For example, a niche platform promoting a particular viewpoint may have greater expressive interests than a large platform holding itself out as a public square, they suggest.

As for users' interests, Langvardt and Rozenshtein observe that users' First Amendment rights may conflict with those of the platforms. They note that because privately owned digital platforms have become the primary spaces for the general public to express themselves, the platforms potentially exert greater control over expression than the government does. As a result, regulations limiting a platform's First Amendment rights could promote users' free speech rights by ensuring access to privately owned channels of communication, Langvardt and Rozenshtein contend.

When it comes to the general public, its interest as listeners is also protected by the First Amendment, explain Langvardt and Rozenshtein. Because constitutional protection for free expression exists to foster meaningful public discourse, listeners have a legitimate interest in avoiding certain low-value expression, such as hate speech and disinformation, that would otherwise divert their attention from more valuable speech, argue Langvardt and Rozenshtein.

Langvardt and Rozenshtein caution that although regulations limiting platforms' discretion over what speech to carry may promote users' and the public's First Amendment interests, poorly designed regulations could backfire by degrading platform quality.

Texas's and Florida's laws provide two such examples, Langvardt and Rozenshtein conclude. The Texas law, which requires viewpoint neutrality, may suppress speech because platforms may stop hosting content on controversial topics entirely to avoid having to carry speech expressing certain viewpoints. The Florida law, which imposes a must-carry requirement, could drive users away because platforms may become flooded with content that users do not wish to see.