Section 230 and AI-Driven Platforms

Scholars examine how a dated law shapes liability for artificial intelligence used by social media platforms.

Grok, an AI chatbot designed by xAI, has recently come under global scrutiny after generating sexually explicit images of nonconsenting users. Efforts to hold the platform liable hinge on the interpretation of Section 230 of the Communications Decency Act.

Section 230 generally shields platforms from civil liability for third-party content. Under Section 230, for example, Meta would not ordinarily be held liable for illegal speech inciting violence posted on its platform by a user.

This traditional application of Section 230 presumes that a user posts content, and the platform acts as an intermediary content host.

However, artificial intelligence does not fit squarely into this user-host dichotomy. AI disrupts the traditional application of Section 230 in two main ways: as a content generator and as a content curator.

First, though a user can prompt specific and novel output, AI-generated content cannot be attributed solely to that user. Nor can the generative-AI (GAI) chatbot be considered the sole speaker, as its training data does not originate from the platform and its outputs depend on user prompts. This ambiguity over who counts as the “speaker” undermines the foundation of Section 230’s speaker-based liability framework.

Second, even when users create content, AI algorithms often determine that content’s reach and impact on the host social media platform. For example, TikTok’s “For You” feed or YouTube’s recommendation system can rapidly amplify particular posts to massive audiences based on users’ predicted engagement with the content.

The assumption underlying Section 230—that platforms act as neutral conduits of information—becomes questionable when platforms actively design and implement recommendation algorithms that promote or suppress speech.

Some platforms, such as X, now use GAI bots as platform moderators. AI moderators like Grok both police and contribute to platform content, as designed by their developers.

Although Section 230 imposes no obligation on platforms to monitor content, the Take It Down Act, a U.S. federal law recently signed by President Donald Trump, imposes liability on platforms that fail to remove nonconsensual intimate images within 48 hours of receiving notice of their presence, with enforcement by the Federal Trade Commission.

In this week’s Saturday Seminar, scholars debate the application of Section 230 to platforms employing generative artificial intelligence or recommendation algorithms.

  • In a Harvard Journal of Law & Technology article, practitioner Graham Ryan warns that GAI litigation will force courts to reevaluate Section 230 immunities afforded to internet content platforms. Ryan predicts that courts will not extend Section 230 immunities to GAI platforms where they materially contribute to the development of content. He contends that litigation will also define whether AI designers are entitled to heightened Section 230 protections and warns that such decisions could unsettle legal precedents beyond AI, potentially reopening questions of social media platform liability. Ryan notes that, alongside broad publisher-immunity cases, newer legal decisions assess liability in relation to a platform’s conduct or design. He urges designers to anticipate this shift through careful data governance and system transparency.
  • In a Yale Law Journal essay, the University of Colorado’s Margot Kaminski and Georgetown University’s Meg Leta Jones argue that crafting regulation based on a new tool’s technical aspects and capabilities risks overlooking the normative value decisions inherent in deploying and regulating any new technology. Kaminski and Jones advocate a “values-first” approach wherein the legal community should define the societal values that regulators and AI designers seek to advance before regulating GAI outputs. They map competing legal constructions that attribute AI outputs to the AI tool, the user, or the developer, demonstrating how each construction’s liability allocation advances distinct normative and social values. Kaminski and Jones conclude that a values-first approach enhances democratic choice and may yield more accountable policy design.
  • In a Yale Journal on Regulation essay, the University of Minnesota’s Alan Rozenshtein argues that Section 230 is deeply ambiguous: Its grants of “publisher or speaker” immunities can be read broadly to bar most suits or narrowly to allow liability for hosting or promoting harmful content. He notes that content recommendation algorithms pose a difficult case, as prioritizing certain content and views is bound up in normative values, and courts may lack the democratic legitimacy to make such values-based decisions. Rozenshtein argues that courts should look to Congress’ intent when interpreting Section 230 while recognizing an ongoing dialogue with Congress. Rozenshtein suggests that judicial interpretations narrowing Section 230 immunities would prompt Congress to clarify its intent, improving accountability and legitimacy.
  • In an article for the Seattle Journal of Technology, Environmental & Innovation Law, practitioner Louis Shaheen investigates the application of Section 230 to content produced by GAI. Shaheen details the legislative history of Section 230, finding that courts have consistently presumed cases should be “resolved in favor of immunity.” Applying the traditional Section 230 framework to GAI, he concludes that the law’s language effectively shields GAI platform defendants from liability because those platforms qualify as interactive computer services with outputs stemming from third-party user input and prompts. Shaheen argues that this conception of Section 230 immunity, though statutorily supported, is both overbroad and harmful, and contends that taking preventive measures against harm should be a prerequisite for receiving the Section’s protections.
  • In a comment for the Washington Law Review, practitioner Max Del Real argues that recommendation algorithms were not contemplated in Section 230, which was drafted before the era of personalized internet engagement, despite courts’ efforts to adapt the rule to modern technology. Del Real presents strategies to negate Section 230 immunity for recommendation algorithms, paving the way for platforms to be held liable for harmful GAI content. He contends that the third prong of the Section 230 immunity test, which asks whether the defendant materially contributed to the creation of the content, offers a stronger basis for overcoming immunity than the second prong, which asks whether the claim treats the defendant as a publisher or speaker.
  • In an article published in the Penn Journal of Philosophy, the University of Pennsylvania’s Veronica Arias argues for a nuanced, flexible application of Section 230 to GAI models. Such an approach, she contends, would avoid regulations that curtail technological innovation while adapting the existing rule to protect platform users. Arias uses the term “black box phenomenon” to describe the issue at the heart of liability roadblocks for GAI: Although developers train their models themselves, the content those models learn from comes from third parties, so the GAI platform, she argues, cannot be considered a speaker. Arias argues that policymakers, rather than courts, should take the lead in examining how Section 230 applies to generative AI, avoiding abrupt judicial rulings that force its application.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.