
Scholar proposes applying product liability principles to strengthen AI regulation.
In a world where artificial intelligence (AI) is evolving at an exponential pace, its presence steadily reshapes relationships and risks. Some actors abuse AI technology to harm others, but AI systems can also cause harm without any malicious human intent. Individuals have reported forming deep emotional attachments to AI chatbots, sometimes perceiving them as real-life partners. Other chatbots have deviated from their intended purpose in harmful ways, such as a mental health chatbot that, rather than providing emotional support, inadvertently offered diet advice.
Despite growing public concern over the safety of AI systems, there is still no global consensus on how best to regulate AI.
In a recent article, Catherine M. Sharkey of the New York University School of Law argues that AI regulation should be informed by the government’s own experiences with AI technologies. She explores how lessons from the approach of the Food and Drug Administration (FDA) to approving high-risk medical products, such as AI-driven medical devices that interpret medical scans or diagnose conditions, can help shape AI regulation as a whole.
Traditionally, FDA has required manufacturers to demonstrate the safety and effectiveness of their products before they can enter the market. But as Sharkey explains, this model has proven difficult to apply to adaptive AI technologies that evolve after deployment: under traditional frameworks, each modification would require a separate marketing submission, an approach ill-suited to systems that continuously learn and change. To ease regulatory hurdles for developers, particularly those whose products update frequently, FDA is moving toward a more flexible framework that relies on post-market surveillance. Sharkey highlights the role of product liability law, a framework traditionally applied to defective physical goods, in ensuring accountability where static regulations fail to manage the risks that emerge once AI systems are in use.
FDA has been at the vanguard of efforts to revise its regulatory framework to fit adaptive AI technologies. Sharkey highlights that FDA shifted from a model emphasizing pre-market approval, in which products must meet safety and effectiveness standards before entering the market, to one centered on post-market surveillance, which monitors AI medical products’ performance and risks after they are deployed. As this approach evolves, she explains, product liability serves as a crucial deterrent against negligence and harm, particularly during the transition period before a new regulatory framework is established.
Critics argue that regulating AI requires a distinct approach, as no prior technological shift has been as disruptive. Sharkey contends that these critics overlook the strength of existing liability frameworks and their ability to adapt to AI’s evolving nature.
Sharkey argues that crafting pre-market regulations for new technologies can be particularly difficult due to uncertainties about risks.
Further, she notes that regulating emerging technology too early could stifle innovation. Sharkey argues that product liability offers a dynamic alternative because, instead of requiring regulators to predict and prevent every possible AI risk in advance, it allows agencies to identify failures as they occur and adjust regulatory strategies accordingly.
Sharkey emphasizes that FDA’s experience with AI-enabled medical devices serves as a meaningful starting point for developing a product liability framework for AI. In developing such a framework, she draws parallels to the pharmaceutical drug approval process. When a new drug is introduced to the market, its full risks and benefits remain uncertain. She explains that both manufacturers and FDA gather extensive real-world data after a product is deployed. In light of that process, she proposes adjusting the regulatory framework to ensure either that manufacturers return to FDA with updated information or that tort lawsuits serve as a corrective mechanism. In this way, product liability has an “information-forcing” function, ensuring that manufacturers remain accountable for risks that surface post-approval.
As Sharkey explains, the U.S. Supreme Court’s decision in Riegel v. Medtronic set an important precedent for the intersection of regulation and product liability. The Court ruled that most product liability claims related to high-risk medical devices approved through FDA’s pre-market approval process, a rigorous review that assesses a device’s safety and effectiveness, are preempted. This means that manufacturers are shielded from state-law liability if their devices meet FDA’s safety and effectiveness standards. In contrast, Sharkey explains that under Riegel, devices cleared under FDA’s pre-market notification process do not receive the same immunity, because that pathway does not involve a full safety and effectiveness review but instead allows devices to enter the market if they are deemed “substantially equivalent” to existing ones.
Building on Riegel, Sharkey proposes a model in which courts assess whether a product liability claim raises new risk information that FDA did not consider in its original risk-benefit analysis at the time of approval. Under this framework, if the claim introduces evidence of risks beyond those previously weighed by the agency, the product liability lawsuit should be allowed to proceed.
Sharkey concludes that the rapid evolution of AI technologies and the difficulty of predicting their risks make crafting comprehensive regulations at the pre-market stage particularly challenging. In this context, she asserts that product liability law becomes essential, serving both as a deterrent and as an information-forcing tool. Sharkey’s model holds promise for addressing AI harms in a way that accommodates the adaptive nature of machine learning systems, as illustrated by FDA’s experience with AI-enabled technologies. Instead of creating rules in a vacuum, she argues, regulators could benefit from the feedback loop between tort liability and regulation, which allows for some experimentation with standards before the regulator commits to a formal pre-market rule.