Chatting About ChatGPT Regulation

Scholar evaluates potential legal frameworks for regulating ChatGPT.

What do law, business, and medical students have in common? They can all use ChatGPT to take their graduate-level exams for them—and apparently pass.

ChatGPT’s powers have elicited concerns over both misinformation and academic cheating, prompting policymakers to call for regulation that would proactively restrain the artificial intelligence (AI) tool in its early stages. Mira Murati, the chief technology officer of OpenAI, the company behind ChatGPT, has likewise acknowledged that government intervention is important to prevent bad actors from abusing the technology.

In a recent article, Roee Sarel of the University of Hamburg questions whether public regulation is the appropriate way to prevent ChatGPT’s potential harms. Instead, he proposes mitigating harms through tort liability and also explores a combined regulatory and tort approach.

ChatGPT is a chatbot tool that uses AI to craft sophisticated, detailed responses to user prompts. Users can ask ChatGPT to write a poem or college admissions essay, for example, and ChatGPT then delivers impressive human-like answers—although sometimes factually incorrect ones.

Sarel argues that legal intervention is economically justified only when there is a market failure caused by externalities, which arise when one party’s activity imposes costs on third parties. ChatGPT could harm individuals, for example, if a lawyer relies on the tool and gives incorrect legal advice to a client, Sarel suggests. He also argues that ChatGPT could harm society more broadly by spreading misinformation.

Sarel then evaluates four factors to determine whether regulation or tort liability should be the legal intervention to correct ChatGPT’s negative externalities.

First, Sarel considers whether regulators or private actors have more knowledge about the activities sought to be restrained. He argues that if AI creators have more knowledge about those activities’ potential harms, then liability is preferable to regulation. Sarel concludes that, although policymakers may have access to technology experts, AI creators overall have a better understanding of the technology’s capabilities. Because ChatGPT’s creators are in the best position to know how to constrain the technology, Sarel contends, the threat of being sued would lead them to prevent future harm.

Second, Sarel questions whether ChatGPT’s creators would be able to pay for the harm caused by the technology. If an injurer does not have sufficient assets to pay, then liability would be ineffective and regulation should be preferred, Sarel argues. As long as ChatGPT’s creators have the resources to pay for the harm, however, liability remains the appropriate intervention, Sarel contends.

Third, Sarel evaluates the likelihood of ChatGPT’s creators facing a lawsuit. He argues that injurers are less likely to be sued when the harm is dispersed, making it difficult to identify a victim with a valid reason to bring a suit. If ChatGPT causes harm by spreading misinformation, Sarel explains, the damage would be so dispersed throughout society that no individual victim would have enough of an incentive to sue the creators. This factor, Sarel concludes, supports regulation over liability.

Fourth, Sarel considers the administrative costs of regulation. Restraining ChatGPT through an AI regulatory agency could be costly and ineffective, Sarel argues, because the technology is used internationally. He explains that if one country imposes regulations on ChatGPT, the creators could simply relocate.

Weighing these considerations, Sarel concludes that the factors likely disfavor regulation, provided that AI creators face a credible threat of lawsuits and have sufficient resources to pay for the harm they cause.

Sarel notes, however, that regulation and liability could be combined. Such a combined regulatory and liability framework resembles the European Union’s proposed approach to controlling AI, Sarel suggests. He explains that the EU’s Artificial Intelligence Act would impose regulations requiring member states to monitor AI, and the Artificial Intelligence Liability Directive would impose tort liability by allowing users to bring lawsuits against AI creators.

But Sarel argues that combining liability and regulation could create legal uncertainty and inefficient incentives. He contends that the EU’s framework may leave victims and ChatGPT’s creators confused about the applicable legal standard. This confusion could lead to over-deterrence or under-deterrence, as well as market inefficiencies, Sarel explains. He argues that the EU’s combined approach fails to articulate a clear legal standard that accounts for potential effects on incentives.

Unlike the EU, the United States does not have a concrete regulatory framework for AI, Sarel notes. In drafting future policy, he concludes, lawmakers need to consider the implications that any legal framework will have for incentives and market efficiency.