Should Regulators Have a Chat With ChatGPT?

Scholars evaluate proposed frameworks for regulating ChatGPT and other generative AI.

British Member of Parliament Luke Evans recently delivered a speech that discussed numerous policy issues, including Brexit and health care. But Evans did not write his own speech, nor did he receive help from any other human—ChatGPT wrote it.

ChatGPT is an artificial intelligence (AI) chatbot that responds to user-submitted prompts. The technology is an example of generative AI, which uses algorithms to generate content such as essays, music, and art. Users can converse with the tool by asking ChatGPT a series of questions, to which it can deliver sophisticated and creative responses. In response to user prompts, ChatGPT can write poetry, plan a vacation, or pass law and business school exams.

ChatGPT’s impressive capabilities caused its popularity to explode shortly after its release in November 2022. Within two months of its public launch, ChatGPT had accumulated 100 million monthly active users, setting a record for the fastest-growing consumer application in history.

ChatGPT’s success has spurred growth in generative AI technology. Since ChatGPT’s release, other generative AI tools have emerged, such as Microsoft’s AI-powered Bing search engine.

While the powerful potential of ChatGPT and other generative AI tools has sparked excitement, some experts worry that ChatGPT, which sometimes produces inaccurate responses, may spread misinformation. Other experts have expressed concern that the tool may displace workers.

Some policy experts have urged lawmakers to regulate ChatGPT while the tool is still relatively new to prevent these potential harms. Mira Murati, chief technology officer of OpenAI, the company that created ChatGPT, has also spoken in favor of regulation to prevent malicious uses of the technology. In the United States, the Biden Administration’s Blueprint for an AI Bill of Rights provides principles for responsible AI implementation, including calls for “algorithmic discrimination protections,” but there are currently no binding U.S. regulatory restraints on generative AI. In the European Union, the proposed AI Act aims to regulate the commercial and government use of AI, including generative AI. The AI Act, however, does not address the malicious uses of AI that Murati anticipates.

This week’s Saturday Seminar surveys scholarship discussing the potential harms of ChatGPT and evaluating ways to regulate generative AI.

  • In a working paper, Glorin Sebastian of Google evaluates the cybersecurity risks posed by AI chatbots, including ChatGPT. Although approximately 38 percent of Sebastian’s survey respondents believed that artificial intelligence would significantly increase human efficiency, almost 88 percent feared that chatbots would be used to collect personal information and manipulate users. By providing automated coding and scripting capabilities, chatbots lower the barriers to entry that might otherwise deter would-be cybercriminals from carrying out sophisticated attacks, Sebastian adds. To ensure that AI chatbots are effectively regulated, Sebastian recommends that regulators implement data protection laws that safeguard users’ privacy by governing the collection and use of data generated through AI systems. In addition, regulators could use intellectual property laws to protect against the unauthorized use of copyrighted material by chatbot programs, Sebastian suggests.
  • Existing copyright law should be extended to cover the outputs of generative AI programs, argue Mark Fenwick of Japan’s Kyushu University Graduate School of Law and Paulius Jurcys of Lithuania’s Vilnius University in a working paper. Fenwick and Jurcys acknowledge widespread concerns that the existing regulatory framework may not be fit to govern the rapid growth of AI and its use in building creative works, such as visual art or music. Fenwick and Jurcys suggest that the concept of originality—the legal standard by which courts grant copyright protection only to novel works—could provide the basis for a robust regulatory framework governing the use of AI in the creative field. The alternative—denying copyright protection to AI-generated creative works altogether—is both unnecessary and inadequate, conclude Fenwick and Jurcys.
  • In a working paper, Philipp Hacker of the European New School of Digital Studies analyzes the European Union’s proposed liability frameworks for AI. The Artificial Intelligence Liability Directive (AILD) and Product Liability Directive (PLD) aim to impose liability for harms caused by AI and work together with the AI Act to mitigate potential harm caused by AI, Hacker explains. The AILD imposes fault-based liability to allow victims of AI damage to bring suits, and the PLD proposes strict liability for digital product defects, Hacker argues. The two different liability directives, however, fail to create a “uniform framework for AI liability,” Hacker contends. Hacker argues in favor of a regulatory framework that incorporates a cohesive liability framework for AI developers, which could include safe harbors and subsidized liability insurance for AI creators.
  • In a forthcoming article for the University of California College of the Law, San Francisco Journal, Roee Sarel of Germany’s University of Hamburg evaluates whether ChatGPT should be regulated and, if so, whether restraints should be imposed through public or private law. Legal intervention is economically justified when an activity harms third parties, Sarel argues. Because one societal harm ChatGPT could cause is the spread of misinformation, Sarel contends, legal intervention may be justified. Sarel argues that imposing liability through tort law may be an effective way to prevent harm, depending on the likelihood that victims sue ChatGPT’s creators and on those creators’ ability to pay damages. Sarel concludes that current efforts to regulate AI fail to address incentives and market efficiency, so lawmakers should draft future policy with these law and economics principles in mind.
  • In a recent article published in the Hausfeld Competition Bulletin, Thomas Höppner and Luke Streatfeld explore what ChatGPT and other generative AI systems mean for antitrust and regulatory lawyers in competitive digital markets. Höppner and Streatfeld argue that certain features of generative AI, such as unequal access to proprietary resources, enable providers of AI systems to inflict significant damage on third-party businesses. Platform usage fees and conditions imposed on business users allow further exploitation, leverage, and market entrenchment by AI applications, Höppner and Streatfeld explain. They contend that, compared to the European Union’s regulatory and antitrust laws, the United States is less prepared to meet the challenges of AI because neither the proposed American Innovation and Choice Online Act nor the proposed Open App Markets Act contains specific provisions on AI. This gap, Höppner and Streatfeld note, places American regulators in a weaker position to address concerns about competitive advantage.
  • Both the European Union and the United States suffer from a common misstep in their AI governance processes: targeting immediate rather than long-term risks, argues Noam Kolt of the University of Toronto in a forthcoming article for the Washington University Law Review. Kolt contends that both regions’ regulatory proposals fail to address the potential for future harms, including uncontrolled AI proliferation, malicious AI usage, and the system-wide social and political damage that could follow. Considering AI’s continuing development, Kolt proposes a policymaking roadmap for “algorithmic preparedness” based on five guiding principles. Aimed at mitigating social harm, he explains, these principles include long-term harm mitigation, compilation of a portfolio of diverse and uncorrelated regulatory strategies, scalability, continual assessment, and greater attention to worst-case outcomes in the cost-benefit analysis of AI governance interventions.