OpenAI Investigation Puts AI Companies on Notice

Font Size:

The FTC’s investigation of ChatGPT may shed light on how consumer protection law applies to artificial intelligence.

Font Size:

ChatGPT is the fastest-growing consumer application in history, reaching 100 monthly active users just two months after its launch in November 2022. Since then, OpenAI, the $29 billion, Microsoft-backed creator of ChatGPT, has gone on a global charm offensive to influence the future of AI policy.

Despite the rapid proliferation of ChatGPT and other generative AI applications, little has been done at the federal level to rein in the technology. In July, however, the Federal Trade Commission (FTC) opened an investigation into OpenAI. According to a 20-page civil investigative demand (CID), the FTC is primarily interested in whether the AI company has run afoul of consumer protection laws.

The FTC has had its sights set on AI for years now—from biased outputs to exaggerated marketing claims. The investigation of ChatGPT, however, could represent an unparalleled level of disclosure from AI’s posterchild, which has so far stayed tight-lipped about the development and maintenance of ChatGPT. And although the FTC’s policy is to conduct its investigations on a nonpublic basis, Section 6(f) of the FTC Act authorizes the Commission to “make public from time to time” portions of the information it obtains during investigations, such as when disclosure would serve the public interest.

Among the FTC’s dozens of questions to OpenAI, most are focused on how the company collects, sources, and retains data, as well as how it trains ChatGPT and evaluates the accuracy and reliability of its outputs, including an explicit request for additional information about OpenAI’s process of “reinforcement learning through human feedback.” The CID also calls on OpenAI to list all data sources, including websites, third parties, and data scraping tools.

The FTC’s investigation specifically hones in on two consumer protection issues: “reputational harm” and “privacy and data security.” From a privacy and data security perspective, the Commission will likely evaluate OpenAI’s data collection practices and data retention policies, such as retaining private consumer information or mitigating prompt injection risks.

The FTC’s interest in OpenAI’s privacy practices is unsurprising. After all, the FTC has brought many privacy and data security cases in the past several years, especially against tech companies.

Cases involving reputational harm, however, are rare.

With respect to ChatGPT, the FTC’s interest in reputational harm is probably connected to large language models’ tendency to “hallucinate” or generate false information. A persistent concern about ChatGPT since its launch is whether the AI chatbot could be an engine for spreading misinformation. ChatGPT has fabricated fake news articles, scientific research papers, and even judicial decisions.

In June 2023, for example, ChatGPT’s hallucination resulted in a defamation lawsuit against OpenAI when it allegedly spread false information about a Georgia radio host, accusing him of embezzling money. Accordingly, reputational harm may be a pathway for the FTC to assess OpenAI’s retraining and refining process as it relates to AI hallucinations as well as procedures for addressing outputs that make “false, misleading, disparaging, or harmful statements” about individuals.

Although there is understandably plenty of excitement around what the investigation into OpenAI will reveal, the investigation has already provided a preview of what AI issues are top of mind for the FTC. The results of this investigation will offer a glimpse into how large language models, and AI technology more broadly, fit into the FTC’s consumer protection framework, particularly through the agency’s “unfair or deceptive acts or practices” lens. In fact, the CID as a standalone document provides a high-level checklist of AI governance practices that likely align with the FTC’s regulatory expectations. The demand document effectively puts other AI companies on notice by outlining what practices and procedures should be in place.

The United States lags behind other countries in AI regulation, especially when it comes to privacy risks associated with the technology. The FTC’s investigation into OpenAI is now the best opportunity to establish guardrails for AI products. The FTC has the authority to impose monetary penalties, but the agency is also empowered to establish requirements to prevent companies from engaging in unfair or deceptive practices.

For example, the FTC’s 2019 settlement with Facebook not only imposed a historic $5 billion penalty but imposed new privacy requirements, such as exercising greater oversight over third-party apps and implementing a data security program. Similarly, in a settlement with Flo Health Inc., the creator of a period- and fertility-tracking app that reportedly sold sensitive health data to third parties like Facebook and Google, the FTC required the company to notify its users about the disclosure of their health information and instruct third parties that received users’ health information to destroy that data.

Opening an investigation does not automatically result in important revelations. Still, this is the closest the public has gotten to peeking behind the curtain of ChatGPT. Depending on how the investigation into OpenAI unfolds, the result could provide an operationalized framework for algorithmic transparency in the United States.

Ultimately, the FTC investigation is a welcome development, particularly when compared to the slower response from U.S. lawmakers, who have voiced challenges in balancing innovation and regulation. Unless we are prepared to have history repeat itself, innovation should not come at the expense of consumer safety.

Patrick K. Lin, the author of Machine See, Machine Do, writes about artificial intelligence, privacy law, and technology policy.