
Experts analyze how federal authorities should manage misleading corporate claims about AI.
Artificial intelligence has become one of the most powerful marketing terms in the modern economy—and one of the most misleading. As firms race to signal that they are AI-powered, federal regulators warn that many of these claims are exaggerated, unsubstantiated, or outright false. In the past year, the U.S. Securities and Exchange Commission (SEC) has brought several enforcement actions against companies that allegedly misled investors about their use of advanced AI systems, a practice the agency has labeled “AI washing.”
AI washing is regulated under federal securities law. The Securities Act of 1933 and the Securities Exchange Act of 1934 prohibit firms from making materially false or misleading statements in communications with investors. The SEC enforces these statutes through antifraud provisions such as Rule 10b-5, which bars deceptive statements made in connection with the purchase or sale of securities, including stocks, bonds, and other investments. The agency also requires public companies to disclose accurate information periodically about core business operations, financial risks, and strategic initiatives.
The surge in investor interest in AI-related products and services creates the risk that unverifiable technical descriptions may deceive even sophisticated market participants. As companies increasingly incorporate—or claim to incorporate—AI into their business models, these longstanding regulatory frameworks operate as critical guardrails against the inflation of technological capabilities.
The application of traditional antifraud tools to rapidly emerging technologies raises critical questions. Scholars debate how regulators should distinguish between permissible corporate optimism and deceptive technical claims, especially when companies adopt AI systems in early or experimental stages of development. Critics also challenge whether existing disclosure rules adequately capture the novel risks posed by AI, such as algorithmic bias, cybersecurity vulnerabilities, and dependence on proprietary data.
Some observers argue that the SEC should issue AI-specific guidance to clarify how companies may appropriately describe their technologies without overstating their capabilities. But others caution that new mandates could discourage innovation by requiring firms to divulge rapidly evolving information that is difficult to evaluate precisely.
How should securities law adapt to prevent AI washing without stifling innovation? The answer to this question, though consequential, remains unclear. Investors rely on accurate disclosures to make informed decisions about where to put their money. Regulators depend on truthful statements to protect markets from fraud. And companies—eager to capture the benefits of emerging technology—increasingly face incentives to claim AI capabilities that they may not fully possess.
In this week’s Saturday Seminar, scholars examine how federal securities law can address AI washing and how regulation can balance investor protection and technological development.
- In an article, Boyuan Li of the University of Florida analyzes corporate statements and employee data, distinguishing companies’ actual AI use from mere rhetoric. Li notes that the SEC has begun taking action against companies making misleading claims about their AI capabilities, reflecting growing regulatory concern over how businesses represent their AI use to the public. Li observes that AI washing thrives in a fast-moving digital environment where company claims are difficult to verify, and hype can reward bold statements regardless of accuracy. Li calls for enhanced regulatory scrutiny of AI-related corporate disclosures.
- Companies are increasingly acknowledging AI risks in their mandatory public disclosures, but their warnings are often too vague to be meaningful, Lucas Uberti-Bona Marin and several coauthors from Maastricht University contend in a forthcoming article. U.S. securities law requires public companies to file annual reports disclosing significant risks to investors, the Uberti-Bona Marin team explains. After analyzing over 30,000 such filings, Uberti-Bona Marin and his coauthors found that the percentage of companies mentioning AI risk in their disclosures increased from 4 percent in 2020 to 43 percent in 2024. Most of these disclosures, however, lack detailed plans to address AI risks, Uberti-Bona Marin and his coauthors observe. The Uberti-Bona Marin team recommends that regulators push companies toward specific, actionable disclosures.
- The SEC should require companies to disclose material AI risks using the same framework it applied to cybersecurity risks in 2023, argue Ilan Strauss and several coauthors from the Social Science Research Council in a working paper. Drawing on over 7,800 corporate filings, the Strauss team finds that roughly two-thirds of corporate AI disclosures emphasize benefits but fail to include significant risks, such as systematic failures and service outages. Strauss and his coauthors propose four reforms: SEC guidance clarifying what constitutes a material AI risk; an AI-incident reporting requirement on disclosure forms; an AI governance section in annual filings; and active enforcement against AI washing.
- In an article in the Buffalo Law Review, Chen Wang of the UC Berkeley School of Law critiques the SEC’s recent proposal to require broker-dealers and investment advisers to eliminate or neutralize—rather than simply disclose—conflicts of interest created by their use of AI. Wang argues that the SEC proposal departs from the agency’s traditional focus on transparency and informed consent, and contends it risks over-regulating AI by defining “covered technology” and “conflict of interest” too broadly. Wang warns that overly restrictive requirements could slow innovation and reduce market efficiency. Given the complexity of AI systems, she advocates instead for a regulatory framework centered on transparency, investor choice, and targeted enforcement.
- In an article in the Ohio State Law Journal, Tom Lin of Temple University Beasley School of Law examines how AI-generated misinformation can be used to manipulate financial markets. Lin notes that AI tools make it much easier for people to create and spread market-moving information, including false or misleading content. Realistic AI deepfakes could be used to undermine public confidence in financial markets, Lin warns. Despite recognizing the challenges of governing technology, Lin argues that regulators should use stronger enforcement tools and clearer penalties to encourage better risk management when companies use AI.
- Although AI washing can bring companies short-term benefits, it can also damage their reputation, erode consumer trust, and misallocate digital resources, argues Nelly Elsayed of the University of Cincinnati in a working paper. Elsayed compares AI washing to greenwashing—when companies exaggerate or falsely claim that their products or practices are environmentally friendly to appeal to investors, regulators, or the public. She contends that AI washing carries similar ethical and economic risks, including consumer backlash and potential regulatory penalties, which can reduce a company’s market value and stakeholder confidence. To address these concerns, Elsayed recommends that researchers develop standardized frameworks for measuring AI washing and that companies adopt transparency and accountability mechanisms, such as third-party audits and ethical reporting guidelines.
The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.


