
Scholar examines the danger AI poses to financial markets and offers regulatory solutions.
A fake image of a Pentagon explosion generated using artificial intelligence (AI) triggered $500 billion in stock market losses within minutes. Was the incident an anomaly, or a preview of the future of financial markets?
In a recent article, Tom C.W. Lin of the Temple University Beasley School of Law examines the emerging risks to financial markets created by AI. Although AI will continue to evolve, Lin argues that market manipulation, misinformation, and misconduct require “urgent action.” He proposes solutions that rely on both public regulators and private sector stakeholders to reevaluate and strengthen existing enforcement approaches.
Misleading investors by making artificial changes to the market value of investments—a practice known as market manipulation—did not begin with AI. Traders spread false rumors in 18th-century Amsterdam coffeehouses to inflate stock prices. Today, however, Lin claims AI enables “bad actors” to move financial markets with greater speed and broader reach.
AI, of course, is no stranger to financial markets. Lin explains that the sector has used AI tools in market transactions for decades. AI enables investment firms to execute trades and detect fraudulent transactions at speeds “beyond human capabilities.” It also gives individual investors accessible ways to adjust their portfolios—without relying on human intermediaries.
Analysts estimate that trading accounts using AI perform over 60 percent of all stock transactions in the United States. But Lin points out that the same technology that delivers efficiency, security, and accessibility to financial markets also introduces new vulnerabilities.
One such vulnerability is the “financial deepfake”: manipulated media—images, video, audio, and written content—designed to appear authentic and to be difficult to detect. Investors, including the growing segment of individual “retail” investors, might see manipulated reports about a company’s success and change their investment strategies in response, Lin warns.
Other researchers have reported a 1,000 percent increase in deepfake misconduct incidents between 2022 and 2023. Lin argues that these increases in financial deepfakes could “erode confidence in the integrity of the marketplace,” leading investors to withdraw money from financial markets.
In addition, Lin explains that AI powers autonomous “bots” that amplify misinformation across social media. He warns that malicious individuals equipped with a laptop or a phone can use this technology for free to wreak havoc on individual companies’ share values or the entire marketplace.
According to Lin, AI creates two new systemic threats to market stability: threats that are “too fast to stop” and “too opaque to understand.”
He argues that during market turmoil, AI accelerates volatility faster than traditional market forces, which causes sudden changes in the value and trading volume of investments, often before financial institutions can take preventive action.
Lin explains that AI operates like a “black box,” leaving human programmers unable to understand why it makes particular trading decisions as the technology learns on its own. He emphasizes that traditional corporate and securities laws were designed to oversee individuals with “discernable bad intentions.” Lin argues that these legal frameworks struggle to police AI because black-box algorithms make autonomous decisions without a culpable mental state.
Lin also notes that AI developments will outpace attempts to develop laws, creating “regulatory lags.” He emphasizes that the technology will continue to introduce new principles and capabilities, requiring ongoing legal revisions that each depend on consensus among legislators.
After describing the technology, Lin highlights “resource asymmetries” between regulators and private firms that use AI. He contends that large financial institutions have vast resources to develop innovative AI capable of evading detection, while regulators often struggle with outdated technology, limited budgets, and the loss of experienced employees to the private sector through a “revolving door.”
Recognizing these gaps, Lin proposes “regulation by enforcement.” He argues that the U.S. Securities and Exchange Commission and the U.S. Department of Justice should pursue enhanced penalties against asset managers, brokerages, and similar financial intermediaries for AI-related financial misconduct, while granting leniency to firms that attempt to prevent AI misconduct. A “case-by-case” approach, he contends, offers greater flexibility than traditional legislative processes and encourages financial institutions to manage risk themselves.
Some scholars warn that regulation by enforcement creates too much uncertainty for firms by imposing uneven penalties: the same violations can result in harsher sanctions for some firms than for others, eroding confidence in the fairness of financial markets. Lin acknowledges these concerns, noting that “clear, publicly disclosed guidance” should accompany enforcement incentives and penalties.
He draws on the Justice Department’s sentencing guidelines, which assign a “culpability score” to determine the severity of corporate penalties. He argues that a similar framework—where aggravating factors increase culpability and robust compliance programs reduce it—could motivate financial intermediaries to invest in compliance programs that detect and prevent AI misconduct.
Shifting his focus from firms to individuals, Lin urges the private sector to promote passive, long-term investing among retail investors. He notes that this strategy encourages diversification, which spreads investments across a variety of assets and shields retail investors from the manipulation of any single stock, all without the need for government action.
Lin concludes that regulators can incorporate AI into more traditional financial regulation tools, such as stress tests, which evaluate financial institutions’ ability to withstand economic disruptions. Regulators can modify these tests to reveal how firms would respond to inaccurate data generated by AI.
A workable guideline, Lin argues, will maximize the social promises of artificial intelligence without destabilizing the marketplace.