The Perils and Promise of AI in Regulatory Enforcement

ACUS recommends best practices for using AI and other algorithmic tools to detect and prosecute regulatory violations.

Federal agencies have increasingly incorporated algorithmic and artificial intelligence (AI) tools into administrative processes to improve efficiency and accuracy. As this change unfolds, the Administrative Conference of the United States (ACUS) has examined various agency processes that employ—or could stand to benefit from—these tools. According to a seminal study written for ACUS by Daniel E. Ho, Mariano-Florentino Cuéllar, and David Engstrom of Stanford Law School and Catherine M. Sharkey of New York University School of Law, the use of AI tools across federal agencies—and the corresponding need for responsible governance and best practices—have grown rapidly.

To help agencies navigate these developments, ACUS has issued recommendations focused on the use of algorithmic and AI tools in retrospective review, notice-and-comment rulemaking, and the provision of legal guidance to the public.

At its 82nd plenary session in December 2024, ACUS adopted Recommendation 2024-5, Using Algorithmic Tools in Regulatory Enforcement, which provides best practices for using AI, predictive analytics, and other algorithmic tools to support agencies’ regulatory enforcement efforts. This recommendation was informed by a report prepared for ACUS by Michael Karanicolas.

Many agencies are responsible for regulatory enforcement—detecting, investigating, and prosecuting potential violations of the laws they administer. As a prior ACUS recommendation explains, agencies often face an ongoing challenge in regulatory enforcement: “assuring the compliance of an increasing number of entities and products without a corresponding growth in agency resources.”

AI and similar tools help agencies meet this challenge by increasing their enforcement capacity, usually without a commensurate increase in enforcement costs. These tools are especially useful for time- and resource-intensive tasks such as synthesizing voluminous records, detecting patterns in complex filings, and flagging activities that may require additional human review.

Still, the use of algorithmic tools in regulatory enforcement presents meaningful risks. As ACUS identified in an earlier statement, Agency Use of Artificial Intelligence, agency AI use poses a variety of risks, including limited transparency, inadequate oversight, harmful biases, and the potential for agency personnel to rely too much on decisions made by automated tools or systems.

President Donald J. Trump has made AI policy and development a key priority, especially concerning trustworthiness, security, safety, and economic competitiveness. At the end of his first term, he issued Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which laid out principles for designing, developing, acquiring, and using AI in the federal government. It also required that agencies create and maintain an AI use case inventory to “identify, provide guidance on, and make publicly available the criteria, format, and mechanisms” for agency AI use. At the beginning of his second term, President Trump issued Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, which emphasized the Administration’s priority to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”

In April 2025, the Office of Management and Budget (OMB) issued two memoranda directing agencies to implement President Trump’s second order and providing implementation guidance for both orders. OMB outlined best practices such as incorporating AI and similar tools into agency operations to improve public-facing services and acquiring AI tools responsibly. In both memos, OMB stated its goals for federal agencies’ deployment of AI: promoting innovation, economic competitiveness, and national security while maintaining strong safeguards for civil rights, civil liberties, and privacy.

Other components of the federal government have also issued guidance and best practices applicable to a wide range of federal agency AI use. For example, the Office of the Director of National Intelligence, the General Services Administration, and the National Institute of Standards and Technology have each issued frameworks, best practices, or principles to guide responsible agency use of AI and other algorithmic tools. These documents emphasize ethical practices, transparency, and institutional accountability.

In line with these various mandates and guidance documents, ACUS’s Recommendation 2024-5 provides a framework for using AI and other algorithmic tools in regulatory enforcement. It emphasizes the need to administer laws with efficiency, accuracy, and consistency while safeguarding rights, civil liberties, privacy, and equitable access to government resources and services.

To achieve these objectives, the recommendation identifies factors agencies should consider when deciding whether to use an AI tool; best practices for risk management and mitigation, including establishing oversight mechanisms and data quality controls; best practices for enhancing transparency and accountability surrounding the use of, and determinations made by, AI tools; and best practices for maintaining public trust, such as establishing corrective action processes and opportunities for engagement by affected parties.

As agencies develop more advanced tools or capabilities and use them to streamline their regulatory enforcement processes, they should continue to look to ACUS for best practices. Although agencies differ in mission and structure, ACUS develops its recommendations to be broadly applicable across agencies. For agencies engaged in regulatory enforcement, the practices recommended in Recommendation 2024-5 will help align their use of algorithmic and AI tools with their statutory mandates while safeguarding the public’s interest.

Kazia Nowacki

Kazia Nowacki is the deputy research director at the Administrative Conference of the United States.

The views expressed in this essay are those of the author and do not necessarily represent the views of the Administrative Conference of the United States or the federal government.

This essay is part of a series, titled “Toward a More Accessible and Accountable Administrative State.”