A People-and-Processes Approach to AI Governance

New White House directives take a management-based approach to governing artificial intelligence.

Europe appears ahead of the United States in terms of regulating artificial intelligence (AI)—or at least that is the widely held impression. The United States is seen as a bit of a digital “wild west,” having yet to adopt comprehensive legislation regulating either public or private sector use of AI. By contrast, the European Union seems much farther along in bringing under regulatory control the technological advances made possible by machine-learning analysis of “big data,” as the AI Act that the European Commission put forward in proposed form in 2021 now nears its final approval.

With this common perception in mind, it might have seemed disingenuous, if not dishonest, for President Joseph Biden’s Deputy Chief of Staff to declare in late October that the President’s signing of Executive Order 14,110 on “safe, secure, and trustworthy” AI represented “the strongest set of actions any government in the world has ever taken on AI safety, security and trust.”

But technically, Biden’s Deputy Chief of Staff was right, simply because the President’s order was signed before the EU formally agreed to the terms of its AI Act. Moreover, Executive Order 14,110 is decidedly comprehensive in scope. Its 36 pages in the Federal Register call for federal agency action to address potential harm from the use of AI by the private sector, the national security community, and other federal agencies. The order is attentive to a full range of concerns about AI, including safety, transparency and accountability, bias, privacy, and displacement of workers. It requires federal agencies to produce a plethora of reports and guidance documents to address those concerns, as well as, in some instances, to consider adopting new regulations.

Even though it is not legislation, when fully implemented by federal agencies, Executive Order 14,110 appears likely to impose a significant governance framework over AI use in the United States. This is not to say that additional legislation will not be needed, but it is to say that the executive order puts forward a substantial, whole-of-government AI framework. This agency-centered, whole-of-government approach to governing AI makes considerable sense, given the varied uses to which AI tools can be and are applied.

Both the EU’s AI Act and Executive Order 14,110 emphasize two key ingredients needed for the effective governance of AI: people and processes. In various provisions, the executive order evinces this people-and-processes approach to AI governance—or what I have also called a “management-based” approach. This approach demands that private and public sector managers engage in careful planning and assessment during AI tools’ full lifecycle—their design, development, and deployment.

A management-based approach that relies on people and processes is generally warranted when highly varied industries or practices must be regulated because a one-size-fits-all prescriptive fix will not be available. Furthermore, a managerial approach centered on people and processes is especially well suited in regulatory contexts where performance standards and monitoring practices are costly or hard to define. These conditions apply to AI.

The executive order appears to have come none too soon. In addition to the rapid growth in private sector use of AI, a recently updated federal inventory lists more than 700 nonmilitary AI use cases throughout government. Consider just a few examples. AI is powering chatbots to answer questions and facilitate the delivery of government services. AI forecasts tax and regulatory noncompliance to help target federal investigatory resources and support enforcement. And AI is helping manage the quality of agency adjudication. The list goes on. But more to the point, the new inventory represents just a snapshot of an upward trending curve.

Section 10 of Executive Order 14,110 will likely prove to be of the greatest interest to practitioners and scholars of administrative law, as it specifically addresses growing AI use by federal agencies. This section also rightly embraces a people-and-processes, or management-based, approach to AI governance. After all, there is no escaping that “careful decision-making will be needed when administrators are confronted with the choice of whether to replace human processes with digital ones.”

Approximately half of Section 10 is devoted to steps that agencies should take to enhance their human talent pools in the domain of digital analytics. This half of Section 10 is fundamentally the bedrock of the entire executive order—and of AI governance more generally—even though it may be an aspect of the order that administrative lawyers could easily overlook. Whatever standards a legislature or agency might put on the books, they will struggle to take optimal effect without a smart, well-trained, and digitally savvy workforce to implement and enforce them.

The executive order anticipates that agencies will not only need to develop their own in-house capacity to design, develop, and deploy AI responsibly, but that they will also likely acquire many of their AI tools and services by contract from the private market. Even then, the government will need in-house expertise to craft and execute the contracts to ensure that procured AI systems align with agency missions, legal and ethical requirements, and operational needs. Moreover, whether an AI system is developed in-house or procured from outside vendors, the technology will require ongoing monitoring, testing, and evaluation. None of this will be easy. Success will depend on having people in government with sufficient know-how about the responsible use of AI tools.

The Biden Administration should be lauded for recognizing the need to “take steps to attract, retain, and develop public service-oriented AI professionals from across disciplines—including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields.” On the same day that the President signed Executive Order 14,110, the White House launched a newly redesigned AI.gov website that prominently included a dedicated webpage urging tech workers to “Join the National AI Talent Surge”—the digital era equivalent of the iconic Uncle Sam posters urging Americans to sign up for the Army during World War I. Noting that today “the federal government is rapidly hiring talent to build and govern AI,” the new website provides ready links to details and application processes for tech-oriented jobs in the federal government.

The other half of Section 10 approaches the governance of federal agencies’ use of AI by requiring agencies to develop internal processes to ensure AI tools are designed and used responsibly. Covered agencies must designate Chief AI Officers (CAIOs) who are charged with “managing risks from their agency’s use of AI.” (The executive order does not apply to independent agencies and exempts certain national security and intelligence agency functions.) The covered agencies must also establish “Artificial Intelligence Governance Boards” to “coordinate and govern AI issues through relevant senior leaders across the agency.”

In addition, Section 10 instructs the Office of Management and Budget (OMB) to develop “required minimum risk-management practices” for agencies to adopt. This part of Section 10 contemplates required processes for “assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI.” OMB is also responsible for developing AI testing and risk management guidelines for agencies to follow when they rely on outside vendors to develop AI tools.

In conjunction with the release of Executive Order 14,110, OMB issued a separate proposed memorandum that speaks to what will be expected of agencies under Section 10 of the order. OMB has taken the unusual step of inviting public comment on its proposed memorandum.

Given the dynamic nature of AI technology and the considerable stakes it raises, the final OMB memorandum will likely evolve at least somewhat from its current iteration. But as proposed, covered agencies would be required to take proactive steps to track, assess, evaluate, secure, and monitor their use of AI systems.

The OMB memorandum contemplates the use of AI impact assessments, which, if conducted well, can help agencies identify and mitigate many types of problems that could arise with their use of AI. The processes around AI impact assessments are, in a certain sense, analogous to environmental impact assessments under the National Environmental Policy Act, although the particulars of AI-related processes will obviously vary due to differences in contexts and policy objectives.

AI-related impact assessments under the memo would call for agencies to document a proposed AI tool’s intended purposes, anticipated benefits, and potential risks. Agencies would also need to establish processes for vetting and overseeing these assessments through a designated agency office. Assessments would need to be especially rigorous for planned agency uses that would be “safety-impacting” or “rights-impacting,” as defined by criteria specified in the memorandum. For rights-impacting uses of AI, for example, agencies would be required to test proposed AI tools to ensure they do not adversely affect some demographic groups over others.
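To make concrete what such bias testing might involve in practice, the following sketch, written in Python, illustrates one common heuristic for checking whether an AI tool’s favorable outcomes are distributed unevenly across demographic groups. It is purely a hypothetical illustration: the function names, data format, and the four-fifths threshold are assumptions of this example, not requirements drawn from Executive Order 14,110 or the OMB memorandum, and any agency testing regime would be far more extensive.

```python
# Hypothetical sketch of a disparate-impact check an agency might run when
# testing a "rights-impacting" AI tool. Names, data format, and the 0.8
# threshold are illustrative assumptions, not requirements of the order or
# the OMB memorandum.

from collections import defaultdict


def selection_rates(decisions):
    """Compute the favorable-outcome rate for each demographic group.

    `decisions` is an iterable of (group_label, favorable) pairs, where
    `favorable` is True when the AI tool produced a favorable outcome.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favorable[group] += 1
    return {group: favorable[group] / totals[group] for group in totals}


def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the familiar "four-fifths" heuristic, used here
    only as an illustrative benchmark)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: rate / best
            for group, rate in rates.items()
            if rate / best < threshold}


if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(selection_rates(sample))         # {'A': 0.67, 'B': 0.33}
    print(disparate_impact_flags(sample))  # groups below the benchmark
```

In an actual deployment, a check of this kind would be only one element of a much broader assessment, which would also need to examine error rates, data provenance, and the availability of human review.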

Executive Order 14,110 and OMB’s proposed memorandum remind us that even when it comes to the most advanced digital technologies, governance depends vitally on both people and processes. This is especially true when it comes to AI, which is not a single technology amenable to an easy or quick fix. AI assumes a variety of forms. It can be put to many different uses, and it is developing rapidly. AI’s extreme heterogeneity combined with the highly varied nature of its potential problems justify taking a people-and-processes approach to its governance.

Executive Order 14,110 correctly recognizes that an expanded talent pool in the federal government will be needed to manage AI with ongoing vigilance. That order, much like the EU AI Act, also signals that the future of AI governance will be management-based, relying extensively on processes of impact assessment and auditing to ensure that digital tools are designed, developed, and deployed responsibly. Ultimately, how well society can reap the benefits of advancing technological innovation in AI while being protected from its harms will depend on both people and processes.

Cary Coglianese is the Edward B. Shils Professor of Law and Professor of Political Science at the University of Pennsylvania, where he serves as the Director of the Penn Program on Regulation and the faculty advisor to The Regulatory Review.

This essay originally appeared in the Fall 2023 issue of the Administrative and Regulatory Law News published by the Section of Administrative Law & Regulatory Practice of the American Bar Association.