What Do U.S. Courts Say About the Use of AI?

An analysis of state and federal court decisions uncovers standards to guide governmental use of artificial intelligence.

Governmental bodies increasingly rely on artificial intelligence (AI) to handle public functions in a variety of settings, including social services programs, public resource allocation, and enforcement activities. As many scholars have noted, however, AI has also proven unsafe in certain circumstances, revealing embedded patterns of gender, racial, and income discrimination.

In response to concerns about governmental use of AI, President Biden in late 2023 issued Executive Order 14,110, which outlines a series of measures designed to advance equity and protect civil rights in the implementation of algorithmic systems. More recently, the White House sought to address the risks of federal AI use by issuing a directive that calls for federal agencies to designate chief AI officers, develop AI policy compliance plans, and release annual reports on AI use cases.

Meanwhile, courts across the country have already started to address the challenges arising from the use of algorithmic systems by public and private sector entities. Although courts have played a crucial part in providing timely and effective responses to these challenges, their role has too often been overlooked.

In a recent study, I systematically analyze how courts have been dealing with litigation involving the use of AI by governmental bodies, drawing on a sample of cases filed in the United States at both the state and federal level since 2010. The cases were primarily selected from the AI Litigation Database and the AIAAIC Repository.

My analysis of a subsample of 44 AI-related court cases leads to an overarching conclusion: Judicial decisions rest almost exclusively on procedural grounds and, more specifically, center on concerns about due process infringements.

More importantly, my analysis distills six common procedural violations that courts have identified when governmental entities rely on AI. These findings yield a checklist of six minimal requirements that any governmental body should satisfy to shield its use of algorithmic systems from judicial challenge.

Requirement 1: Adequate Notice and Explanation. Algorithmic decisions by governmental actors must be accompanied by clear communication that explains the reasons for any AI-supported decision in terms sufficiently comprehensible to claimants. Notices must offer detailed information enabling affected people to identify potential errors and decide whether to pursue corrective action. Governmental bodies should also provide adequate information about the data and algorithms used to generate adverse decisions.

Requirement 2: Contestability. Algorithmic decisions by governmental actors must be accompanied by a meaningful appeal remedy, allowing individuals to challenge errors or contest adverse decisions. Affected people should receive adequate notice and information about how to contest algorithmic decisions, along with a reasonable time frame for filing appeals. Individuals impacted by adverse algorithmic decisions should also be equipped with sufficient information and assistance during the appeals process.

Requirement 3: Right to Access Information. Affected individuals must be granted the right to access relevant information, including the data and technical information used by governmental actors in making algorithmic decisions. Although the specific amount or type of information that must be made available may vary, the ultimate goal is to ensure that individuals can access enough information to understand and challenge a decision.

Requirement 4: Human Oversight. Affected individuals must be granted the right to human oversight of governmental algorithmic decision-making, which can take two primary forms. In one form, states ensure human involvement during the algorithmic decision-making process itself. In the other, states provide affected individuals the opportunity to present their cases to public employees or to receive an immediate human review of an adverse algorithmic decision before it is implemented.

Requirement 5: Notice-and-Comment Procedures. When the use of an algorithmic system meets the definition of a legislative rule, governmental bodies must respect the informal rulemaking procedures prescribed by any state or federal administrative procedure law. Complying with notice-and-comment procedures ensures public engagement in the deployment of AI systems. Public participation is essential to legitimating algorithmic decision-making, promoting awareness, and enhancing trust in the use of algorithmic systems.

Requirement 6: Assessment Procedures. Governmental bodies must ensure proper assessment procedures concerning the introduction and functioning of an algorithmic system, along with the proper engagement of users and affected stakeholders. In addition, governmental bodies should subject these systems to periodic independent audits to monitor for unintentional biases or erroneous results in algorithmic decisions.

It is worth noting that these six procedural requirements substantially align with the safeguards outlined in Executive Order 14,110 and the recent White House directive on AI use. The checklist that emerges from U.S. case law on AI can also serve as a compass for governmental bodies seeking to ensure future compliance with the standards crafted under the executive order.

Moving forward, governmental bodies should also leverage their bargaining power in procurement procedures to negotiate contractual terms that require private vendor compliance with these requirements. Government contracts should, for example, provide for agency access to information, contractor implementation of assessment procedures, periodic algorithmic audits, and appropriate engagement with public stakeholders.

In resolving real cases involving the use of AI, courts issue decisions that offer invaluable insights into how governmental bodies should approach AI in practice. We should not overlook what the courts have to say in the growing number of disputes revolving around algorithmic tools.

Giulia G. Cusenza

Giulia G. Cusenza is a lecturer at the University of Udine (Italy).