
Scholars examine the dangers of difficult-to-understand AI in criminal investigations and cases.
Artificial intelligence (AI) is becoming increasingly prevalent in American society and has even found its way into the courtroom.
In an article, Brandon L. Garrett, a professor at Duke Law School, and Cynthia Rudin, a professor of computer science at Duke University, argue for national and local regulation to ensure that judges, jurors, and lawyers can fully interpret AI used in the criminal justice system.
Garrett and Rudin note a troubling trend toward “black box” AI, which they define as AI models too complex for people to understand. They contrast this with “glass box” AI: models whose calculations are inherently interpretable, allowing people to understand both how the models work and the information they rely on.
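To make the distinction concrete, the following is a minimal, hypothetical sketch, not drawn from Garrett and Rudin’s article, of what a glass box model can look like: a points-based risk score whose every factor and weight is visible to a judge or juror, in contrast to an opaque model whose internal calculations cannot be inspected. The factors and point values are invented for illustration.

```python
# Hypothetical "glass box" risk score: a transparent points-based rule.
# Every factor, threshold, and weight is written out, so anyone reviewing a
# score can see exactly why it was produced. (The factors and point values
# below are invented for illustration only.)

def glass_box_risk_score(age: int, prior_offenses: int, age_at_first_offense: int) -> int:
    """Return an integer risk score from 0 (lowest) to 4 (highest)."""
    score = 0
    if prior_offenses >= 3:
        score += 2          # three or more prior offenses add 2 points
    elif prior_offenses >= 1:
        score += 1          # any prior offense adds 1 point
    if age < 25:
        score += 1          # age under 25 adds 1 point
    if age_at_first_offense < 18:
        score += 1          # first offense before age 18 adds 1 point
    return score

# Example: a 23-year-old with two priors, first offense at 17, scores 3,
# and the reason for each point is plain from the rule itself.
print(glass_box_risk_score(age=23, prior_offenses=2, age_at_first_offense=17))
```

A black box model, by contrast, might combine thousands of learned parameters in ways that neither the defendant nor the court could trace from input to output.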
Garrett and Rudin explain that the use of black box AI has increased rapidly in criminal cases, including facial recognition technology, risk assessment tools that predict whether defendants will reoffend, and predictive policing. They argue, however, that because life and liberty are at stake in criminal trials, judges and jurors must fully understand the AI used in criminal cases. Absent a compelling or credible government interest, Garrett and Rudin explain, substantial constitutional rights and safety interests require that all AI used in the criminal justice system be glass box AI.
Garrett and Rudin note the misconception that there is a tradeoff between black box AI and glass box AI: that black box AI is harder to understand but more accurate than glass box AI. They cite research showing that in criminal law settings, black box AI does not perform better than simpler, easier-to-interpret models.
Garrett and Rudin argue that national and local regulatory measures are needed to ensure that glass box AI, rather than black box AI, is used in the criminal justice system. Garrett and Rudin explain that in Europe, the General Data Protection Regulation (GDPR) provides a “right to explanation” for consumers of AI. A companion directive to the GDPR restricts the use of AI in investigations of possible criminal activity and requires assessments of AI’s risks to “rights and freedoms” and to data privacy. Garrett and Rudin contend that U.S. criminal defendants deserve similar protections from black box AI.
Garrett and Rudin explain that in the United States, multiple groups have called for bans on facial recognition technology, which attempts to identify people by matching an image of a face against a database of collected faces, on the ground that the technology is unfair and unjust. Garrett and Rudin note that although ten U.S. states have passed restrictions on law enforcement’s use of facial recognition technology, none of those state laws mandates the use of glass box AI for facial recognition.
The Federal Trade Commission (FTC) has issued guidance intended to prevent private industry from using AI to engage in unfair or deceptive practices. Although the FTC acknowledges that the datasets used to train AI models may raise privacy concerns, Garrett and Rudin note that the agency has not discussed replacing black box AI approaches with glass box ones.
Garrett and Rudin propose enacting legislation requiring law enforcement agencies that use AI in criminal investigations to use glass box AI. They also contend that lawmakers should enact statutes requiring validation of the data underlying AI used by law enforcement, to ensure that the information and material used to investigate and convict people are valid. Garrett and Rudin note that no such statutes have been introduced in the United States.
Garrett and Rudin explain that the European Union’s Law Enforcement Directive limits AI’s use in criminal cases and emphasizes the need for AI use to respect accountability, fairness, and nondiscrimination. The directive calls for addressing AI risks in criminal cases through verification, transparency, careful testing, and explainability. It emphasizes that all law enforcement use of AI is high risk and should be subject to enhanced oversight.
Garrett and Rudin conclude by noting that in limited circumstances the government may have a compelling case to justify the use of black box AI, such as in national security cases. Garrett and Rudin emphasize, however, that the burden should always lie with the government to establish the existence of such a state interest.