Using Algorithms in Governance

Cass Sunstein

Cass R. Sunstein discusses how algorithmic governance can reduce noise in decision-making.

In a discussion with The Regulatory Review, leading regulatory scholar Cass R. Sunstein explains how algorithms can support human decision-makers by reducing errors and unwanted variability across similar administrative adjudications.

Humans are noisy creatures, Sunstein points out. A judge may wake up with a bad back and rule more harshly. Or a judge may have a soft spot for certain claimants and grant leniency more often. A lottery-like system, with outcomes at least partly dependent on which judge is assigned to an individual case, introduces uncertainty and inequity to a legal system, he explains.

But as Sunstein also notes, algorithms are noiseless. Introducing algorithms into decision-making can bypass the moods and idiosyncrasies that make human judgments vary. Algorithms can thereby decrease errors, reduce unpredictable variability in adjudications, and improve consistency.

Algorithms have both benefits and drawbacks, though. Sunstein emphasizes that classic benefit-cost tests can help identify where algorithms are most efficiently used in governance: they are especially valuable where human decision-makers are error-prone and algorithms are less so. When algorithms are used effectively and perform well, public approval of their role in government decision-making should increase, he suggests.

Sunstein is the Robert Walmsley University Professor at Harvard University and the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. From 2009 to 2012, he served as the Administrator of the Office of Information and Regulatory Affairs within the White House Office of Management and Budget.

One of the most-cited legal scholars, Sunstein is an expert in the fields of constitutional law, administrative law, and behavioral economics. His recent research has focused on algorithmic bias, nudges, and how behavioral science can inform governance. He is the author and coauthor of numerous books, including Noise: A Flaw in Human Judgment.

The Regulatory Review is pleased to share the following exchange with Professor Sunstein.

 

TRR: What is the problem of noise in administrative adjudications?

Sunstein: Noise refers to unwanted variability in judgment. Suppose that people claiming social security disability benefits are subject to a lottery. If different judges make different decisions in the same cases, we have noise. Or suppose that whether you get asylum depends in large part on one thing: Who is the judge? That’s noise. Wherever there is judgment, there is noise, and that is true for the judgments made in administrative adjudication.

 

TRR: What are the main sources of noise?

Sunstein: Noise has different sources. One source is that different judges have different “levels.” Judge Smith might be really strict in giving out benefits or in granting asylum; Judge Jones might be a lot more lenient.

Another source of noise is that different judges show different “patterns.” Judge Williams might be reluctant to give out benefits to people claiming depression or anxiety—but not at all reluctant to give out benefits to people who suffer from chronic pain. Judge Johnson might show the opposite pattern.

Yet another source of noise is mood. Mood matters, even within the person. A judge might be strict when tired or cranky, and lenient when energetic and happy.
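
The three sources Sunstein describes (differences in levels, differences in patterns, and mood) can be made concrete with a small simulation. The sketch below is purely illustrative and not part of the interview; the judges, case types, grant rates, and noise magnitudes are invented assumptions chosen only to show how the three sources combine to produce variability.

```python
# Purely illustrative simulation of Sunstein's three noise sources.
# Judges, case types, and all numbers below are invented assumptions.
import random

random.seed(0)

CASE_TYPES = ["chronic pain", "depression"]

# Level noise: each judge has a different overall strictness or leniency.
JUDGE_LEVELS = {"Smith": -0.20, "Jones": +0.20, "Williams": 0.00, "Johnson": 0.00}

# Pattern noise: judges react differently to particular kinds of claims.
JUDGE_PATTERNS = {
    "Smith":    {"chronic pain": 0.00,  "depression": 0.00},
    "Jones":    {"chronic pain": 0.00,  "depression": 0.00},
    "Williams": {"chronic pain": +0.15, "depression": -0.15},
    "Johnson":  {"chronic pain": -0.15, "depression": +0.15},
}

BASE_GRANT_PROB = 0.50   # grant rate of a hypothetical "average" judge
OCCASION_NOISE = 0.10    # mood: same judge, same case, different day

def grant_probability(judge: str, case_type: str) -> float:
    """Chance that this judge grants this kind of claim on a given day."""
    mood = random.uniform(-OCCASION_NOISE, OCCASION_NOISE)  # occasion noise
    p = BASE_GRANT_PROB + JUDGE_LEVELS[judge] + JUDGE_PATTERNS[judge][case_type] + mood
    return min(max(p, 0.0), 1.0)

# For the same kind of case, the judge lottery alone moves the expected
# outcome (level and pattern noise); mood adds day-to-day variation on top.
for case_type in CASE_TYPES:
    print(case_type)
    for judge in JUDGE_LEVELS:
        daily = [grant_probability(judge, case_type) for _ in range(10_000)]
        print(f"  Judge {judge:<8s}: mean grant rate {sum(daily) / len(daily):.2f}, "
              f"daily range {min(daily):.2f}-{max(daily):.2f}")
```

An algorithm applied to the same inputs would return the same answer every time, which is the point Sunstein makes in the next answer.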

 

TRR: How can algorithms reduce noise?

Sunstein: Algorithms give the same answer every time, so they cannot be noisy. They do not show variability. They do not have bad moods, and they do not get happy. They might be biased, to be sure. They might make big mistakes. But they are really quiet, in that they do not produce unwanted variability.

 

TRR: When are algorithms an appropriate mechanism for use in government adjudication?

Sunstein: Let’s start by distinguishing between algorithms as advisors and algorithms as deciders. The second is more controversial than the first, of course.

To know when to use algorithms, let’s turn to some old friends: the costs of errors, and the costs of decisions. Do algorithms reduce those costs? If you have situations in which you know that human adjudicators are error-prone and algorithms are less error-prone, then you should consider the use of algorithms. If you have terrific algorithms that do not make mistakes about the law, you should consider the use of algorithms.
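
Sunstein's "old friends" can be framed as a simple expected-cost comparison. The sketch below is a hypothetical illustration added here, not anything from the interview; the error rates, error costs, and per-decision costs are all assumed values.

```python
# Hedged, hypothetical illustration of comparing adjudication mechanisms
# by expected error costs plus decision costs. All numbers are assumptions.

def expected_cost(error_rate: float, cost_per_error: float,
                  cost_per_decision: float, num_cases: int) -> float:
    """Total expected cost = error costs + decision (process) costs."""
    return num_cases * (error_rate * cost_per_error + cost_per_decision)

NUM_CASES = 10_000
COST_PER_ERROR = 5_000.0   # assumed harm of one wrong adjudication

human_cost = expected_cost(error_rate=0.15, cost_per_error=COST_PER_ERROR,
                           cost_per_decision=300.0, num_cases=NUM_CASES)
algo_cost = expected_cost(error_rate=0.05, cost_per_error=COST_PER_ERROR,
                          cost_per_decision=20.0, num_cases=NUM_CASES)

print(f"Human adjudication:          ${human_cost:,.0f}")
print(f"Algorithmic adjudication:    ${algo_cost:,.0f}")
# On these assumed numbers the algorithm looks attractive; with a higher
# algorithmic error rate or error cost, the comparison can flip.
```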

 

TRR: How can government agencies increase public approval of their reliance on algorithms?

Sunstein: To answer that question, we need to know exactly why the public disapproves, when it does. One reason might be a belief that algorithms cannot be trusted. If you can show that algorithms perform well, or better than human judges, public approval should increase.

Another reason might be a failure to compare algorithms with human judges. Maybe human judges err a lot, which would increase the public appeal of algorithms. Comparisons are helpful. You might also try to explain why algorithms work, if they do.

 

TRR: How should society decide which values to input into algorithms used by government agencies? For example, how should it decide how much error in either direction is acceptable?

Sunstein: The best level of error is no error at all. If algorithms can eliminate error, fabulous. If they cannot, we might have to decide whether errors of inclusion are better or worse than errors of exclusion. Suppose people are seeking benefits of some kind: Is it more alarming if deserving people do not get them, or if undeserving people do get them? We might want to get some numbers before answering that question. If an algorithm would give benefits to 1,000,000 deserving people and to three undeserving people, maybe that is okay. If an algorithm would give benefits to ten deserving people and 1,000,000 undeserving people, it is not okay.
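
Sunstein's closing numbers can be made explicit with a line of arithmetic. The short sketch below is only an illustration added here: it restates his hypothetical figures as the share of grants that go to undeserving claimants, and it says nothing about errors of exclusion, which would require knowing how many deserving claimants were denied.

```python
# Sunstein's two hypotheticals, restated as error-of-inclusion rates
# among granted claims. The figures are his; the framing is added here.
scenarios = {
    "1,000,000 deserving and 3 undeserving granted": (1_000_000, 3),
    "10 deserving and 1,000,000 undeserving granted": (10, 1_000_000),
}

for name, (deserving, undeserving) in scenarios.items():
    share_wrongly_included = undeserving / (deserving + undeserving)
    print(f"{name}: {share_wrongly_included:.4%} of grants go to undeserving claimants")
```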

 

The Sunday Spotlight is a recurring feature of The Regulatory Review that periodically shares conversations with leaders and thinkers in the field of regulation and, in doing so, shines a light on important regulatory topics and ideas.