A Right to a Better Decision

Public preferences for human decisions may give way in time to calls for governmental decisions made by artificial intelligence.

In power centers around the world, policymakers, judges, and lawyers are grappling with the question of what role humans versus machines should play in making governmental decisions. This moment of collective reflection makes as timely as ever an important law review article written by legal scholar Aziz Huq entitled, A Right to a Human Decision. Huq analyzes possible normative justifications for such a right and finds each wanting. He suggests that, instead of insisting on a right to a human decision, we should insist on a right to better decisions—whether by humans or by machines.

Over the last year, calls for a right to human decisions have grown strikingly palpable, as ChatGPT and other large language models have demonstrated remarkable proficiency at tasks long performed only by humans. Lawyers have especially taken note because the version of ChatGPT released in March 2023 passed the uniform bar exam all on its own—and at the 90th percentile.

It was thus hardly surprising that the Chief Justice of the U.S. Supreme Court devoted a key portion of his year-end 2023 report on the federal judiciary to how artificial intelligence (AI) is poised to change the work of courts dramatically. He went out of his way to predict that, even in a world of advancing AI technology, there will continue to be a role for human judges. In doing so, he came close to claiming that people possess a right to a human decision.

The White House made this claim even more explicitly in 2022, in a 73-page document entitled a “Blueprint for an AI Bill of Rights.” This Blueprint declared, among other things, that individuals “should be able to opt out from automated systems in favor of a human alternative”—at least “where appropriate.”

Overseas, the law explicitly enshrines a right to a human decision. Article 22 of the European Union’s (EU) General Data Protection Regulation provides that individuals “have the right not to be subject to a decision based solely on automated processing.” Furthermore, the recent adoption of a new EU AI Act will make some uses of AI by European governments categorically off limits.

These developments make practically salient the question analyzed in Huq’s impressive scholarly article: Should individuals have a right to human decisions by their government?

Although artificial intelligence technology has changed dramatically in the time since Huq published his analysis in 2020—and AI still keeps changing rapidly—his analysis remains the most significant, systematic consideration of the range of possible justifications underlying the so-called right to a human decision. Everyone interested in the role that AI tools might play within courts and agencies must contend with his analysis.

Huq, the Frank and Bernice J. Greenberg Professor of Law at the University of Chicago Law School, begins by laying important groundwork about AI technology itself. He rightly shows that some of the same problems that have been identified with this technology—or, more precisely, the machine-learning algorithms that underlie it—can afflict human decision-making too. For example, one common worry about machine learning is its relative opacity. But Huq doubts “whether a transparency gap exists,” arguing that “it is not clear that complaints about the greater impenetrability of machines compared to humans are well-founded.”

Similarly, it has been objected that machine-learning algorithms fail to provide ready explanations for the predictions they generate. These algorithms do not produce or support reasons in the same intuitive way as do other, more traditional forms of statistical analysis. In a system of government that often is taken to demand reasons for key decisions, the notion that government could rely on so-called black-box algorithms seems anathema to basic principles of due process.

As I have written elsewhere, existing legal requirements for due process are actually pragmatic ones, with considerable flexibility in how much and what kind of explanation must accompany different decisions. Huq drives this point home by observing that many consequential governmental decisions are never accompanied by—nor expected to be accompanied by—any set of reasons: “From street stops to certiorari denials, there are many discrete state interventions within and beyond the adjudicative context that typically lack an explicit justification.” He further notes that Congress and state legislatures routinely pass laws that “can fashion extensive changes to social realities without offering anything by way of adequate normative justification.”

Huq concludes that “the distinctions between human and machine decisions are less crisp than might first appear.” In doing so, he also reminds us that any grounding of a right to a human decision must be based on some kind of a comparative assessment of how machine learning stacks up against human decision-making. As I have put it elsewhere, the question is whether the human “algorithms” that drive our individual and collective judgments do better than digital ones (and vice versa). Relatedly, we should ask, in Huq’s words, whether “the flaws of machine learning are easier to identify and remedy in practice than the flaws of its human analog.”

If machines turn out to work better along relevant dimensions, then surely that provides at least a prima facie case against any right to a lower-quality form of decision-making by humans. As Huq rightly puts it, the work of administrators and other government officials instead “should be aimed at eliciting improvements in state action.” He continues:

From a dynamic perspective, the space for improvement in machine decisions provides a threshold hint that a right to a human decision might risk stymying beneficial institutional changes. And at least absent some reason to think that machine errors are irremediable in a way that human errors are not, there is no reason to prefer the latter.

In other words, even if automation has its flaws—which Huq does not for a moment deny—we should be aiming for “better machines rather than their substitution with humans.”

To be clear, Huq is not advocating immediately or indiscriminately swapping out human decision-making with automated decision-making across all governmental functions. He concedes in this article that there could be some decisions for which machine-learning algorithms cannot perform as well as humans, such as those that involve making fundamental value choices. But he also sees no reason to enshrine human decision-making, with all its foibles and faults, in any legally protected right.

He carefully considers several “clusters of reasons” that have been, or could be, put forth to support a right to a human decision. But for each of these, he finds that “efforts to derive a right to a human decision from normative first principles do not succeed, despite the unease that fully automated decision making provokes in many minds.”

He considers, for example, the view that human decision-making will be more accurate and unbiased than AI tools. At best, he concludes, this will be true only on a limited and contingent basis. Evidence only keeps accumulating that, with some exceptions such as for sui generis circumstances or fundamental value choices, AI tools “often generate fewer false positives and negatives in the aggregate than most human decision making.” Moreover, legitimate concerns about algorithmic bias cannot be solved just by saying that humans must always make the most consequential decisions—as humans possess their own biases, which can be exceedingly hard to root out. Indeed, machine learning’s biased outcomes themselves can derive from the very human biases that are baked into the data on which machine-learning algorithms are trained. With machine learning, at least, these biases can in principle be corrected mathematically.

Another possible justification for a right to a human decision might be that engagement with humans is a necessary component of a just and legitimate governmental process. This could be because people simply feel better when they can engage with a human being. But this feeling, Huq argues, is also at best a contingent, even idiosyncratic, phenomenon. He notes that “as a practical matter there may well not be much phenomenological distance between the bafflement an unschooled criminal defendant reasonably feels when faced with the reticulate and complex forms of the criminal justice system and the confusion elicited by an algorithmically derived outcome. For all practical purposes, both are black boxes.”

People who must interact with automated administrative or judicial systems may not feel any more dehumanized than they already do when interacting with overloaded, human-driven bureaucratic systems, in which claimants and accused individuals are too often treated coldly and impersonally. On this score, one particular passage from Huq’s cogent article deserves to be singled out and quoted at considerable length. He writes that, with respect to “the idea that personhood is respected more by a human decision maker than a machine,”

in practice, quite the opposite might well be true. Especially in the context of mass adjudicative systems (such as welfare determinations and criminal justice), the experience of going before a human decision maker who rapidly, perhaps summarily, ranks you may be fraught with indignity. Not least, there is the prospect of having one’s flaws aired and evaluated by a powerful stranger. Then, there is the risk that the decision maker may take action against you, perhaps for bad (animus-related) reasons, or perhaps because they simply dislike you. Finally, there is an unavoidable publicity attendant on that human decision that might weigh heavily. In contrast, the impersonal and non-judgmental character of a machine decision might well be more conducive to human dignity than any human-driven process.

Dignity, in short, should not simply be assumed to run with human decision making. Instead, it may well be that in many cases our sense of integrity and standing are best preserved by insulation from human scrutiny. Under plausible empirical assumptions, it may well cut in favor of a machine decision.

In addition, he reasons that any felt “need for human interaction may in turn be a contingent feature of social experience”:

That which strikes us today as dehumanizing or insensitive will appear to our children as merely sensible and mundane. Right now, the demand for human review, in the teeth of its likely costs and available alternative responses, might seem little more than an aesthetic preference about the manner in which one interacts with state actors. I am not sure that is enough to get a right to a human decision off the ground.

Huq does not dwell on it beyond a brief mention of online dating apps, but we can see examples from the private sector where removing humans from certain processes actually appeals to people. The auto sales company Carvana runs television commercials that tout how the “techno-wizardry” built into its app makes it easy for customers to sell their used cars to the company. And when buyers or sellers on the eBay platform encounter disputes with each other, the platform offers them an entirely automated dispute resolution tool. This tool apparently works so well that customers who need to use it are more inclined to return to eBay than those who never have any dispute at all.

Still, perhaps some people do have a preference for interacting with a human being when buying or selling used cars. But just as clearly, many people prefer systems that leave the human out of the loop. It is hard to see why the same might not be true for governmental processes. It is not far-fetched to think that substantial portions of the public would prefer automated governmental systems, especially when the human-based alternatives treat them in perfunctory ways or leave them waiting years for a decision, as can occur currently with some administrative adjudicatory processes.

Of course, Huq recognizes that the right to a human decision could possibly be grounded on more than just public preferences. He raises and responds to several other plausible moral bases for establishing a legal right to a human decision—and, after methodically considering each, he persuasively concludes that they fail to provide anything close to what could support a categorical objection to swapping out human decisions for machine-based ones in governmental and legal settings.

With each purported rationale, grounded on some notion that human decision-making is superior, Huq suggests that technology either has closed or is closing the gap. Instead of futilely resisting the advance of AI technology, Huq suggests that we shift our focus. “Where an algorithmic tool is flawed,” Huq goes on to explain, “it does not follow that ex post human review is ‘due.’ Rather, there is every reason to believe that what is ‘due’ is a better machine decision rather than a reliably unreliable human one.”

Although Huq ultimately finds no intrinsic moral objection to government’s reliance on automated systems, he recognizes that limits to their adoption can and should be based on “technical constraints” and “practical grounds.” In other words, we should always ask whether an automated system is actually a better one. And by “better,” Huq means a system that leads to “a well-calibrated machine decision that folds in due process, privacy, and equality values.”

This means, in the end, that the challenge with governmental use of AI tools is to make sure that these tools have been thoughtfully designed, adequately tested and validated, and repeatedly subjected to audits to ensure that any problems that arise can be addressed early. These are, as it happens, some of the basic parameters of the emerging normative structures governing AI in the public sector. Around the world, governments are establishing rules and standards that call for AI tools to be tested and audited thoroughly, especially before they are put into use in consequential ways.

With his analysis of claims to a right to a human decision, Huq has produced a major work of legal scholarship that, even in the face of rapid technological change, is likely to remain relevant and important for years to come. Eventually, though, society may reach a point where calls for a right to a human decision not only fade away but actually are replaced with calls, at least in some instances, for a moral or legal right to machine decisions—ones that are better than those made by humans.

Cary Coglianese

Cary Coglianese is the Edward B. Shils Professor of Law and Professor of Political Science at the University of Pennsylvania, where he directs the Penn Program on Regulation and serves as the faculty advisor to The Regulatory Review.