Building Empathy Into an Automated State

A government that increasingly operates on the basis of artificial intelligence will still need to supply human empathy.

Administrative agencies today undertake a range of activities—granting licenses, issuing payments, adjudicating claims, and setting rules—each of which traditionally has been executed by government officials. But it is neither difficult nor unrealistic to imagine a future in which members of the public, when they deal with the government, increasingly find themselves interacting predominantly with digital systems rather than human officials. Even today, the traditional administrative tasks for which human beings have long been responsible are increasingly augmented by computer systems.

If many of the tasks that government currently completes through decision-making by human officials come to be performed entirely by automated tools and computer systems, how will administrative law respond to this transformation to an automated state? How should it?

Most existing administrative law principles can already accommodate the widespread adoption of automation throughout the administrative state. Not only have agencies long relied on a variety of physical machines that exhibit automaticity, but an automated state—or at least a responsible automated state—could be thought of as the culmination of administrative law’s basic vision of government that relies on neutral public administration of legislatively delegated authority.

Nevertheless, even within an otherwise responsible automated state, there will come to be an important ingredient of good governance that increasingly could turn out to be missing: human empathy.

Today, bureaucracies comprising human officials can themselves be cold and sterile. But an era of extreme automation could present a state of crisis in human care—or, more precisely, a crisis in the lack of such care. In an increasingly automated state, administrative law will need to find ways to encourage agencies to ensure that members of the public will continue to have opportunities to engage with humans, express their voices, and receive acknowledgment of their predicaments. The automated state will, in short, also need to be an empathic state.

The transition to online interaction with government over the last quarter-century foreshadows what will likely be a deeper and wider technological transformation of governmental processes over the next quarter-century. Beyond the digitization of front-end communication with government, the future will likely feature more extensive automation of back-end decision-making, which today still often remains firmly in the discretion of human officials. But we are perhaps at most a few decades away from an administrative state that operates largely on the basis of automated systems built with artificial intelligence, much as important parts of the private sector increasingly will. This will lead to an administrative state characterized by what I have elsewhere called algorithmic adjudication and robotic rulemaking.

Instead of having human officials make discretionary decisions, such as judgments about whether individual claimants qualify for disability benefits, agencies will be able to rely on automated systems to make these decisions. Claims-processing systems could be designed, for example, to import automatically a vast array of data from electronic medical records and then use an AI system to process these data and determine whether claimants meet a specified probability threshold to qualify for benefits.
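To make this concrete, here is a minimal sketch of what such a threshold-based eligibility pipeline might look like. Everything in it is a hypothetical illustration, not a description of any actual agency system: the record format, the `score_claim` stand-in for a trained model, and the 0.75 qualification threshold are all invented for this sketch.

```python
from dataclasses import dataclass

# Hypothetical qualification threshold; a real agency would set this
# through rulemaking and validation, not hard-code it.
QUALIFY_THRESHOLD = 0.75

@dataclass
class MedicalRecord:
    """Toy stand-in for data imported from electronic medical records."""
    claimant_id: str
    diagnoses: list[str]
    months_unable_to_work: int

def score_claim(record: MedicalRecord) -> float:
    """Stand-in for an AI model that maps a claimant's imported medical
    data to a probability of qualifying. Here: a toy heuristic purely
    for illustration, not a real predictive model."""
    base = 0.1
    base += 0.05 * len(record.diagnoses)
    base += 0.02 * record.months_unable_to_work
    return min(base, 1.0)

def decide(record: MedicalRecord) -> bool:
    """Automated adjudication step: qualify the claimant if the model's
    probability meets the specified threshold."""
    return score_claim(record) >= QUALIFY_THRESHOLD

if __name__ == "__main__":
    claim = MedicalRecord("C-001", ["lumbar stenosis", "neuropathy"], 18)
    print(f"Probability: {score_claim(claim):.2f}, qualifies: {decide(claim)}")
```

Notice what the sketch makes vivid: a pipeline of this kind can run from data import to final determination without any step at which a claimant is heard. That omission is precisely the concern developed below.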

Such an automated government might be smarter, more democratically accountable, and even fairer. But it could also lack feeling, even more than sterile bureaucratic processes do today. Interactions with government through smartphones and automated chatbots may be fine for making campground reservations at national parks or even for filing taxes. But they risk leaving out an important ingredient of good governance—namely, empathy—in those circumstances in which government must make highly consequential decisions affecting the well-being of individuals. In these circumstances, empathy demands that administrative agencies provide opportunities for human interaction and for listening and expressions of concern. An important challenge for administrative law in the decades to come will be to find ways to encourage an automated state that is also an empathic state.

A desire for empathy, of course, need not impede the development of automation. If government manages the transition to an automated state well, it is possible that automation can enhance the government’s ability to provide empathy to members of the public.

Can administrative law help encourage empathic administrative processes? Some might say that this is already a purpose underlying the procedural due process principles that make up administrative law. Goldberg v. Kelly, after all, guarantees certain recipients of government benefits the right to an oral hearing before a neutral decision-maker prior to the termination of their benefits, a right that does afford at least an opportunity for affected individuals to engage with a theoretically empathic administrative judge. But the now-canonical test of procedural due process reflected in Mathews v. Eldridge is almost entirely devoid of attention to the role of listening, caring, and concern in government’s interactions with members of the public. Mathews defines procedural due process largely in terms of factors such as the potential for reducing decision-making error and the government’s interests concerning fiscal and administrative burdens. AI automation would seem to pass muster quite easily under the Mathews balancing test.
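The test’s logic can be put in stylized form. On one common shorthand—my gloss, offered only as a heuristic, not the Court’s own formulation—an additional procedural safeguard is constitutionally required only when the reduction in the risk of erroneous deprivation it offers, weighted by the private interest at stake, outweighs the government’s added burden:

$$\Delta E \times V > C$$

where ΔE is the marginal reduction in error from the safeguard, V is the value of the private interest, and C is the government’s fiscal and administrative cost. A validated automated system that decides more accurately and more cheaply than human adjudicators leaves ΔE small and C comparatively large for any human alternative, so on this stylized account the balance tilts readily toward automation.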

This is one way that existing principles of administrative law may fall short in an automated state, and one place where greater vision will be needed. Hearing rights and the need for reasons are about more than just achieving accurate outcomes, which is what the Mathews framework implies. On the contrary, hearings and reason-giving might not be all that good at achieving accurate outcomes, at least not as consistently as reliably developed and validated digital systems. After all, studies have indicated that outcomes decided by human judges can be biased and error-prone. Against the status quo, automated systems promise distinct advantages when they can be shown to deliver fairer, more consistent, and even speedier decisions.

But humans will still be good at listening to and empathizing with the predicaments of those who are seeking assistance from government or otherwise affected by governmental decisions. It is that human quality of empathy that should lead the administrative law of procedural due process to move beyond just its current emphasis on reducing errors and lowering costs.

The need for an administrative law of empathy may lead some judges to ask whether members of the public have a “right to a human decision” within an automated state. But not all human decisions are necessarily empathic ones. Moreover, a right to a human decision would bring with it the possibility that the law would accept all the flaws in human decision-making simply to retain one of the virtues of human engagement. If automated decisions turn out increasingly to be more accurate and less biased than human ones, a right to a decision by humans would seem to deny the public the desirable improvements in governmental performance that AI and automated tools can deliver.

Administrative law need not stand in the way of these improvements. It can accept the use of AI and automation while nevertheless pushing government forward toward additional opportunities for listening and compassionate responses. Much as the U.S. Supreme Court in Goldberg v. Kelly insisted on a pretermination hearing for welfare recipients, courts in the future can ask whether certain interests are of a sufficient quality and importance to demand that agencies provide supplemental engagement with and assistance to individuals subjected to automated processes. Courts could in this way seek to reinforce best practices in agency efforts to provide empathic outreach and assistance.

In the end, if administrative law in an automated state is to adopt new rights, society might be better served if courts avoid the recognition of a right to a human decision. Instead, courts could consider and seek to define a right to human empathy.

Cary Coglianese

Cary Coglianese is the Edward B. Shils Professor of Law and Professor of Political Science at the University of Pennsylvania, where he directs the Penn Program on Regulation and serves as the faculty advisor to The Regulatory Review.

This essay draws on the author’s earlier article, “Administrative Law in the Automated State,” previously published in the journal Daedalus.