The Political Limits of Algorithmic Governance

Algorithmic efficiency may undermine the very liberal democracy it was meant to improve.

In a recent op-ed in The New York Times, Eric Schmidt and Andrew Sorota describe Albania’s new effort to automate public procurement. The Albanian government has announced that all supplier contracts—worth more than a billion dollars annually—will be awarded by an artificial intelligence avatar. The goal is to eliminate corruption and bias by transferring discretion from people to code.

It is easy to understand the allure of algorithmic systems that offer impartiality, speed, and efficiency. But Schmidt and Sorota worry that when democratic institutions falter, citizens may be too quick to welcome “algocracy”—that is, rule by algorithm—without protections for transparency or mechanisms to challenge algorithmic decisions. Without these safeguards, they predict, citizens will feel wronged and without recourse.

Algorithmic systems in government can indeed go wrong in many ways. Critics point to algorithmic bias that reproduces existing social inequalities, mistakes that occur at catastrophic scale, and a lack of public acceptance when systems operate as black boxes. Many of these problems could be addressed through better design, and where they cannot, their costs have to be weighed against the benefits of consistency, cost savings, and administrative efficiency.

But there is a deeper risk associated with algorithmic governance: Too much efficiency might simply be inconsistent with liberal democratic government.

Liberal democracies operate on friction. Power encounters resistance—procedural, bureaucratic, and moral—within institutions staffed by human beings, who argue with each other, talk back to their superiors, and slow-walk decisions they oppose. These institutions are not merely inefficient administrative machinery. They are sites of genuine disagreement where officials with different perspectives, incentives, and judgments fight things out. A liberal state depends on this capacity for internal resistance.

The U.S. Constitution was designed to create friction and make it difficult to exercise government authority. This structural impediment can be a source of frustration, but it serves as a bulwark against tyranny. In separating powers and constructing checks and balances, the founders assumed that human institutions, with their inherent friction, would provide natural resistance to concentrated authority.

The founders, however, did not have to grapple with the prospect of algorithmic governance. Technology is undermining an assumption that was fundamental at the time of the founding, and we need new political theories to address this reality.

The standard principal-agent problem assumes that agents should faithfully implement the principal’s preferences. But in a liberal democracy, even an elected legislature or executive should face resistance from the institutions of government. Bureaucrats who raise objections, judges who block executive orders, and administrators who demand justification all serve to slow the translation of political will into state action. This is not a bug in the system—it is a feature that protects against hasty, ill-considered, or tyrannical exercises of power.

The relevant distinction here is between algocracy and algoarchy. We should reserve the term “algocracy” for systems where algorithms support public administration within the context of genuinely human institutions. In an algoarchy, by contrast, these sites of resistance have been eliminated. The result is a seamless flow of authority that vastly increases the power of the state while concentrating it in a small number of hands.

Avoiding an algoarchy requires more than the proverbial human in the loop. A legion of isolated workers in cubicles clicking “approve” maintains only formal human control. What really matters is preserving human institutions where genuine contestation and resistance are not only possible but common.

The appropriate role for algorithmic governance depends heavily on context. Albania’s procurement system may be a case where greater reliance on algorithms is defensible. The country faces a recognized problem of corruption in procurement. At first glance, procurement decisions seem not to involve fundamental moral or political questions—they appear to be technical exercises in matching bids to requirements based on cost, quality, and capacity. Turning these decisions over to an algorithm appears to offer substantial benefits at low cost.

But even procurement can be political. If a governing coalition locks in an algorithmic procurement system, even one based on ostensibly neutral criteria, it might restrict opportunities for certain forms of resistance. An official who steers a contract to a family member is engaged in corruption. Contracts that flow to emerging communities as part of their integration into the political system serve a different function. An algorithmic system may or may not be able to distinguish between these cases. Procurement decisions allow decentralized actors to respond to community pressures and forge alliances that may challenge the governing coalition. An algorithm designed to eliminate corrupt favoritism may also disrupt this form of distributed political power.

In criminal sentencing, algorithms promise consistency and efficiency. These virtues matter enormously in a system where random assignment to a particular prosecutor or judge can mean the difference between years in prison and supervised release. But reliance on algorithms might reduce the space for prosecutors to respond to public demands for leniency or severity based on changing community norms, or constrain criminal defense lawyers who work the system on behalf of disadvantaged defendants. These sources of friction are not flaws. They represent the criminal justice system’s capacity to adapt to moral complexities that cannot be captured in advance by any set of rules.

Other administrative processes pose similar tradeoffs. Algorithmic systems for disability benefits promise faster, more consistent decisions. But they risk eliminating the power of the caseworker to respond to claimants’ lived experiences. In environmental permitting, algorithmic systems could accelerate reviews and ensure regulatory consistency. But they could also eliminate the discretion that allows agency officials to respond to informal community concerns or cumulative impacts.

As algorithms become more widespread within government institutions, there are two basic challenges. One is to design better algorithms—increasing transparency, reducing bias, and generally delivering greater benefits at lower costs. These are active areas of research where important progress is being made.

But even the best algorithms imaginable cannot address the second challenge, which is to safeguard human institutions where people argue with each other, consider and reconsider, and push back against the commands of higher-ups. Indeed, as the algorithms improve, it will become increasingly difficult to resist the pressure to deploy them in an ever-expanding range of domains.

But resist we must. Some government functions can tolerate algorithmic optimization, but others must retain their inefficient human character. In automated systems, there is no one to persuade, no forum to fight back, and no mechanism to shape state power short of rewriting the algorithm. An algorithm will always salute sharply and carry out its orders—but when it comes to the exercise of state power, we sometimes need to preserve the human capacity to disobey.

Michael A. Livermore

Michael A. Livermore is the Class of 1957 Research Professor of Law at the University of Virginia School of Law.