
Scholar discusses the limits of AI as a tool for prediction and planning.
The New Deal in the United States and the rapid early economic growth of the Soviet Union marked the zenith of early- to mid-20th century belief in state central planning. Yet even at its highest point, the role of state planning provoked substantial debate.
By the turn of the century, opponents of state intervention had largely prevailed in this debate. But the rise of artificial intelligence (AI) has challenged the consensus against the possibility of effective central planning. Some scholars and policymakers now argue that AI could allow state planners to overcome the informational limits long thought to doom such efforts.
In a working paper, Cass Sunstein of Harvard Law School challenges the view that AI can resolve the core problems of central planning. He argues that, like earlier planning efforts, AI-driven planning is limited—in practice and theory—by the impossibility of collecting sufficient data to make accurate predictions.
Sunstein contends that AI prediction faces two key limits. First, he argues that AI will remain unable to make predictions that require the perfect or near-perfect calculation of an impossibly large number of variables. For example, AI cannot yet predict the outcome of a coin flip—and, Sunstein suggests, is unlikely to be able to do so anytime soon.
Second, Sunstein argues that AI is even less likely to predict with accuracy outcomes emerging from complex systems—systems in which constituent elements influence one another. Sunstein bases this argument on what he calls the “very strong” claim made by Friedrich Hayek, a prominent economist and noted opponent of state planning, in a 1964 essay: the “actual impossibility,” in a sufficiently complex system, of understanding why events unfolded as they did—or predicting what will happen next.
Sunstein illustrates this “actual impossibility” through a series of examples. He begins with an AI-powered prediction challenge involving families with unmarried parents. Drawing on years of data, 160 teams of researchers used machine learning models to predict a range of family outcomes for the following year. Despite an unusually rich dataset, Sunstein notes that the AI models performed “only slightly better” than chance. These results suggest, Sunstein argues, that human lives are shaped by such a vast and interwoven set of factors that much of any individual life is functionally unpredictable.
Sunstein explains that lives are unpredictable because predicting the results of interactions between multiple complex systems is fiendishly difficult. The internal complexity of a system is vastly—often exponentially—increased by its interaction with other complex systems. The difficulty is worsened when outcomes are influenced by social interaction. Sunstein notes that the human combination of internal complexity and social responsiveness helps explain why many social, cultural, political, and economic outcomes are impossible to predict.
Sunstein offers romantic attraction as a case in point. At first glance, the question of whether two people are likely to fall in love may seem narrow—especially compared to broader questions of political or cultural life. Not so, Sunstein insists. The possibility of a “romantic spark,” he notes, depends not only on neurochemistry but also on an untold number of variables—such as the weather that morning or formative childhood experiences—that affect whether two people are both present and receptive to romance in the same moment. The amount of data needed to predict whether two people might fall in love, Sunstein explains, is nearly limitless.
Nonetheless, Sunstein concedes that AI might predict even the most complex phenomena if it were to “know everything about everything.” But Sunstein observes that the sheer volume of data required to predict the behavior of any one complex system—situated within a world of other complex systems—is practically impossible to collect or process. He concludes that, “like central planners, AI will struggle to make accurate predictions, not because it is AI but because it does not have enough data to answer the question at hand.”
Sunstein acknowledges that although AI is unlikely to predict the outcomes of complex systems, it can still offer vital predictive insight—under certain conditions, even producing more accurate forecasts than humans alone can. In some cases, AI may even be able to predict when an outcome is truly impossible. Moreover, Sunstein suggests that, even within a complex system, a powerful AI trained on a sufficiently rich dataset might, in principle, offer an accurate probability range for a given outcome.
As AI improves over time, these ranges of probabilities may narrow, Sunstein acknowledges. But he insists that some “prediction problems” involving complex phenomena will, by their nature, foreclose meaningful narrowing.
Sunstein ultimately argues that the longstanding disputes over the feasibility of rational central planning and the “AI Calculation Debate” are one and the same. Given AI’s fundamental limits in predicting outcomes in the most consequential domains of individual and social life—including romance, personal achievement, and political, economic, and cultural outcomes—Sunstein urges caution before treating AI as a reliable planning mechanism, regulatory aid, or predictive model.
Sunstein reminds his readers that AI is a tool—not a panacea for the fundamental problems of governance. Like all tools, AI has limits—and it is least effective when prediction depends on navigating the complexity of social and political life. He concludes that any responsible use of AI—especially in planning and prediction—must begin by taking “AI’s ignorance more seriously.”