
The federal government should promote the use of AI-based skills assessments for high-demand occupations.
For decades, Americans have treated a college degree as a proxy for competence. Not because it is a perfect signal—far from it—but because we lack better, more scalable alternatives. Artificial intelligence (AI) has the potential to tear down the so-called paper ceiling of degree requirements and instead allow firms to assess the capabilities of workers quickly and accurately—regardless of whether they picked up those skills through the study of fine arts or the school of hard knocks.
The use of AI-based skills assessments is particularly timely as the labor market becomes more fluid and work becomes more project-based due to technological advances. More and more Americans are picking up side hustles and pursuing independent work to control their schedules, advance in their chosen fields, and leverage new AI tools. Similarly, firms are increasingly hiring for specific, short-term projects rather than onboarding a new employee who may leave in a year, if not sooner.
In this environment, employers complain they cannot find qualified workers. Workers lament that they are locked out of jobs they can do. And everyone pays the price for a labor market that relies on an increasingly unreliable credential.
AI can remedy this information problem: the ambiguity of whether a degree really signals competence and the uncertainty of whether experience alone can demonstrate job readiness.
The fundamental unit of labor matching is skill mastery, not credit hours. That has always been the case, though it is more evident in some fields than in others. In software, aviation, logistics, and the skilled trades, employers routinely hire based on demonstrated competence rather than degrees.
Canada’s Red Seal Program shows how well-designed, portable assessments can certify skills across provinces and employers. Under that program, tradespeople need only pass an industry’s standardized exam to demonstrate that they are adequately trained in their practice. The program applies to roles in construction, automotive and mechanical work, manufacturing, landscaping, and the service sector. In these domains, job matching is more efficient precisely because skills are observable and evaluated.
The United States has never seriously tried to scale that logic across high-demand occupations. Instead, we have defaulted to degrees—expensive, time-consuming, and often poorly aligned with job tasks—because they are the only widely recognized signal available. That default has consequences. It inflates hiring requirements, sidelines capable workers, discourages mid-career transitions, and slows economic adjustment.
Here is where AI may come in: new AI-based tools can be used to build credible, job-relevant assessments that employers actually trust, implemented in a way that preserves choice, increases merit-based competition, and diminishes the outsized role of paper credentials, which are increasingly poor indicators of capability.
Whether such tools should be developed and adopted is an open question. Theoretically, firms should be clamoring for tools that help them evaluate applicants, since those tools can expedite a costly, time-intensive process and align a firm’s demands with a worker’s offerings. Yet AI adoption is lagging across the private sector, especially among the smaller firms that would likely benefit most from lower search costs in finding the best employees. That is where the federal government can step in, both as an accelerator of applicable AI research and as an early adopter of proven AI tools.
One possible path forward: The U.S. Department of Labor could launch a challenge—not a mandate, not a national exam, not a new bureaucracy—inviting industry stakeholders and academic researchers to develop skills-based assessments for a small set of high-demand jobs. Think industrial maintenance technicians, cybersecurity analysts, logistics coordinators, and data analysts. These are jobs where tasks are concrete, shortages are real, and degree requirements often function more as a screening device than as a measure of skill.
The government would not decide which assessment “wins” in the market. It would not require employers to use any of them. Instead, it would do something much more modest, and much more powerful: identify assessments that are safe, secure, and effective so better signals can emerge for broad adoption.
A well-designed challenge would insist on a few core principles. First, job relevance: Assessments would have to evaluate real tasks—troubleshooting a system, analyzing a dataset, responding to a simulated incident—not abstract trivia that AI tools can handle with ease.
Second, employer validation: Success would not be measured by glossy demos but by whether employers in the relevant field agree that the assessment predicts performance and are willing to use it in hiring or promotion.
Third, portability and openness: Workers should be able to carry results across employers, and no single vendor should be able to lock in the market.
Finally, accountability: If an assessment does not correlate with retention, productivity, or advancement, it fails—no matter how sophisticated the technology.
AI makes this feasible in a way it was not before. It can simulate work environments, evaluate outputs at scale, adjust difficulty as an applicant proceeds through mock assignments, and lower the cost of assessment. Importantly, AI-based assessments would not replace human judgment. Human resources professionals could retain as many traditional screening practices as they like; these AI-based tests would simply provide one more source of information about an applicant’s capabilities.
Critics will worry—rightly—about overreach. A federal skills exam, for example, invites plenty of Orwellian scenarios. But that is not what I propose. The government already convenes markets when coordination failures block progress: Think technical standards and early internet protocols. The role here would be similar: evaluate the effectiveness of the tools, make them generally available, and refrain from picking winners.
Others may ask: Why not leave this to employers alone? In theory, they could solve it themselves. In practice, no single firm wants to bear the full cost of creating a signal that others can free-ride on. That is why degrees persist even when they perform poorly. A time-limited, competition-driven federal challenge can help overcome that collective-action problem without dictating outcomes.
Done right, this approach could lower barriers to entry, speed up hiring, and make mid-career pivots more realistic. It could help veterans translate experience into civilian jobs. It could give workers a way to prove what they can do—today—rather than what box they checked years ago. And it could inject long-overdue experimentation into a credentialing system that has become rigid by default.
If the assessments fail—if employers ignore them, if they do not predict performance, if they do not improve hiring or mobility—the program should end. Even that outcome would be valuable: it would tell us that degrees remain the least bad signal we have in certain domains. But if even a handful of assessments succeed, the payoff could be substantial, delivering the faster hiring, lower barriers to entry, and more realistic mid-career transitions described above while reducing reliance on blunt educational screens that often do little more than ration opportunity.
We have spent years debating whether college is “worth it,” whether employers are too demanding, and whether workers need more training. Those debates miss a simpler bottleneck. The U.S. labor market is stagnant, at least in part because we are bad at recognizing learning and skill acquisition when they occur outside traditional pathways.
The Labor Department cannot solve that problem alone. But it can help surface answers—by convening, testing, measuring, and then getting out of the way. At a moment when technological change is accelerating and career paths are becoming less linear, building better signals of skill is not a radical departure from past policy. It is a pragmatic update to the way we match people to work.
