
HHS’s new artificial intelligence strategy tests management-based regulation.
Regulators have spent years telling everyone else how to manage artificial intelligence (AI) risk. Now, the U.S. Department of Health and Human Services (HHS) has become one of the first large departments to turn that advice inward. Its new AI strategy is a public attempt to govern a rapidly growing portfolio of AI tools inside one of the largest organizations in the federal government.
On December 4, 2025, HHS released a 20-page AI strategy and a companion plan that promise to make AI a “practical layer of value” across internal operations, research, and public health work. In a press release announcing the strategy, HHS described the document as the next phase of an initiative to make AI available across the federal health workforce and to “Make America Healthy Again.” The strategy is organized around five pillars: governance and risk management, shared infrastructure and platforms, workforce capability and burden reduction, “gold standard” research, and modernized service delivery. News reporting has underscored the sensitivity of the health data that these systems will handle and noted that HHS expects about a 70 percent increase in AI projects in fiscal year 2025.
HHS is not acting in a vacuum. America’s AI Action Plan, issued by President Donald J. Trump’s Administration, and the Office of Management and Budget’s memorandum on accelerating federal use of AI both push agencies to adopt AI while building internal governance structures and inventories for systems that affect safety or rights. A more recent executive order seeks to create a unified national AI policy. HHS’s AI strategy, together with its recent AI compliance plan, offers one of the first detailed glimpses of how a large department is translating those instructions into an internal governance regime.
One way to understand the HHS strategy is as a live test of management-based regulation inside government, a style of regulation that requires organizations to build their own internal risk management systems rather than simply comply with detailed prescriptive rules. In an essay in The Regulatory Review, Cary Coglianese argued that management-based regulation is especially well-suited to artificial intelligence because models, data sets, and uses change quickly. Rather than dictating fixed limits for each algorithm, regulators can require organizations to create risk management systems with impact assessments, documentation, audits, and continuous monitoring. At the same time, Coglianese warned that these regimes are fragile when oversight capacity is weak and that their requirements can slide into box-ticking rather than meaningful controls.
Seen through that lens, the details of the HHS strategy become more interesting. The first pillar promises the creation of an AI governance board, inventories of AI use cases across the department, and explicit criteria for identifying high-impact systems. The strategy pledges that such systems will undergo documented assessments, independent review, pre-deployment testing, and monitoring. It links this internal framework explicitly to the National Institute of Standards and Technology’s AI Risk Management Framework, which calls on organizations to establish structures and processes for mapping, measuring, and managing AI risk across the lifecycle of AI systems.
The strategy’s language of pillars, metrics, and continuous improvement also echoes ISO/IEC 42001, an AI management standard put forth by the International Organization for Standardization and the International Electrotechnical Commission, two international bodies that publish voluntary technical and management system standards used around the world to guide organizations on best practices. That standard encourages organizations to treat AI governance as a full management system, with plan-do-check-act cycles that cover the lifecycle of AI systems and their supply chains. HHS is not seeking certification, which would involve an external audit of its AI governance processes, but it is clearly building something that resembles an AI management system inside a department rather than leaving AI governance entirely to individual programs.
The critical question is whether that system will make the most important policy choices embedded in HHS’s AI systems visible and contestable. In an essay in The Regulatory Review, Abigail Jacobs and Deirdre Mulligan argued that apparently technical settings inside AI tools are in fact policy decisions: how a system defines a complaint, where it sets thresholds for risk scores, and how it trades off false positives against false negatives. Unless tools and practices surface these embedded choices, those choices can bypass the usual channels of administrative law and public scrutiny.
HHS’s strategy responds to this concern in part. It promises plain-language public summaries for high-impact systems and significant waivers, and it discusses metrics for transparency and reproducibility. At the same time, external commentators have raised questions about the strength of the department’s safeguards for sensitive health data. What the strategy does not fully explain is how deeply those summaries will reach into design choices, especially where HHS adopts AI tools from external partners rather than building them internally. Without something like the measurement modeling approach that Jacobs and Mulligan described, key AI design decisions risk remaining buried in technical documentation instead of appearing in the reasons agencies publish.
The strategy’s use of metrics shows the same tension. HHS proposes tracking the effectiveness of AI projects through example metrics for each pillar, such as the share of high-impact systems that have completed independent review or the average time to respond when an AI tool malfunctions. These indicators are useful for oversight bodies that want to know whether governance processes are actually happening. But they can also create incentives to rush through assessments rather than slow down high-risk projects. In a department that expects roughly a 70 percent increase in AI use cases in a single year, those pressures will be real.
From my perspective as someone who has spent years helping large organizations implement AI governance frameworks, the pattern is familiar. The hardest part is not drafting pillars or creating inventories. It is making sure that every new AI proposal automatically triggers the right questions and escalations, and that the people around the table have both the authority and the confidence to say no. The more an agency encourages AI use, the more important that ability to stop or redesign a project becomes.
For other agencies that are now writing their own AI strategies, three lessons stand out. First, HHS is right to frame AI governance as a department-wide concern. Individual program offices cannot build and enforce standards on their own. A central board needs a clear mandate to coordinate infrastructure, data governance, and procurement with risk management.
Second, mapping internal approaches explicitly to the National Institute of Standards and Technology’s AI Risk Management Framework and emerging AI management system standards, such as those of the International Organization for Standardization and the International Electrotechnical Commission, will give courts, auditors, and inspectors general something concrete to test against.
Finally, transparency about the governance process is itself a form of risk management. HHS published not only high-level pillars but also an implementation page that explains the governance model and a compliance plan that sets out how it will meet the requirements of the Office of Management and Budget’s memorandum on AI use in federal agencies. HHS has given others in the administrative state a starting point. It has also given outside observers a list of questions to ask. How many high-impact systems were actually reviewed last year? How many public summaries were issued? How often did the department suspend a project because assessors were not satisfied?
If management-based AI regulation is going to work broadly, it must work first inside the agencies that advocate it. HHS’s AI strategy is a rare early experiment in building a full AI management system within the federal government. Other departments will look to it for templates and language. They should also treat it as a stress test and improve on it, so that internal AI governance becomes more than a new layer of paperwork with the same old risks.
The author writes in a personal capacity, and his views do not represent any employer or affiliation.
