How Will Health Care Regulators Address Artificial Intelligence?

Policymakers around the world are developing guidelines for the use of artificial intelligence in health care.

Baymax, the robotic health aide and unlikely hero from the movie Big Hero 6, is an adorable cartoon character, an outlandish vision of a high-tech future. But underlying Baymax’s character is the very realistic concept of an artificial intelligence (AI) system that can be applied to health care.

As AI technology advances, how will regulators encourage innovation while protecting patient safety?

AI has no precise definition, but the term generally describes machines that can process and respond to stimuli in a manner similar to human thought. Many sectors, including the military, academia, and health care, rely on AI today.

For decades, health care professionals have used AI to increase efficiency and enhance the quality of patient care. For example, radiologists employ AI to identify signs of certain diseases in medical imaging. Tech companies are also partnering with health care providers to develop AI-based predictive models to increase the accuracy of diagnoses. A recent study applied AI to predict COVID-19 based on self-reported symptoms.
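
To illustrate the general shape of such a predictive model, the sketch below trains a simple classifier on hypothetical self-reported symptom data. The features, toy records, and choice of logistic regression are all assumptions made for illustration; they do not reproduce the cited study's actual dataset or method.

```python
# Minimal sketch of a symptom-based predictive model. The features, toy
# data, and logistic regression are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical binary features: [cough, fever, loss_of_taste_or_smell, fatigue]
X_train = np.array([
    [1, 1, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
y_train = np.array([1, 1, 0, 0, 1, 1])  # 1 = tested positive (toy labels)

model = LogisticRegression().fit(X_train, y_train)

# Estimated probability that a new respondent reporting fever and
# loss of smell is positive.
new_patient = np.array([[0, 1, 1, 0]])
print(model.predict_proba(new_patient)[0, 1])
```

In practice, published models of this kind are trained on far larger datasets and validated against laboratory-confirmed outcomes before clinical use.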

In the wake of the COVID-19 pandemic and the rise of telemedicine, experts predict that AI technology will continue to be used to prevent and treat illness and will become more prevalent in the health care industry.

The use of AI in health care may improve patient care, but it also raises issues of data privacy and health equity. Although the health care sector is heavily regulated, no existing regulations specifically target the use of AI in health care settings. Several countries and organizations, including the United States, have proposed such regulations, but none has been adopted.

Even beyond the context of health care, policymakers have only begun to develop rules for the use of AI. Some existing data privacy laws and industry-specific regulations do apply to AI, but no country has enacted AI-specific regulations. In April 2021, the European Union released its proposal for the first comprehensive regulatory framework for AI. The proposal establishes a procedure for new AI products entering the market and imposes heightened standards on applications considered “high risk.”

The EU’s suggested framework offers examples of “high-risk” AI applications related to health care, such as systems used to triage emergency aid. Although the proposal does not focus on the health care industry in particular, experts predict that the EU regulations will serve as a template for future, more specific guidelines.

The EU’s proposal seeks to balance the safety and security of the AI market with continued innovation and investment in AI. The same competing values appear in U.S. proposals to address AI in health care. Both the U.S. Food and Drug Administration (FDA) and, more broadly, the U.S. Department of Health and Human Services (HHS) have begun to develop guidelines on the use of AI in the health industry.

In 2019, FDA published a discussion paper outlining a proposed regulatory framework for modifications to AI-based software as a medical device (SaMD). FDA defines AI-based SaMD as software “intended to treat, diagnose, cure, mitigate, or prevent disease.” In the paper, FDA commits to ensuring that AI-based SaMD “will deliver safe and effective software functionality that improves the quality of care that patients receive.” The paper also outlines the regulatory approval cycle for AI-based SaMD, which requires a holistic evaluation of both the product and its maker.

Earlier this year, FDA released an action plan for the regulation of AI-based SaMD that reaffirmed its commitment to encouraging the development of AI best practices. HHS has also announced its strategy for regulating AI in health care settings. Like FDA and the EU, HHS balances the health and well-being of patients against continued innovation in AI technology.

The United States is not alone in its attempt to monitor and govern the use of AI in health care. Countries such as China, Japan, and South Korea have also released guidelines and proposals seeking to ensure patient safety. In June 2021, the World Health Organization (WHO) issued a report on the use of AI in health care and offered six guiding principles for AI regulation: protecting autonomy; promoting safety; ensuring transparency; fostering responsibility; ensuring equity; and promoting sustainable AI.

Scholars are also debating the use of AI in health care. Some experts have urged policymakers to develop AI systems designed to advance health equity. Others warn that algorithmic bias and unequal data collection can exacerbate existing health inequalities. To mitigate the risk of discriminatory AI practices, these experts argue, policymakers should consider the unintended consequences of AI’s use.

For example, AI systems must be trained to recognize patterns in data, and that training data may reflect historical discrimination. One study showed that women are less likely than men to receive certain treatments, even though they are more likely to need them. An AI system trained on similarly biased data would learn to perpetuate this pattern of discrimination. Health care regulators must protect patients from these potential inequities without discouraging the development of life-saving innovation in AI.
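
To see how that happens, consider a minimal sketch built on fabricated records in which women with the same clinical need were historically treated less often. Any model fit to such records will reproduce the gap; the data and model below are illustrative assumptions, not drawn from the study above.

```python
# Illustrative sketch (fabricated toy data): a model trained on historically
# biased treatment records learns to recommend treatment less often for the
# under-treated group, even at identical clinical need.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [clinical_need_score, is_female]; label: 1 = treatment given.
# In this fabricated history, women with the same need were treated less often.
X = np.array([
    [0.9, 0], [0.8, 0], [0.7, 0], [0.4, 0],
    [0.9, 1], [0.8, 1], [0.7, 1], [0.4, 1],
])
y = np.array([1, 1, 1, 0,   # men: treated at moderate or high need
              1, 0, 0, 0])  # women: treated only at the highest need

model = LogisticRegression().fit(X, y)

# Identical need score, different group membership: the learned
# recommendation differs, perpetuating the historical pattern.
same_need = np.array([[0.8, 0], [0.8, 1]])
print(model.predict_proba(same_need)[:, 1])  # P(treat) for man vs. woman
```

The point is structural: the model fits its training data faithfully, and in doing so faithfully learns a discriminatory pattern, which is why scholars direct regulators' attention to the data itself rather than to the algorithm alone.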

As the use of AI becomes more prominent in health care, regulators in the United States and elsewhere are considering more robust regulations to ensure quality of care.