The AI Health Care Dilemma
Scholars explore regulatory approaches to artificial intelligence in the health care sector.

Patient-centered care is a pillar of the American health care system. But, as the U.S. population grows and ages, providers need new ways to manage ever-increasing caseloads. Artificial intelligence (AI) offers an opportunity to improve the efficiency of health care administration, disease diagnosis and detection, drug development, and more.

Although AI is poised to transform business operations across various sectors of the economy, experts agree that it holds heightened potential in the health care industry. AI has aided breast cancer detection in mammography screening programs and improved diagnosis of anxiety disorders. It has even enabled customized treatments based on an individual patient’s DNA. Efficiency in health care can do more than speed up processes—it can save lives.

So why are advances in health AI met with fear?

Health care regulations protect patients from poor-quality care, breaches of medical privacy, and inadequate technical standards, but untrustworthy and unregulated AI risks undermining these safeguards. Regulators, practitioners, and the public are wary of delegating control of patients’ health to technology. In the health care sphere, any error can lead to injury, and AI systems require access to large quantities of patient information, data that can incorporate biases and perpetuate inequalities.

Recognizing an increasing role for AI in health care, the U.S. Department of Health and Human Services (HHS) established an AI office in March 2021 and appointed its first AI Officer. The new office’s responsibilities include devising a strategic approach to encourage AI adoption, building an AI governance structure, and coordinating HHS’s response to AI-related federal mandates.

Concurrently, the U.S. Food and Drug Administration (FDA) issued an action plan in early 2021 outlining the agency’s goals to encourage AI innovation and update its proposed regulatory framework. FDA has compiled a list of over 500 AI- and machine learning-enabled medical devices currently marketed in the United States. Still, the complex regulatory landscape remains underdeveloped.

In this week’s Saturday Seminar, experts examine the challenges of regulating AI in the health care sector, considering its applications, legal implications, and potential governance structures.

  • In an article published in the Yale Journal of Health Policy, Law, and Ethics, Sara Gerke of Penn State Dickinson Law advocates a new regulatory framework for AI-based medical devices. Although FDA has issued regulations and guidance to safeguard AI use in the health care sector, Gerke worries that “FDA is not yet ready for health AI.” Concerned that current regulations may undermine patient safety and public trust, Gerke argues for statutory reform and changes to FDA’s premarket pathway for AI-based medical devices. To ensure AI-based devices are safe and effective, contends Gerke, FDA needs to refocus its regulatory oversight and enforcement discretion.
  • The use of personal health information to advance AI innovations in health care raises challenges related to privacy and equity, explain Jenifer Sunrise Winter and Elizabeth Davidson of the University of Hawaii at Manoa in an article published by Digital Policy, Regulation and Governance. Winter and Davidson explain that, although AI technologies can harness personal health data to address important societal concerns and advance health care research, they simultaneously pose new threats to privacy and security given the opacity of AI’s algorithmic processes. Thus, AI requires enhanced regulatory techniques to preserve individuals’ right to control their own personal health information, argue Winter and Davidson.
  • Despite the promise of AI for improved health care outcomes, algorithmic discrimination in medicine can exacerbate health disparities, argue Sharona Hoffman and Andy Podgurski of Case Western Reserve University in an article published in the Yale Journal of Health Policy, Law, and Ethics. According to Hoffman and Podgurski, AI defects can disadvantage certain groups of patients. To address discrimination concerns and ensure appropriate use of AI in medicine, Hoffman and Podgurski recommend creating a private cause of action for disparate impact cases, passing legislation on algorithmic accountability, and including algorithmic fairness in FDA’s oversight standards.
  • In an article published in The SciTech Lawyer, Nicholson Price II of the University of Michigan Law School explores possibilities for regulating AI in the health care sphere to ensure the quality and safety of medical algorithms. Although experts debate whether FDA should classify medical algorithms as medical devices, Price argues that FDA has the authority to regulate AI health care mechanisms, including complex algorithms. Moreover, FDA should analyze the market before and after medical algorithms are implemented to develop an oversight and evaluation framework, according to Price.
  • The sophisticated nature of AI software presents questions for regulators, according to Sandeep Reddy and several coauthors in an article published in the Journal of the American Medical Informatics Association. Because the decision-making processes of algorithms are often opaque and dynamic, it is difficult to determine at which stage to implement monitoring and evaluation of AI-enabled services: approval, introduction, or deployment. The Reddy team suggests a governance model emphasizing “fairness, transparency, trustworthiness, and accountability.” For example, regulators can increase fairness by instituting a data governance oversight panel to monitor biases in AI software, a task sketched in code after this list. To improve trustworthiness, regulators could implement policies to educate both health care professionals and patients on AI as well as mandate that health care professionals seek informed consent from patients before using AI health care software.
  • In an article published by PLOS Digital Health, Trishan Panch of the Harvard T.H. Chan School of Public Health and several coauthors advocate a particular set of regulations for clinical artificial intelligence. Although regulation is necessary to ensure the safety and equitability of clinical artificial intelligence, Panch and his coauthors contend that a system of centralized regulation alone, such as FDA regulation, fails to address the reasons why algorithms might fail and may increase disparities. Thus, the Panch team advocates a hybrid model of regulation in which most applications of clinical AI are delegated to local health systems and the highest-risk tasks are regulated centrally.
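
To make concrete what the bias monitoring proposed by the Reddy team might involve, the minimal sketch below computes a simple demographic parity gap over a hypothetical audit log of a diagnostic model’s binary predictions. The data, group labels, function names, and review threshold are all illustrative assumptions for this article, not any regulator’s or author’s actual method.

```python
# A minimal sketch of one check a data governance oversight panel might run:
# compare a model's positive-prediction rates across demographic groups.
# All records and the 0.2 tolerance below are hypothetical illustrations.

from collections import defaultdict

def positive_rate_by_group(records):
    """Return the fraction of positive predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, prediction in records:
        counts[group][0] += prediction
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(rates):
    """Largest difference in positive-prediction rates across groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log: (demographic group, model's binary prediction).
audit_log = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

rates = positive_rate_by_group(audit_log)
gap = demographic_parity_gap(rates)

TOLERANCE = 0.2  # an arbitrary, policy-defined threshold for this example
print(f"Positive rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")
if gap > TOLERANCE:
    print("Flag for review: disparity exceeds the panel's tolerance.")
```

In a real oversight regime, the log would come from a deployed system and the tolerance would be set by policy; the point is only that “monitoring biases” can reduce to routine, auditable computations rather than ad hoc judgment.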

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.