Doctors or Devices?

Scholar weighs the possibility of regulating medical artificial intelligence like human professionals.

Artificial intelligence (AI) promises significant changes to how we receive medical advice. Applications like PTSD Coach allow users to input symptoms and receive treatment options, while the popular genetic testing service 23andMe screens for genetic health risks.

AI can lower medical costs and increase access to care. Still, these technologies carry risks. Should those risks be regulated more like the risks posed by medical devices, or more like those posed by doctors?

In a recent paper, Professor Jane Bambauer of the University of Arizona James E. Rogers College of Law suggests that regulators should consider treating AI much like they treat human professionals. She shows how the legal duties that govern doctors—namely the duty of competence, the duty of confidentiality, the duty to warn, and duties to avoid conflicts of interest—could be applied to medical AI.

Currently, the U.S. Food and Drug Administration (FDA) regulates medical AI in the same way as conventional medical devices. In essence, that approach involves registering new devices with FDA, submitting them to premarket testing, and, assuming the device is approved, providing continued monitoring once the device is on the market.

An important part of FDA’s risk analysis involves looking at what would happen to consumers if a given device were not available. FDA may pull a device that yields a significant number of false positives if there are more accurate substitutes, but may leave it on shelves if there are no substitutes and the device represents an improvement over no device at all.

Although Bambauer generally approves of this comparison-based analysis, she questions whether conventional regulation of devices provides the best model for regulating medical AI. She notes that, unlike more conventional medical devices, which focus on taking measurements and administering treatments, medical AI often functions more as a knowledge device. It synthesizes data and provides users with medical information and suggestions. As such, it would appear to be more closely analogous to doctors than to conventional medical devices, raising questions about how AI might intersect with duties traditionally required of medical professionals.

According to Bambauer, medical AI can most easily satisfy the duty of competence, which “incorporates both the safety and efficacy goals” of the FDA regulatory process. What makes this duty so easily applicable is that it is intended to test the extent to which doctors can memorize large amounts of information and act more like algorithms. As Bambauer points out, both of these tasks are “trivially easy” for computers, and there is little reason to doubt that medical AI will “outperform physicians in many aspects of medical care.”

Despite the relative ease of applying the duty of competence, other duties were “designed for the diffuse and messy organization of human professionals,” not the large, centralized world of technology, Bambauer argues. As such, she contends that those duties may not apply quite as cleanly to medical AI.

For example, although not much of a technological barrier exists to complying with the duty of confidentiality, Bambauer claims that full compliance would dramatically limit the value of medical AI. This is because one of the key benefits of medical AI—namely, its ability to pool and mine data to discover new medical patterns—would appear to run afoul of the Health Insurance Portability and Accountability Act, a law that provides a number of privacy safeguards for medical information.

Another duty doctors must meet, the duty to warn third parties about risks their patients pose to others, can also conflict with the duty of confidentiality. Although this duty is not invoked frequently, medical AI, through more continuous monitoring and larger data sets, has a much greater capacity to determine when one person presents risks to others. Bambauer argues that medical AI companies, wanting to appeal to customers’ desire to keep their medical information confidential, may simply not look for the kinds of patterns that would reveal these risks, despite the clear social value of identifying them.

Finally, Bambauer points to the potential for conflicts of interest with medical AI. Because many of the companies that produce medical AI also produce other medical devices and technologies—IBM, for example—she sees significant concerns about these companies exploiting their relationships with patients to promote their own products. Of course, as Bambauer points out, avoiding conflicts of interest should not prevent these companies from recommending their own products and services when they are the right ones to suggest.

Rather than offering concrete regulatory proposals for medical AI, Bambauer suggests that regulators change their frame of reference for considering medical AI. She argues that the challenges of regulating medical AI lie “not in the quality of the treatment itself, but in ancillary issues” like confidentiality, the duty to warn, and conflicts of interest.