Regulating Mobile Medical Applications

As mobile medical applications continue to proliferate, regulators lack a solid framework for oversight.

The U.S. Food and Drug Administration (FDA) regulates over 6,000 types of medical devices with the aim of ensuring their safety and efficacy. Included in this list are standard digital health care tools like blood glucose monitors, but what about the hundreds of thousands of health care “apps” that are now available to consumers?

The prevalence and ubiquity of mobile medical applications (MMAs) suggest a need for regulation to protect the public. Indeed, in a recent metastudy of MMAs, researchers identified a wide range of safety concerns, including a lack of app responsiveness to identified health dangers, and concluded that the “safety of apps is an emerging public health issue.”

FDA first attempted to delineate a regulatory framework for MMAs through guidance issued in 2013. In light of the “rapid pace of innovation” in app development, FDA updated this guidance late last year to clarify that the agency only regulates a subset of MMAs. Specifically, the agency only regulates apps “that are medical devices and whose functionality could pose a risk to a patient’s safety if the device were to not function as intended.” Lower-risk MMAs are therefore exempt from FDA premarket review, even if they qualify as devices.

As FDA continues to develop a framework for oversight, industry professionals have noted that the agency has taken a hands-off approach for mental health apps in particular. This seminar explores recent works on regulating MMAs, with a focus on special considerations for mental health apps.

Regulating MMAs Generally

  • In a recent article in Bioethics, T.J. Kasperbauer and David E. Wright argue “for more extensive FDA regulation of health and wellness apps that are not intended for medical use.” The current standards provide a loophole by failing to regulate mobile apps that provide some form of medical data, such as Fitbit’s Ionic smartwatch, as long as they are not “intended for medical use,” the authors argue. These health and wellness apps can pose dangers to consumers by providing inaccurate medical data, the authors explain. The authors concede that “concerns about efficiency and fostering innovation should shape FDA regulation of health apps” but ultimately conclude that “they should not be used to justify complete absence of FDA involvement.”
  • “The FDA’s medical device framework has yellowed with time,” but the agency “is taking bold moves to adapt its old framework to very new products.” Writing for the Yale Journal of Health Policy, Law, and Ethics, Professor Nathan Cortez analyzes three key experimental changes to FDA’s regulatory approach to MMAs, as detailed in FDA’s Digital Health Innovation Plan. First, in the plan, FDA proposes emphasizing post-market review of MMAs over pre-market review. Second, in some cases, FDA has begun reviewing and certifying MMA developers as opposed to the MMAs themselves. Lastly, the agency is experimenting with using independent certifiers to review MMAs to account for the complexity and prevalence of these devices. Cortez cautions that these changes are experimental and should be evaluated accordingly.

Regulating MMAs for Mental Health

  • In an article in the International Journal of Health Policy and Management, Lisa Parker, Lisa Bero, Donna Gillies, Melissa Raven, and Quinn Grundy analyze regulation of the mental health app market in Australia. They write that “mental health apps are subject to the same regulatory oversight as apps that focus on somatic health issues, however, while all health data is sensitive, mental health data is particularly so, highlighting the importance of mental health app user privacy.” Based on their research, the authors urge consumer advocacy groups, app developers, health professionals, and governments to work together to develop a comprehensive regulatory framework to protect consumer privacy and safety.
  • In an article for the Washington Post, Deanna Paul reports on privacy issues that stem from college students using mental health apps to access care on campuses. Although standard medical settings provide protection for an individual’s health information and universities protect some educational information, the Health Insurance Portability and Accountability Act (HIPAA) “does not apply to user-generated data from the platforms, including reality checks, self-assessments and quizzes.” Nor does HIPAA protect “intimate information” like location, sleep, and movement data that an MMA may collect. Considering the fact that many of these apps’ privacy policies are “difficult to locate and challenging to understand,” Paul concludes that consumers are “essentially relinquishing privacy rights” when they use mental health apps.
  • Writing for BMC Medicine, Elena Rodriguez-Villa and John Torous advocate “dynamic and multi-stakeholder evaluation” of mental health apps to help fill the gaps left by FDA’s lax regulatory approach. They assert that a multi-regulator approach is particularly useful for mental health apps since these tools are often excluded from FDA evaluation despite their tendency to “appear medical” or “clinical” to a “reasonable consumer.” Although Rodriguez-Villa and Torous urge FDA to move forward in developing a more comprehensive regulatory scheme for mental health apps, they also propose developing a “self-certification checklist” to use in the meantime. In this scheme, developers would answer a set list of questions for each app that they create, and these answers would be made available for the public to view and vet. According to the authors, this approach “would increase transparency, engage diverse stakeholders in meaningful education and learning, and incentivize the design of safe and secure medical apps.”