FDA Guidance on Clinical Decision Support: Peering Inside the Black Box of Algorithmic Intelligence

Dec 19, 2017

Key Takeaways:

  • The FDA has released long-anticipated draft guidance on how it intends to regulate clinical decision support products.
  • For applications with data originating from medical devices, the FDA will continue its oversight, AI or not (e.g., medical image processing).
  • Medical applications that rely on “black box” algorithms that cannot be fully understood by the end user (basically all AI) will be regulated, posing challenges for AI adoption.

Last week, the FDA finally released its long-awaited Draft Guidance on Clinical Decision Support. Following the release, STAT News reported that experts were disappointed because the agency gave no insight into how it views artificial intelligence. Indeed, a “Command+F” search for “Artificial Intelligence” returns zero results. However, the agency does not need to use the term “AI” to provide guidance on how it will treat the associated technologies and use cases. The FDA does use the word “algorithm” in its guidance, and although algorithms vary in sophistication, much of today’s AI technology is built on algorithmic intelligence. The suggestion that the FDA did not address the topic because it failed to explicitly mention AI within the document illustrates how challenging this complex subject is for those unfamiliar with it.


In fact, the FDA has been reviewing technology with AI components (e.g., rule-based systems, machine learning) for more than a decade. RADLogics received FDA approval for its machine learning application in 2012, widely considered the first AI for clinical use approved by the agency. HealthMyne received FDA clearance for its imaging informatics platform in early 2016. In 2017, at least half a dozen companies received FDA clearance for machine learning applications, including Arterys, the first company to receive approval for a deep learning application, and Butterfly Network, which had 13 different applications approved along with its “ultrasound on a chip” device in late October. Others to receive clearance in 2017 include Quantitative Insights, Zebra Medical Vision, EnsoData and iCAD.

The first indirect reference to products using AI comes in the first paragraph of Section III, where the agency begins addressing specific examples of software functions that will not be exempted from review. Note that the first sentence of the excerpt below is inclusive of nearly every application.

“Under section 520(o)(1)(E), software functions that are intended to acquire, process, or analyze a medical image, a signal from an in vitro diagnostic device, or a pattern or signal from a signal acquisition system remain devices and therefore continue to be subject to FDA oversight. Products that acquire an image or physiological signal, process or analyze this information, or both, have been regulated for many years as devices. Technologies that analyze those physiological signals and that are intended to provide diagnostic, prognostic and predictive functionalities are devices. These include, but are not limited to, in vitro diagnostic tests, technologies that measure and assess electrical activity in the body (e.g., electrocardiograph (ECG) machines and electroencephalograph (EEG) machines), and medical imaging technologies. Additional examples include algorithms that process physiologic data to generate new data points (such as ST-segment measurements from ECG signals), analyze information within the original data (such as feature identification in image analysis), or analyze and interpret genomic data (such as genetic variations to determine a patient’s risk for a particular disease).”
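The ST-segment example in that excerpt is worth making concrete. Below is a minimal, hypothetical sketch (illustrative only, not from the guidance or any cleared product) of an algorithm that processes physiologic data to generate a new data point. The window offsets and sampling assumptions here are simplifications chosen for the example.

```python
import numpy as np

def st_segment_deviation(ecg_mv: np.ndarray, fs: int, r_peak: int) -> float:
    """Estimate ST-segment deviation (in mV) for one beat.

    Hypothetical, simplified logic for illustration only:
    - baseline: mean amplitude over the PR segment (80-20 ms before the R peak)
    - ST level: mean amplitude 60-80 ms after the R peak
    """
    baseline = ecg_mv[r_peak - int(0.08 * fs): r_peak - int(0.02 * fs)].mean()
    st_level = ecg_mv[r_peak + int(0.06 * fs): r_peak + int(0.08 * fs)].mean()
    return float(st_level - baseline)

# Toy usage: a flat 1-second trace sampled at 500 Hz with an R peak at sample 250.
ecg = np.zeros(500)
print(st_segment_deviation(ecg, fs=500, r_peak=250))  # 0.0, i.e., no deviation
```

An algorithm like this is fully inspectable: every window and threshold is visible to the user. Contrast that with the undisclosed, proprietary algorithms in the examples that follow.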

The word “algorithm” is used four times in the document, and each use provides significant insight into the agency’s thinking. It first appears in the final sentence of the excerpt above, which provides general examples of algorithms that will continue to be reviewed as medical devices. A later section of the guidance provides the following more specific examples of algorithms that will continue to require premarket approval:

“Software intended for health care professionals that uses an algorithm undisclosed to the user to analyze patient information (including noninvasive blood pressure (NIBP) monitoring systems) to determine which anti-hypertensive drug class is likely to be most effective in lowering the patient’s blood pressure.

“Software that analyzes a patient’s laboratory results using a proprietary algorithm to recommend a specific radiation treatment, for which the basis of the recommendation [is] unavailable for the HCP to review.”

The agency goes on to describe the underlying features that must be present for an algorithmically driven CDS recommendation to be exempted from review. Specifically, a company must clearly state and make available:

  1. The purpose or intended use of the software function;
  2. The intended user (e.g., ultrasound technicians, vascular surgeons);
  3. The inputs used to generate the recommendation (e.g., patient age and gender); and
  4. The rationale or support for the recommendation.

The first three would seem reasonable enough for developers of AI products to provide to users, but the fourth is effectively impossible. The “black box” nature of most AI systems built with machine learning methods means that even leading AI experts cannot unpack an algorithm and fully understand the rationale for a given recommendation, even with full transparency and access to the training data (which is no trivial matter in and of itself).
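As a minimal sketch of the problem (assuming a scikit-learn style workflow; none of the names, inputs or data below come from the guidance or any real product): a vendor can readily disclose items 1 through 3 up front, but a trained black-box model offers no human-readable rationale to disclose for item 4. Aggregate feature importances are the closest available proxy, and they do not explain any individual recommendation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Items 1-3 of the exemption criteria are easy to state up front
# (all names and values here are hypothetical).
DISCLOSURE = {
    "intended_use": "suggest an anti-hypertensive drug class",
    "intended_user": "primary care physicians",
    "inputs": ["age", "sex", "systolic_bp", "diastolic_bp"],
}

# Stand-in synthetic training data; a real product would use clinical records.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 2] + 0.5 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Item 4 is where the black box bites: the recommendation emerges from
# hundreds of trees, and no concise, clinically reviewable rationale exists.
patient = X[:1]
print("recommendation:", model.predict(patient)[0])
print("importances:", dict(zip(DISCLOSURE["inputs"],
                               model.feature_importances_.round(3))))
```

Even with the model weights and the training data in hand, the printed importances describe the model in aggregate, not the basis for this patient’s recommendation, which is exactly the gap the guidance’s fourth criterion exposes.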

This is especially clear in light of additional guidance provided elsewhere in the document regarding software functions that will require oversight:

“A practitioner would be unable to independently evaluate the basis of a recommendation if the recommendation were based on non-public information or information whose meaning could not be expected to be independently understood by the intended health care professional user.”

Frankly, the agency provided considerable insight and clarity if you read the document as inclusive of all known AI technologies today. The conclusion is clear: nearly all AI will remain under FDA oversight. However, there are terms that could be used in the final guidance that are not mere buzzwords, such as machine learning, supervised learning and unsupervised learning, among others. It would be useful for the agency to offer meaningful reference to machine learning or deep learning among the examples of potential use cases that remain under oversight as medical devices.

In Chilmark’s annual predictions for 2018, we forecast that two dozen companies will receive FDA clearance for products using AI, machine learning, deep learning and computer vision, roughly a fourfold increase from 2017. It would be helpful if the agency created a dedicated channel for engaging companies developing AI products, and perhaps even provided guidance on how it evaluates training data sets.

1 Comment

  1. Adrian Gropper, MD

    Nice job, Brian. Black-box algorithms are indistinguishable from snake oil. Making clinical medicine secret and inaccessible to peer review would bring us back to the early 1900’s. FDA guidance would do well to treat open source software differently from secret software and this draft guidance is a step in the right direction.

