Guidance on Clinical Decision Support: Definitions and Transparency

Jan 30, 2020

"Diagnostics" button on the Metallic Keyboard lying on Green Background. 3D.Roughly two years after we reviewed the first FDA guidance on Clinical Decision Support (CDS), FDA issued new draft guidance. The FDA was criticized for not basing their first draft regulation criteria on use-case risk. Since then the International Medical Device Regulators Forum (IMDRF), which the FDA chaired, created a framework for organizing Software as a Medical Device (SaMD) by risk categories. While CDS is a very broad category that encompasses many functions, this guidance could have an outsize impact on the need to provide transparency in AI/ML platforms and algorithms for healthcare.

Guidance and regulation are sorely needed. This month's criminal settlement for Practice Fusion includes the admission that its CDS alerts were intentionally biased to increase prescriptions of 'sponsoring' pharmaceutical products, including opioids. Notably, only the kickbacks the company received were illegal, not the biased functionality itself. Unfortunately, this draft still offers vague terminology and fails to provide a clear breakdown of what kinds of technology it would regulate.

Key Takeaways

  • A regulatory framework for CDS is overdue. The single largest barrier providers and patients report in adopting and using CDS is trust, and clear regulation builds trust.
  • Many of the distinctions in the guidance are vague and potentially meaningless (e.g., the difference between “informing” and “driving” clinical care). The attempts to clarify these terms in the text only create more confusion and uncertainty.
  • More transparency about how software works is a good idea but may be hard to achieve: users need to understand how current ML produces recommendations, and developers need to know what their software must provide to make it "independently reviewable."
  • This draft is good progress, but the FDA should provide more clarity and structure about what constitutes regulated and non-regulated SaMD.


Device and Non-Device Software

The FDA provides a list of criteria that software must meet to qualify as Non-Device CDS, i.e., software that is effectively exempt from regulation as a device. All four criteria must hold, as the sketch after the list illustrates.

  1. It is not "intended to acquire, process, or analyze a medical image or a signal"
  2. It is "intended for the purpose of displaying, analyzing, or printing medical information"
  3. It is "intended for the purpose of supporting or providing recommendations to [a provider] about prevention, diagnosis, or treatment"
  4. It is "intended for the purpose of enabling a [provider] to independently review the basis for the recommendations that such software presents"

Criteria 1 and 2 imply that Non-Device CDS can analyze medical information, but cannot analyze an image or signal. However, a description of “software intended to analyze or flag patient results based on specific clinical parameters” is specifically flagged as a device function, even when “the analysis … summarizes standard interpretation of individual variables that healthcare practitioners could do themselves.” The FDA needs to provide much greater clarity in these distinctions.

Informing, Driving, and Treating

While vendors asked for a risk component, the IMDRF framework may add more complexity than many expected. It lays out three tiers of software recommendations:

  1. Informing clinical management
  2. Driving clinical management
  3. Treating or diagnosing

Software that falls into the latter two categories isn't defined as CDS at all, meaning it falls outside the CDS exceptions in the 21st Century Cures Act. The draft's definitions, that 'informing' means "to provide information, such as treatment or diagnostic options or aggregating clinical information [which] may support a recommendation" while 'driving' is "to guide next diagnostics or treatment interventions," seem arbitrary and subjective. The final category, where software "provide[s] the actual diagnosis or prompt[s] an immediate or near-term action," is where regulatory oversight should focus.
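As a rough illustration of how narrow the exception becomes under this reading, here is a hypothetical sketch (the enum and function names below are ours, not from the guidance):

```python
from enum import Enum

class RecommendationTier(Enum):
    """The three IMDRF-style tiers referenced in the draft guidance."""
    INFORMING = "informs clinical management"
    DRIVING = "drives clinical management"
    TREATING_OR_DIAGNOSING = "treats or diagnoses"

def remains_cds(tier: RecommendationTier) -> bool:
    """Hypothetical reading of the draft: only 'informing' software is
    still CDS, and thus eligible for the Cures Act exception."""
    return tier is RecommendationTier.INFORMING
```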

Black Boxes Are Bad, but They Are the Current Reality

The final criterion, reviewability, might be the most important. The FDA says it will choose not to exercise oversight of patient- and caregiver-facing products in the Non-Serious Risk category that allow independent review, making reviewability the sole differentiator.

What does that reviewability need to consist of? It should "describe the underlying data used to develop the algorithm and… include plain language descriptions of the logic or rationale used… to render a recommendation." The guidance says "the sources supporting the recommendation or… underlying the basis should be identified… available to … and understandable by the intended user." This is an excellent goal: transparency is essential for building users' trust and for allowing genuinely informed consent.

Bias in AI/ML algorithms is well documented, including on this very blog, and it presents an especially serious risk in a medical context. High-profile disasters at Boeing and Practice Fusion illustrate how essential it is that users understand the automation they rely on. Concerns about overloading patients and providers at the point of care are valid, but work on Explainable AI and similar initiatives shows that white boxes can be minimally intrusive in their visibility while still offering transparency. At the very least, vendors who maintain black-box algorithms will need to prepare themselves for oversight.
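One way to read the reviewability language is that every recommendation should ship with its basis attached. A minimal sketch of what such a payload might look like, with entirely hypothetical field names and clinical values:

```python
from dataclasses import dataclass

@dataclass
class ReviewableRecommendation:
    """Hypothetical payload for a CDS recommendation built to be
    independently reviewable by the intended user."""
    recommendation: str         # the suggested action, in plain language
    rationale: str              # plain-language logic behind it
    inputs_used: dict           # patient data the algorithm actually saw
    evidence_sources: list      # citations the user can check
    training_data_summary: str  # description of the development data

# Illustrative values only; not clinical advice.
rec = ReviewableRecommendation(
    recommendation="Consider statin therapy",
    rationale="LDL-C above guideline threshold with elevated 10-year risk",
    inputs_used={"ldl_mg_dl": 165, "ascvd_risk_pct": 9.2},
    evidence_sources=["2018 AHA/ACC cholesterol guideline"],
    training_data_summary="Model developed on de-identified claims data",
)
```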

This guidance is a good vision for what a CDS and AI/ML regulatory framework could look like, but both vendors and healthcare providers need to help it become more concrete.

