Over the course of the last 18 months, artificial intelligence (AI) has matured to the point where there are several viable vendor options for nearly every use case.
AI dominated every aspect of the annual gathering of the Radiological Society of North America (RSNA18) in Chicago. The number of self-described ‘machine learning’ vendors on the conference floor more than doubled, from 49 in 2017 to over 100 in 2018, 25 of them first-time exhibitors.
I moderated a panel hosted by Life Image on practical use cases of imaging AI and was blown away by the conversation that ensued, particularly what I learned about how veteran radiologists feel about being “replaced.” During the question period, a senior radiologist approached the microphone to address a comment made by a more junior radiologist on the panel that he interpreted as too pessimistic about the potential for AI. To paraphrase the elder: “Listen here sonny, you are too young to fully appreciate what you don’t know, and you don’t know how many mistakes you are truly making on a day-to-day basis. A 1-2 percent error rate due to fatigue alone. WE NEED AI to save us from ourselves.”
Not all old school radiologists are so optimistic: “When you’re going up the ride, you get excited,” noted University of Chicago radiologist Paul Chang during his workshop on AI. “But then right at the top, before you are about to go down, you have that moment of clarity—‘What am I getting myself into?’—and that’s where we are now. We are upon that crest of magical hype and we are about to hit the trough of disillusionment… It is worth the rollercoaster of hype. But I’m here to tell you that it’s going to take longer than you think.”
Last year, the major cloud vendors each had a significant footprint at RSNA, but this year the two largest, Amazon and Microsoft, were nowhere to be found. Only Google Cloud had a significant, if smaller than last year’s, presence. Donny Cheung, one of the Google Cloud team leaders, was on the panel I moderated and his message to the imaging community could be boiled down to two words: storage and compute. No dashboards or toolkits or tensorflowing, just storage and compute, a smart and refreshing strategy amidst the obvious feature creep many other vendors suffer from.
While it was surprising that Amazon had no noticeable presence, it was even more surprising to find Facebook making news on the conference floor. Facebook AI Research (FAIR) has partnered with the Center for Advanced Imaging Innovation and Research (CAI2R) in the Department of Radiology at NYU School of Medicine and NYU Langone Health to release fastMRI, an open-source dataset for training and testing machine learning algorithms that reconstruct MRI images.
This offering is roughly equivalent to the X-ray and CT datasets already released by the NIH. Given that algorithms reliably perform far better against the data used to train them than against new data, what the industry really needs is independent validation of AI claims, so it is unlikely that Facebook moves the needle with this offering.
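The gap between training-set and new-data performance is easy to demonstrate. Here is a minimal sketch of my own (a toy task and a deliberately memorizing 1-nearest-neighbour "model," not any vendor's actual algorithm) showing why performance measured against training data flatters the model:

```python
import random

random.seed(0)

# Toy task: classify a point on [0, 1] as 1 if x > 0.5, with 20% label noise
# standing in for the inherent noisiness of real clinical data.
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        label = 1 if x > 0.5 else 0
        if random.random() < 0.2:  # flip 20% of labels
            label = 1 - label
        data.append((x, label))
    return data

train, test = make_data(100), make_data(100)

# "Model": 1-nearest-neighbour, i.e. pure memorization of the training set.
def predict(x):
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

train_acc, test_acc = accuracy(train), accuracy(test)
# Memorization scores perfectly on the data it was trained on,
# but markedly worse on data it has never seen.
print(train_acc, test_acc)
```

Evaluated on its own training set the model is flawless; evaluated on held-out data its accuracy drops sharply, which is exactly why claims validated only against training (or training-adjacent) data deserve skepticism.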
PACS vendors want in on the AI action and are positioning their existing products as AI marketplaces or platforms (Philips HealthSuite Insights, PureWeb, LifeImage, GE Edison, FujiFilm REiLI, Nuance AI Marketplace, Blackford Analysis). Nuance has shown there is a viable market for these platforms, counting 40 startups and health systems among the user groups for its marketplace. There is no shortage of startups taking this approach (MDW, Envoy.ai, Medimsight, Lify, Fovia). Imaging hardware vendors did not want to be left out either, with many partnering with AI vendors to embed algorithms at the “edge.”
International AI startups, particularly from Israel, China, and South Korea, stood out from the crowd in terms of their approach to product design, but only the companies from Israel have been able to break into the US market so far. One Korean company voiced frustration with the FDA, saying it could not understand what was wrong with its application. I wonder whether it underestimated the importance of validating its algorithms with data from US patients.
Not everything we learned about AI at RSNA was positive. A paper presented at the conference showed that neural networks could be used to insert malignant features into mammograms, producing false positives, and then reverse the alterations without detection. Even scarier, it took only about 680 images to train the algorithm that executed the adversarial attack. Cyberattacks in healthcare have been increasing over the last couple of years, but mostly as ransomware: taking data hostage and demanding payment to decrypt it. This type of attack would represent a frightening new paradigm in cyber-vulnerability, and it is not difficult to imagine ways it could be exploited for money. It could enable a different sort of ransom, with every image appearing to show cancer until payment is made and the adversarial attack is reversed. It could also conceivably be used to falsify data for clinical trials.
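For readers unfamiliar with how adversarial attacks work, here is a minimal sketch of the core idea, a gradient-sign perturbation against a toy linear classifier (my own illustration, not the mammography network or method from the paper). A tiny, carefully directed change to every pixel flips the model's output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "scan" classifier: a linear model over 64 pixel features.
w = rng.normal(0, 1, 64)  # pretend these weights were learned

def predict(x):
    """Return 1 ('malignant') if the decision score is positive, else 0."""
    return int(x @ w > 0)

# A benign image the model correctly scores as negative.
x = -0.1 * np.sign(w) + rng.normal(0, 0.01, 64)

# Gradient-sign attack: nudge every pixel a small step eps in the
# direction that increases the malignant score (the sign of the
# gradient of the score, which for a linear model is just sign(w)).
eps = 0.2
x_adv = x + eps * np.sign(w)

# Each pixel moved by at most eps, yet the prediction flips.
print(predict(x), predict(x_adv))
```

Real attacks on deep networks are more involved (the gradient must be computed through the network, and the perturbation constrained to stay imperceptible), but the principle is the same: many imperceptibly small pixel changes, all pushing in the model's most sensitive direction, add up to a confident wrong answer.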
Unlocking Healthcare’s Big Data with NLP-powered Ambient and Augmented Intelligence
It wouldn’t be a radical statement to say that NLP bridges the human-computer divide more than most technologies. Yet ROI has been elusive, leaving prospective adopters reluctant to embrace it despite the numerous opportunities for NLP-driven solutions. NLP technologies have now reached an inflection point with the emergence of deep learning methods that are on par with humans for an ever-growing list of core natural language skills, such as speech recognition and question answering. In our newest report, Natural Language Processing: Enabling the Potential of a Digital Healthcare Era, we profile 12 vendors, all with a track record in text mining and speech recognition: 3M, Artificial Intelligence in Medicine (Inspirata), Clinithink, Digital Reasoning Systems, Health Catalyst, Health Fidelity, IBM Watson Health, Linguamatics, M*Modal, Nuance, Optum, and SyTrue. Each has a reputation for delivering solutions that serve a particular set of use cases or customer groups, distinctions we capture using heat maps for each company.
NLP is particularly well suited to address two huge problems in healthcare: easing the clinical documentation burden for clinicians and unlocking insights from unstructured data in EHRs. Documentation consumes an ever-increasing portion of clinicians’ time. Recent research has shown physicians spend as much as half of their work day (six hours of a 12-hour shift) in the EHR. Another recent study showed clinicians spend two hours on clinical documentation for each hour spent face-to-face with patients. Unsurprisingly, documentation burden is often cited as a key factor contributing to physician burnout. Ambient intelligence refers to passive digital environments that are sensitive to the presence of people, context-aware, and adaptive to the needs and routines of each end user. Virtual personal assistants (VPAs) such as Amazon’s Alexa and Google Assistant are familiar examples.
Speech recognition technology is approaching 99-percent accuracy, a milestone that some argue means that voice will become the primary way we interface with technology. I am skeptical of this prediction, at least when it comes to the broader utility of voice-based interfaces for consumers. The visual display, with its links and rich media, is an indispensable element of the modern digital experience.
Smart speakers, a primary input device for speech recognition, are the hottest technology trend of the moment, with an adoption curve steeper than even the smartphone’s (see graphic below from Activate). We expect the smart speaker to rapidly become a fixture in both home and office settings, following a similar path to maturity as the smartphone, offering applications for consumers and enterprises.
Interest and adoption in healthcare is already apparent. In September Nuance announced a smart speaker virtual assistant that uses conversational cloud-based AI (Microsoft Azure) to engage physicians during clinical documentation. In late November a post on the Google Research Blog described internal research and a pilot at Stanford investigating the potential to use a similar smart speaker interface and Automatic Speech Recognition (ASR) technology to create a virtual scribe.
Startups are taking on this problem too. Saykara, led by former executives from Nuance and Amazon, is developing a virtual assistant similar to Google’s, and claims to have far more advanced speech recognition technology than its heavyweight competitors. Others are developing ambient scribes to passively document patient encounters, including Suki.ai, Robin Healthcare, and Notable Health.
EHR vendors are also making investments in ambient intelligence. Epic has partnered with Nuance and M*Modal to embed their ambient scribe technology directly into clinical workflows. Allscripts and athenahealth have partnered with startup NoteSwift. eClinicalWorks has launched a virtual assistant called Eva, which is initially intended to respond to queries for things like recent lab data or past clinical note content.
Barriers remain on the road to ubiquitous adoption of NLP technology by healthcare enterprises. NLP offers HCOs a low-risk opportunity to experiment with advanced machine learning and deep learning technologies, but it is not the type of technology that can be implemented optimally by just any analyst in the IT department; it requires specialized expertise that is in short supply. While free text and mouse clicks will dominate the clinical documentation landscape in the near term, healthcare enterprises will soon expect their users to talk to their applications.