- Natural Language Processing (NLP) is an increasingly low-cost, low-risk way for healthcare enterprises to experiment with machine learning and deep learning technologies.
- Healthcare organizations (HCOs) can use ambient intelligence to unlock insights from the 80% of clinical data captured in an unstructured format.
- Ambient voice technology has seen faster adoption than any other consumer technology before it, indicating potential for high rates of acceptance, utility, and efficacy in healthcare.
NLP arguably bridges the human-computer divide more than most technologies. Yet ROI has been elusive, leaving prospective adopters reluctant to embrace it despite the numerous opportunities for NLP-driven solutions. NLP has now reached an inflection point with the emergence of deep learning methods that match human performance on an ever-growing list of core natural language skills, such as speech recognition and question answering. In our newest report, Natural Language Processing: Enabling the Potential of a Digital Healthcare Era, we profile 12 vendors, all with a track record in text mining and speech recognition: 3M, Artificial Intelligence in Medicine (Inspirata), Clinithink, Digital Reasoning Systems, Health Catalyst, Health Fidelity, IBM Watson Health, Linguamatics, M*Modal, Nuance, Optum, and SyTrue. Each has a reputation for delivering solutions that serve a particular set of use cases or customer groups, distinctions we capture in heat maps for each company.
NLP is particularly well suited to two huge problems in healthcare: easing the clinical documentation burden on clinicians and unlocking insights from unstructured data in EHRs. Documentation consumes an ever-increasing share of clinicians' time. Recent research has shown physicians spend as much as half of their workday (6 hours of a 12-hour shift) in the EMR, and another recent study found clinicians spend two hours on clinical documentation for every hour spent face-to-face with patients. Unsurprisingly, documentation is often cited as a key factor contributing to physician burnout.

Ambient intelligence refers to passive digital environments that are sensitive to the presence of people, context-aware, and adaptive to the needs and routines of each end user. Virtual personal assistants (VPAs), such as Amazon's Alexa and Google Assistant, are familiar examples.
Speech recognition technology is approaching 99 percent accuracy, a milestone that some argue will make voice the primary way we interface with technology. I am skeptical of this prediction, at least regarding the broader utility of voice-based interfaces for consumers. The visual display, with its links and rich media, remains an indispensable element of the modern digital experience.
Smart speakers, a primary input device for speech recognition, are the hottest technology trend of the moment, with an adoption curve that exceeds even the smartphone's (see graphic below from Activate). We expect the smart speaker to rapidly become a fixture in both home and office settings, following a maturity path similar to the smartphone's and offering applications for both consumers and enterprises.
Interest and adoption in healthcare are already apparent. In September, Nuance announced a smart speaker virtual assistant that uses conversational, cloud-based AI (Microsoft Azure) to engage physicians during clinical documentation. In late November, a post on the Google Research Blog described internal research and a pilot at Stanford investigating a similar smart speaker interface and Automatic Speech Recognition (ASR) technology to create a virtual scribe.
Startups are taking on this problem too. Saykara, led by former executives from Nuance and Amazon, is developing a virtual assistant similar to Google's and claims far more advanced speech recognition technology than its heavyweight competitors. Others are developing ambient scribes to passively document patient encounters, including Suki.ai, Robin Healthcare, and Notable Health.
EHR vendors are also investing in ambient intelligence. Epic has partnered with Nuance and M*Modal to embed their ambient scribe technology directly into clinical workflows. Allscripts and athenahealth have partnered with startup NoteSwift. eClinicalWorks has launched a virtual assistant called Eva, initially intended to respond to queries for things like recent lab data or past clinical note content.
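To make the query-response pattern concrete, here is a minimal sketch of how a virtual assistant like Eva might map a transcribed spoken query to an intent after ASR has produced text. The intent names and matching rules are hypothetical illustrations for this post, not any vendor's actual implementation.

```python
# Hypothetical intent matcher for an EHR virtual assistant.
# The intents and regex rules below are illustrative assumptions only.
import re

INTENT_PATTERNS = {
    "recent_labs": re.compile(r"\b(recent|latest|last)\b.*\blabs?\b"),
    "past_notes": re.compile(r"\b(past|previous|prior)\b.*\bnotes?\b"),
}

def classify_query(transcript: str) -> str:
    """Return the first intent whose pattern matches the transcribed query."""
    text = transcript.lower()
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(text):
            return intent
    return "unknown"
```

A production assistant would of course use a trained NLP model rather than keyword rules, but the basic pipeline is the same: speech in, text out, intent resolved, data retrieved from the EHR.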
Barriers remain on the road to ubiquitous adoption of NLP technology by healthcare enterprises. NLP offers HCOs a low-risk opportunity to experiment with advanced machine learning and deep learning technologies, but it's not a technology that just any analyst in the IT department can implement optimally; it requires specialized expertise that is in short supply. While free text and mouse clicks will dominate clinical documentation in the near term, healthcare enterprises will soon expect their users to talk to their applications.