HIMSS18 Review 1 of 3
By Ken Kleinberg, Brian Murphy, and Brian Edwards
During Eric Schmidt’s opening keynote at HIMSS18, he asserted that, given the state of algorithms today, it’s possible to take any large data set and make strong predictions – and healthcare is no exception. There’s no need for clinical content knowledge, rules, or past experience. His statements were met with plenty of skepticism – less about the capabilities of Alphabet and its algorithms, and more about the realities of gaining access to the right healthcare data sets. This is not trivial.
So who should get data from whom? What about patient consent? Who can be trusted? Historically, health systems have done their own analytics and research within the boundaries of their own systems. Vendor analytic solutions were implemented on site. Even this limited scenario presented complex challenges – in particular, just bringing data together to a point where it could be analyzed. Transferability of models was difficult, and costs were not shared. The use of analytics was therefore sparse, mostly limited to research and quality improvement.
Slowly but surely, the situation is changing. The cloud is becoming much more accepted, with many options possible (private, public, hybrid). Algorithms, enabled by rapidly advancing hardware/computing power, are capable of dealing with much larger and more complex data sets. Data operating system approaches can stream data in a liquid fashion from multiple locations/sources, reducing the need for centralized repositories.
A next step as data becomes more available is to fully utilize it. Advances in natural language processing (NLP) make it possible to extract and mine useful features from unstructured data such as text, faxes, and reports. Algorithms can increasingly use incomplete, messy, or ill-defined data and “fill in the blanks.” At a certain scale, data quality becomes less of a limiting factor in conducting analytics.
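To make the idea of feature extraction concrete, here is a deliberately minimal sketch – a single rule-based pattern pulling drug/dose mentions out of a free-text note. The pattern and note are hypothetical; real clinical NLP relies on trained models and terminologies (such as RxNorm), not one regex.

```python
import re

# Hypothetical, illustrative pattern only: matches "<drug> <number> <unit>"
# mentions such as "metformin 500 mg" in free-text clinical notes.
DOSE_PATTERN = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>mg|mcg|g|units)",
    re.IGNORECASE,
)

def extract_doses(note: str) -> list:
    """Return structured drug/dose features found in unstructured text."""
    return [m.groupdict() for m in DOSE_PATTERN.finditer(note)]

note = "Pt started on metformin 500 mg BID; lisinopril 10 mg daily."
print(extract_doses(note))
# [{'drug': 'metformin', 'dose': '500', 'unit': 'mg'},
#  {'drug': 'lisinopril', 'dose': '10', 'unit': 'mg'}]
```

The point of the sketch is the output shape: unstructured narrative goes in, analyzable structured features come out.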
Whose Black Box?
There is also a lot of healthy discussion about how AI systems make decisions. The primary concerns are black-box algorithms and a lack of data transparency. This even reached a point where a major educational institution recently recommended that governments not rely on any AI or algorithmic systems for “high stakes” domains, such as healthcare technology, where the way a system makes a decision cannot be understood in terms of due process, auditing, testing, and accountability standards.
Despite the black-box nature of these AI systems, they can still be validated using objective methods, such as how and what they were trained upon, and how they perform in real-world clinical scenarios vs. human performance. As long as they are not used in closed-loop systems, and as long as there is a human expert able to accept or dismiss their recommendations, they provide valuable input (before or after the clinician’s own judgment) for difficult cases (such as whether surgery or therapy is the best course of action). They may also serve as a last resort with the proper consent (as with a terminal cancer patient).
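One common form of such objective validation is to score the model and human readers against the same labeled cases, without ever opening the black box. The sketch below uses made-up labels purely to show the comparison mechanics.

```python
# Illustrative only: validate a black-box model by comparing its
# sensitivity/specificity to a human reader on the same ground truth.
# All labels below are fabricated for demonstration.

def sens_spec(preds, truth):
    """Return (sensitivity, specificity) for binary predictions."""
    tp = sum(1 for p, t in zip(preds, truth) if p and t)
    tn = sum(1 for p, t in zip(preds, truth) if not p and not t)
    fp = sum(1 for p, t in zip(preds, truth) if p and not t)
    fn = sum(1 for p, t in zip(preds, truth) if not p and t)
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 1, 0, 0, 0, 1, 0]   # adjudicated ground truth
model = [1, 1, 0, 0, 0, 1, 1, 0]   # black-box model output
human = [1, 0, 1, 0, 0, 0, 1, 1]   # human reader's calls

print("model:", sens_spec(model, truth))  # (0.75, 0.75)
print("human:", sens_spec(human, truth))  # (0.75, 0.75)
```

The comparison says nothing about how the model reasons – only whether it performs at least as well as the human baseline, which is the standard being argued for here.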
All of that is highly promising – but is healthcare ready to hand this data over?
Flat FHIR and Analytics
Percolating below the surface at HIMSS was disquiet about the “bulk data transfer” proposal. This proposed method would make large sets of data (think cohort-level data) more freely available for analytics purposes. It would allow a user or program in one organization to issue a broadcast query to the country at large and receive patient data from other organizations. For example, an ACO quality manager could issue a query to a community and get all of the relevant data for patients in the ACO.
This proposal, also known by the name “Flat FHIR,” is part of the TEFCA discussion (Trusted Exchange Framework and Common Agreement) insofar as such queries are a contemplated use case. But among people who are paying attention to this proposal, there are unanswered questions.
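For readers unfamiliar with the mechanics, the draft Flat FHIR (FHIR Bulk Data Access) specification defines an asynchronous “kick-off and poll” pattern. The sketch below builds such a kick-off request for a group-level export; the base URL and group id are hypothetical, and a real client would also handle authorization and error responses.

```python
# Sketch of a FHIR Bulk Data "$export" kick-off request (draft Flat FHIR
# spec). The base URL and group id are hypothetical examples.

def build_export_request(base_url, group_id, since=None):
    """Build the URL and headers for a group-level $export kick-off."""
    url = f"{base_url}/Group/{group_id}/$export"
    if since:
        # Restrict to resources updated after this instant (FHIR _since).
        url += f"?_since={since}"
    headers = {
        "Accept": "application/fhir+json",  # FHIR JSON responses
        "Prefer": "respond-async",          # export runs asynchronously
    }
    return url, headers

url, headers = build_export_request("https://fhir.example.org/R4", "aco-cohort-1")
print(url)  # https://fhir.example.org/R4/Group/aco-cohort-1/$export

# A real client would then:
#   1. Issue the GET; the server replies 202 Accepted with a Content-Location.
#   2. Poll that status URL until it returns 200 with links to NDJSON files.
#   3. Download each file into the analytics environment.
```

Even this simple pattern makes the capacity concern below tangible: one small request can obligate the receiving server to assemble and serve an entire cohort’s worth of data.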
Open-ended queries arriving from anywhere in the healthcare system are not currently part of most HCOs’ IT capacity plans:
- Will organizations end up having to add more compute and network resources to satisfy such queries?
- Does TEFCA’s requirement to provide non-discriminatory access mean that organizations will not be able to implement reasonable network traffic and quality-of-service controls?
- If patients have different consent profiles in different organizations, how should a query recipient satisfy the request?
- Will organizations have to establish revenue share agreements based on pro-rata data contributions?
- Will the fact that TEFCA puts the onus on the query receiver to reconcile medications, allergies, and problem lists mean that the receiver must verify that its data is current with proximate organizations before satisfying the original query?
Such questions represent the tip of the iceberg. As a practical matter, before the bulk data transfer proposal can ever be a day-to-day reality, many technical, non-technical, and financial questions must be resolved.
Despite these questions, bulk data access will be critical for the industry to move beyond the current artisanal methods of building and maintaining data stores for analytics purposes. After all, analytics has more to offer than the dashboards and reports that describe the recent past. HIMSS18 was less a venue to air out the challenges associated with making bulk data transfer a reality than it was an opportunity to preview some of the advanced and predictive analytics use cases it could enable.