The results of a six-month study on the impact of connected devices on chronically ill patients are in – and they’re not good.
The Scripps Translational Science Institute found that in a randomized controlled trial (RCT) comparing chronic care using connected devices against standard disease management models, the devices had no discernible impact on healthcare utilization, and little impact on health self-management.
Brian at Mobihealthnews has compiled a nice summary of the reaction from industry leaders on Twitter – equal parts defensiveness, dismissiveness, and even some smug satisfaction given today’s hype around these devices. Several pundits, including study author Dr. Eric Topol himself, have offered up insights (and excuses) as to what’s happening here: the study wasn’t long enough, the data are inconclusive, the technology has evolved since 2012, etc.
We’ll offer another explanation.
Simply put, what works for clinical whitecoats and industry whitepapers won’t always work amidst the vivid colors of the real world. Digital Hypemen take note: just strapping technology onto a patient will not make them healthier. Our new connected health report finds there have been several successful studies (see the infographic to the right) – but few successfully scaled programs. The single most salient insight for me personally was that vendors and their customers play hot potato when it comes to taking a leadership role in onboarding patients and customizing remote patient monitoring (RPM) “kits” to fit into patients’ lives. The dismal outcomes of the Scripps study are confirmation that this may be the linchpin of successful RPM deployments moving forward. We looked at the Scripps study in more detail and uncovered a few specific areas where the study design came up short.
The Limitations of Claims Data
Claims data remind us of the old joke about the man on the street looking for his keys under a streetlamp. A second person passing by offers assistance, asking if he dropped them nearby. The first man responds that no, he didn’t – but it’s well-lit on this side of the street, so he’s looking for them here.
People choose to see a doctor based on a multitude of reasons, many of which have nothing to do with their health status: time, schedules, costs, locations, and a growing list of convenient alternatives. How many of us simply ask a doctor in the family instead (or these days, the IT geek or insurance wonk)? Even if we set aside the growing financial disincentive to use health insurance, we’ve moved into an era of invisible, ubiquitous utilization – of Minute Clinics, telehealth, YouTube, WebMD, and Dr. Oz. Health, illness, and the rest of life happen outside of the facility – isn’t that the whole point of remote patient monitoring? That researchers limited themselves to measuring only in-facility utilization is a head-scratching oversight.
What was the impact on study participants’ grocery store purchases, or what they bought for lunch at work? On what else they bought at the pharmacy when they renewed their prescriptions? On visits to the gym? On their behavior on a rainy day? On what they searched or bought online, or shared on social media? In 2016, researchers deserve a SMAC (social, mobile, analytics, and cloud) the next time they propose studying patient engagement using claims data.
Ignoring Patient Workflow
Participants were provided study phones, rather than allowed to use their own phones. A smartphone is not the same as YOUR smartphone. Our phones are an extension of ourselves, inextricably linked with our behavior, moods, and decisions for hours and hours every day. The baseline for impact on “real” patient behavior is already off. If you were in a study and you wanted to Google something health-related, would you use the phone they gave you, or your own phone? Were disease management staff contacting people on their study phone, or on their regular mobile? While this may be a small issue considering what the study was assessing, to us it seems a missed opportunity to understand real world behavior. We’re thrilled to see companies like Sherbit partnering with Harvard Medical School researchers to explore how real world smartphone data can play a role in improving chronic disease care outcomes.
Requiring patients to set up and log into a third-party portal (Qualcomm’s HealthyCircles) on a third-party phone was another left turn. Remote monitoring is not about the devices – it’s about stitching them seamlessly into the fabric of patients’ everyday lives. Put differently, this is a workflow problem exacerbated by software. We suspect patients had two (or more) portals in addition to the one designed for the study – one from their insurer, and one from their doctor. In the context of a digital health study, it’s safe to say the petri dish was probably contaminated. It’s only the occasional vendor, like DatStat, who’s trying to bring a seamless, mobile approach to patient data collection from the world of research into clinical care.
Pushing educational content to patients through a designated online portal is a mediocre, outdated approach. A better move might have been to obtain consent to track their smartphone or web searches that contained certain keywords (e.g. diet, sleep, carbs, salt, blood pressure, etc.) and combine this with an automated push of relevant content. This hypothetical opportunity is the type of unique advantage that researchers should aim to leverage in studies like this one. When will vendors and doctors start meeting patients where we are (location, but also era)? We’re hoping Apple’s current hiring spree bodes well for these challenges.
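To make the hypothetical concrete, the keyword-triggered content push described above could be sketched as a simple matching step over consented search queries. This is purely illustrative – the keyword map, topic slugs, and function names below are our own inventions, not anything from the study or a real vendor API:

```python
from collections import Counter

# Hypothetical mapping from search keywords to educational content topics.
# A real program would maintain this per condition, curated by clinicians.
KEYWORD_TOPICS = {
    "salt": "sodium-and-hypertension",
    "sodium": "sodium-and-hypertension",
    "carbs": "carb-counting-basics",
    "sleep": "sleep-and-blood-sugar",
    "blood pressure": "home-bp-monitoring",
}


def match_topics(search_query: str) -> set:
    """Return the content topics triggered by a single (consented) search query."""
    query = search_query.lower()
    return {topic for keyword, topic in KEYWORD_TOPICS.items() if keyword in query}


def recommend_content(queries: list) -> list:
    """Aggregate topics across recent queries, most frequently triggered first."""
    counts = Counter(topic for q in queries for topic in match_topics(q))
    return [topic for topic, _ in counts.most_common()]
```

For example, a week of searches like `["low salt recipes", "blood pressure after salty meals"]` would surface the sodium/hypertension topic first, followed by home blood-pressure monitoring – the kind of just-in-time education a static portal never delivers.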
Completely Missing the Point of Sharing Data with Patients
If we want to educate and activate patients, why do data still look like this?
This is an old soapbox for us at Chilmark Research; I’ve complained about my useless lab test results at length before. To his credit, Dr. Topol has acknowledged that the data visualization could have been better. But we are frustrated to see that over the last five or six years, the typical clinical portal – EMR, remote monitoring, care management, or other – has hardly changed how it presents data. We must stop throwing low-value data in patients’ faces without making them easy to understand – and, medical decorum be damned, even stimulating and fun. The approach taken by industry, and by the Scripps researchers, is tantamount to leaving the patient out of the equation rather than bringing them on board the care team. Worst of all – this is the lowest-hanging fruit. There are high school web developers who can juice up these visuals to make them slick and compelling, and college pre-meds who can translate these graphs into English. We simply must get better.
For this research, the failure to get this small, simple piece right may have been the study’s biggest flaw. Of course, we openly acknowledge the whole point of research – to learn, and nudge the industry forward step by step, making one mistake at a time in order to improve. This was a groundbreaking study if for no other reason than it pointed out that we have a lot of work left to do.
But when are we going to do it? As long as we insist on measuring data without leveraging its potential to augment patient behavior – and monitor patients without engaging them – we’re buzzing about wireless with one hand wired behind our back.
Editor’s Note: Naveen makes some excellent points, but one other I would add is how the use of RPM enhances patient satisfaction. This is one area where RPM has shown some positive results in past studies. Rather than going to a doctor’s office to have their biometrics recorded, patients do it from home. For many, this is a huge benefit that is often overlooked. -J. Moore