Our latest report, AI and Trust in Healthcare, is available now! This report examines an increasingly important issue in healthcare: the emergent problems of implementing AI, particularly around building trust and eliminating bias.
Senior Analyst Jody Ranck dropped by to give us some perspective on the report and what it entails. Check out the Chilmark Research YouTube channel for more videos on the future of healthcare tech!
View the video and the transcript below:
Jody Ranck: [00:00:00] Welcome to the Chilmark Channel. I’m Jody Ranck. I’m a senior analyst with Chilmark Research. And today I’m going to talk about our upcoming report on trust and AI in health care. But before we begin, make sure you follow our channel. Give us a like, leave a comment and don’t forget the notification bell.
Jody Ranck: [00:00:22] Trust has become an important form of social capital in the technology marketplace, particularly in health care. As we’ve seen with COVID and immunizations, a lack of trust in new technologies can significantly hinder their success. So at Chilmark, we’ve decided to do a deep dive into the basis of trust for AI. In the last several years, we’ve seen a number of problematic models enter the marketplace: some that had racial bias, some that recommended the wrong medications, and so forth. All of this can damage the reputation of, and trust in, new technologies such as AI.
Jody Ranck: [00:01:05] So in our report, we are going to cover a number of things that developers of AI tools and services can address, from implicit or racial bias in algorithms to how to explain the way a model works to a payer, a provider, or a patient. These are things that individual organizations can do. But the biggest part of our report looks at what the industry itself needs to do.
Jody Ranck: [00:01:35] This goes beyond any single company, and we talk about things such as this intangible economy, investing in institutions and and practices that can help enable better products and services to reach the market. So the reason why we’re focusing on the industry collaborations in this intangible economy is: we see the FDA lacks the budget to address many of the emerging ethical and patient safety issues that matter most.
Jody Ranck: [00:02:09] So in our report, we cover a number of things that the industry will need to do together to ensure better AI products reach the market without the risk of harming patients. And how do we create better transparency so that people understand where the the data come from that underlie this model, what certifications we have for explainability, bias mitigation, and other forms of risk, as well as the impact on overall health equity. Health equity has become a major goal of the system in recent years, and AI has the ability to undermine that if done improperly. So we hope this report can help stimulate a much wider debate about AI and the marketplace and things that we need to do together to address some of the risks of AI in our present health care space.
Jody Ranck: [00:03:08] One example of an area that we cover in the report around bias is the issue of the dimensionality of data. Digital health data is very complex, and we often see that once an AI model is introduced into the market, its performance begins to decrease, and it can actually do harm. This is due to what’s called the curse of dimensionality. When you train a model on a training data set, there can be blind spots in that data. Then, when you introduce that model into the market and expose it to a bigger population, those blind spots can grow exponentially.
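The blind-spot intuition can be made concrete with a quick back-of-the-envelope calculation (an illustrative sketch in Python, not code from the report; the function name is our own). With uniformly spread data, a neighborhood that captures a fixed fraction of the data must span nearly the entire range of every feature as the number of features grows, so a fixed-size training set inevitably leaves large unexplored regions of the feature space.

```python
# Sketch: why blind spots grow with dimensionality ("curse of dimensionality").
# For data spread uniformly over a unit hypercube, a sub-hypercube enclosing a
# fixed fraction of the data must have edge length fraction**(1/dims). As dims
# grows, that edge approaches 1: any "local" region a model has actually seen
# during training covers almost none of the space unless it spans nearly the
# full range of every feature.

def edge_for_volume_fraction(fraction: float, dims: int) -> float:
    """Edge length (as a share of each axis's range, between 0 and 1) of a
    hypercube enclosing `fraction` of a unit hypercube's volume in `dims`
    dimensions."""
    return fraction ** (1.0 / dims)

# To capture just 10% of the data:
for d in (1, 2, 10, 50, 100):
    edge = edge_for_volume_fraction(0.1, d)
    print(f"{d:>3} features -> neighborhood must span {edge:.1%} of each axis")
```

In one dimension, a neighborhood covering 10% of each axis suffices; with 100 features, it must span roughly 98% of every axis, which is no longer "local" at all. Inverted, a training set concentrated in one region of a high-dimensional feature space leaves exponentially more of that space unseen, which is the blind spot that surfaces when a model meets a broader population.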
Jody Ranck: [00:03:50] So we look at how developers of AI algorithms can address this curse of dimensionality and hopefully improve performance over time. This is a major challenge with the current state of AI in health care. To stay up to date on the status of the report, subscribe to our newsletter; the link is in the description down below. If you’re new or lurking,
Jody Ranck: [00:04:15] welcome to the Chilmark family. If you have any comments or questions, leave them below and we will help you out. Thank you for watching.