AI and Trust: The Tipping Point

Mar 13, 2023


Latest podcast episode explores frameworks to develop trust in new algorithmic tools

Artificial intelligence applications have seemingly been on the verge of widespread implementation for years, across many industries including healthcare. But even as emerging technologies like ChatGPT begin to dominate the public discourse, parallel healthcare applications still struggle with both reputation and implementation.

As public confidence in the healthcare system continues to decline, it will be difficult to implement any AI applications effectively before a foundation of trust is built between patients and health systems. How do we get to a place where we once again trust our care providers and, by extension, the technologies they deploy?

Dr. Jody Ranck’s recent Market Scan Report “AI and Trust in Healthcare” for Chilmark Research looks at these issues and more, which he discusses with Chilmark Analyst Elena Iakovleva in the latest episode of ChilCast: Healthcare Tech Talks.

Elena Iakovleva: [00:00:10] Hello to all Chilmark friends and ChilCast listeners. My name is Elena Iakovleva and I'm a research analyst with Chilmark Research. This is our first ChilCast of 2023, and I want to wish you all a happy and prosperous New Year. Today I am meeting with Dr. Jody Ranck, our senior analyst, to discuss his AI and Trust report, which made it into Health IT News' top ten stories of 2022. Congratulations, that's a huge achievement, and I am eager to learn more about it.

Jody Ranck: [00:00:51] Thanks. I’m excited to talk about it with you. Wonderful.

Elena Iakovleva: [00:00:55] So let’s begin with why trust now? And where are we with AI in health care?

Jody Ranck: [00:01:02] One of the reasons we decided to do this report: let's stand back and look at health care in general and trust, not just AI. As we move to a more distributed health care system, trust also becomes distributed, as we adopt more digital health technologies and so forth. And in society in general, we've had a transition over the last several decades. The focal point of people's trust in the health care system used to be their relationship with their doctor. Now it's with a system.

Jody Ranck: [00:01:41] So trust is more diffuse, and it's becoming more distributed because of the nature of the technology. When we get to AI itself, there's growing awareness of what AI is, or isn't in some cases, and worries about the potential harm AI can cause, the risks when algorithms make decisions for us, and where the human oversight sits. And we've had a number of missteps with algorithms used to refer people to disease management programs, or for sepsis detection and so forth. It turned out they either didn't work very well once taken to greater scale, or they contained things like racial and gender bias. There are a lot of those kinds of historical artifacts from our data, and from the way clinical research has been done, that get integrated into AI models. If you then scale up these models, you're scaling up the bias.

Elena Iakovleva: [00:02:45] Wonderful, Jody. Thank you. It really seems like a super hot topic right now. And could you please talk a little bit about the broader community and what has been going on around the ethics of AI to improve the situation?

Jody Ranck: [00:03:00] Sure. Over the last two to three years, maybe a little longer, we've seen everyone from the Vatican to the World Economic Forum to the FDA put out ethical principles to serve as guardrails for AI; there's been a real proliferation of them. Having some guardrails and broad principles is a good thing. There's nothing wrong with that. But when you get down to the mundane, everyday practice of machine learning, and to implementing these models in clinical decision support, we need a lot more than that. The FDA has come up with some guidelines for its Software as a Medical Device regulatory pathway.

Jody Ranck: [00:03:47] But the field itself is way ahead of the actual guidelines that are out there. So there's an awful lot of work that needs to be done, and it's interdisciplinary work: not just data scientists and machine learning engineers, or even clinicians. We need social science input as well, to get more granular approaches to governing AI and to address ethical issues as the technology develops. And we need to get out ahead of some of this, because there are patient safety issues and a whole plethora of problems that could arise if we don't.

Elena Iakovleva: [00:04:28] I see. And now can we talk about the components of a responsible AI framework and what is the meaning behind each of them?

Jody Ranck: [00:04:37] Yes. You see a shift in the language from ethics of AI to responsible AI, especially in the last year or two. A lot of companies came out and said they'd made some hires and created ethics groups, then continued to do the same thing, so people started talking about ethics-washing and the like. The shift to responsible AI is an attempt to bring greater depth and clarity to the more granular things we can do to ensure better, safer AI. Those include data governance, privacy, and security around the data used in the models; validation and reproducibility of the underlying model itself; ensuring there's no drift, so the model doesn't become less accurate as it's deployed in a bigger population; human-centered design, so models are more human-centric; and accountability and fairness.

Jody Ranck: [00:05:43] And here we also get into the issue of explainability: not just having a black box model where we don't understand how it works and how it comes to a decision, but having a way to explain how it works. There are lots of issues around that as well, which we get into more in the report; there are some controversies and differing opinions. And then of course, auditing for bias along multiple axes, whether it's gender, race, ethnicity, and so forth: is this model going to treat populations fairly? And linked to that, what's the impact on broader populations and health equity? That's the continuum of responsible AI, and in some ways you could view it as the beginning of a trust value chain: the things we have to do, and do well, to ensure that these are robust models that aren't going to do harm.
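
To make the bias-auditing step concrete, here is a minimal sketch of a subgroup audit in Python. The data, column names, and metrics are hypothetical illustrations, not taken from the report; real audits use dedicated tooling such as Fairlearn or Aequitas and examine many more metrics.

```python
# A minimal sketch of a subgroup bias audit. All data and column names
# here are hypothetical; a production audit would be far more thorough.
import pandas as pd

def subgroup_audit(df: pd.DataFrame, group_col: str,
                   label_col: str, pred_col: str) -> pd.DataFrame:
    """Compare simple fairness metrics across subgroups of a population."""
    rows = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub[label_col] == 1]
        rows.append({
            group_col: group,
            "n": len(sub),
            # Selection rate: how often the model flags this group.
            "selection_rate": sub[pred_col].mean(),
            # True positive rate: of those who truly need care,
            # how many does the model catch?
            "tpr": positives[pred_col].mean() if len(positives) else float("nan"),
        })
    return pd.DataFrame(rows)

# Hypothetical example: a referral model scored on two demographic groups.
data = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 6,
    "needs_care": [1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0],
    "referred":  [1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0],
})
print(subgroup_audit(data, "group", "needs_care", "referred"))
```

A large gap in true positive rate between groups is exactly the kind of disparity the disease-management referral algorithms mentioned above exhibited.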

Elena Iakovleva: [00:06:44] Okay. And can you please explain why the need for continuous evaluation keeps growing?

Jody Ranck: [00:06:52] Continuous evaluation is needed because of how AI models are built: as they ingest more data, they learn from that data and they evolve. If you look at health data, and especially digital health, we're ingesting more and different types of data into the health care system, and model developers and data scientists want to use those different data types to build models. But that comes with risk as well, which we refer to as the dimensionality of data. Sometimes when you're ingesting different types of data to train a model, there can be a blind spot in that data where a particular form of bias exists. And models are often trained on very small populations; if you look at the FDA approvals to date, in most cases where population data is reported, the training populations are under a thousand patients.

Jody Ranck: [00:08:02] So let's say you then deploy that model in a population of 100,000 patients with different features. That little blind spot in the training data can grow exponentially, and the model just won't work; it will give you erroneous feedback. That issue of drift and the dimensionality of data is why we need continuous monitoring: AI and machine learning models are not static, they evolve, and we have to track how their performance evolves. If there are hidden blind spots or biases as you apply a model to a new population, we have to check for that. In the report we go into how data scientists are going about this, including some of the technological tools, and work from places like Brookings, the University of Chicago, and Berkeley that are investigating bias in AI models very closely.
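
One common drift check that illustrates the small-cohort-to-large-population problem is the Population Stability Index (PSI), which compares a feature's distribution in the training cohort against the deployment population. The sketch below is a hypothetical illustration, not a method from the report; production monitoring would track many features plus model performance over time.

```python
# A minimal sketch of drift detection with the Population Stability Index.
# Data, thresholds, and cohort sizes are hypothetical illustrations.
import numpy as np

def psi(train: np.ndarray, deployed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    # Shared bin edges spanning both samples.
    edges = np.histogram_bin_edges(np.concatenate([train, deployed]), bins=bins)
    eps = 1e-6  # avoid log(0) in empty bins
    p = np.histogram(train, bins=edges)[0] / len(train) + eps
    q = np.histogram(deployed, bins=edges)[0] / len(deployed) + eps
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
train_ages = rng.normal(55, 8, size=800)          # small training cohort
deployed_ages = rng.normal(48, 14, size=100_000)  # broader deployment population

score = psi(train_ages, deployed_ages)
# A common rule of thumb: PSI above 0.25 signals a major shift worth investigating.
print(f"PSI = {score:.3f}", "-> investigate drift" if score > 0.25 else "-> stable")
```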

Elena Iakovleva: [00:09:02] And what about the health care industry? What are they doing in this area?

Jody Ranck: [00:09:07] Last summer, MITRE and Mayo Clinic launched the Coalition for Health AI (CHAI), a coalition whose explicit focus is to foster more responsible AI: implementation of the frameworks, validation, and so forth that put a check on models and make sure the most robust models enter the market. We're also still seeing a lot from the government. NIST, for example, has produced several really important reports on risk mitigation and risk management for AI that are definitely relevant to health care.

Jody Ranck: [00:09:54] But one of the areas we really focus on in the report is that we need these intra-industry collaborations; that's the foundation of what needs to happen. We also talk about investment in the intangible economy, using the example of vaccines. When we rolled out vaccines, we didn't just invest in vaccine production facilities, R&D facilities, cold chains, and the like. We also worked on developing downstream advance market purchase agreements and all those intangibles that are not the hard infrastructure for producing vaccines; it's the social, people component. And there's economic work showing that over the last decade or two, innovation has declined because we often do not invest enough in this so-called intangible economy. I think machine learning and AI is one of those areas where the emphasis has been on investing in big, expensive labs to build very large models in a small number of places.

Jody Ranck: [00:11:08] So you're getting concentration; a very large percentage of all AI research is funded by Google. We need to democratize that. But the social infrastructure that evaluates all of these models also needs a lot more investment. It's not just about data scientists; we need transdisciplinary approaches. One of the things we talk about in the report is how we could take an intra-industry collaboration like CHAI and connect it to the medical liability insurance world. As models are evaluated, you could label them with something like a nutrition label, creating greater transparency about the data used, how bias was checked, whether results are replicable, explainability, all those components I talked about earlier. Then you have market mechanisms: models that go through that whole validation exercise with the highest results would get lower premiums on their liability insurance, and they could command a better price point in the market by investing in all those steps in between. I think that's the area where we haven't focused as much.
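
To give a sense of what such a "nutrition label" might look like as an artifact, here is a minimal sketch of a structured, machine-readable model summary. The fields are hypothetical, loosely inspired by the "model cards" idea from the research literature; they are not from any published CHAI specification or from the report.

```python
# A minimal sketch of a model "nutrition label" as a data structure.
# All field names and values are hypothetical illustrations.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelNutritionLabel:
    model_name: str
    intended_use: str
    training_population: str          # who the model was trained on
    validation_auc: float             # headline performance on held-out data
    bias_axes_audited: list[str] = field(default_factory=list)
    externally_validated: bool = False
    reproducible: bool = False        # can a third party reproduce the results?

label = ModelNutritionLabel(
    model_name="sepsis-risk-v2",
    intended_use="Early warning for inpatient sepsis",
    training_population="Single academic medical center, n=940",
    validation_auc=0.81,
    bias_axes_audited=["sex", "race", "age"],
    externally_validated=True,
    reproducible=True,
)

# Published as JSON alongside the model, this is the kind of transparent
# record an insurer could price liability premiums against.
print(json.dumps(asdict(label), indent=2))
```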

Jody Ranck: [00:12:37] But to make a safe, ethical or responsible market for health care AI and to have the trust of patients and doctors, those are the types of things that are really, really badly needed. That’s the big gap in the market at the moment.

Elena Iakovleva: [00:12:53] Finally, what is trust, and how will all of these measures contribute to trust and address the challenges new technologies have with it?

Jody Ranck: [00:13:02] Yeah, one of the challenges with trust is that it's been studied by everyone from anthropologists to philosophers, but it remains a somewhat nebulous concept in practice. In general, it refers to two people engaged in a transaction: if I'm getting a service from you, do I find you reliable in providing that service in an effective, fair way? Then there's the issue of trustworthiness: if things go wrong, say I buy a product and it doesn't work as well as planned, will I get stuck with the bill and a faulty product, or will the vendor compensate me for the loss? Those are the broad definitions and use cases of trust, so to speak, that have been out there.

Jody Ranck: [00:14:02] But when it comes to these new technologies, it's somewhat in flux. As we mentioned earlier, there's this whole notion of distributed trust: who do we believe, and who do we count on for health information? As we know, COVID brought home the whole issue of misinformation and disinformation, and there's a lot of that out there with AI as well. With AI, the problem is more the hype cycle: promising the world, that AI is going to change everything, down to the point that we won't even know how to test our kids in school.

Jody Ranck: [00:14:46] Fundamental questions around what intelligence itself is are not answered in that hype. A lot of data scientists are very concerned about the hype getting ahead of what the science can actually do, and that has the potential to create a lot of distrust, as we've seen with models that go out and actually harm people. So in some ways we can talk about trust in terms of distrust: the discontent that happens when things don't work, and what that means for health systems. We had the example of IBM Watson and its failure because it wasn't a reliable platform for oncologists. That's a good example of marketing getting ahead of the actual science, failing to create trust, and the product failed. And now there's the next generation of that.

Jody Ranck: [00:15:44] So I think the answer is: trust is a fairly nebulous concept. There are a lot of organizations out there that do trust surveys, and they tell us something, but each is just a snapshot, one frame in a motion picture. With this whole AI world, we're going to need closer social science attention to what actual users experience, and to how they articulate trust and understand the ethics at the level of everyday practice, to get insights and a clearer understanding of what trust means for AI in health care.

Elena Iakovleva: [00:16:28] Well, I think we covered a big chunk of information on AI and trust, probably as much as we could fit into a podcast format. Jody, I want to sincerely thank you for your time today and for your enormous effort in studying and publishing on AI and trust.

Jody Ranck: [00:16:50] Thank you for interviewing me. It’s been fun.

Elena Iakovleva: [00:16:52] All right, Chilmark friends and ChilCast listeners, we are coming to the end of this podcast. I want to remind you all that this report is still available for purchase; we are proud to offer deep expertise in AI and trust and are here to address any of your trust consulting needs. Stay tuned, our next ChilCast episode will be out soon. Have a great day.
