The Way Forward for AI in a Challenging Landscape
Key Takeaways
Many healthcare executives say that trust is a key variable to address when it comes to adopting AI—but the meaning of trust is unclear. Conceptual clarity that distinguishes trust, trustworthiness, reliability, and confidence in a technology or company is essential to developing a framework that enables AI developers to build trust with end users.
Trust in the health system itself is problematic, and emerging technologies such as AI inherit that lack of trust, often amplified by uncertainty and media attention to AI's failings. Part of the problem AI developers face is the trust deficit in health systems and in science itself. Healthcare institutions that take the trust deficit seriously and are prudent in how they deploy new technologies such as AI stand to gain competitive advantage if the technology adoption process is transparent.
Skeptics of AI companies are increasingly pointing out “fair-washing,” “audit-washing,” and other forms of virtue signaling without substance. A growing chorus in the world of healthcare is critical of ethics efforts that do not go far enough in addressing the shortcomings of models and data practices. Responsible AI in healthcare will require thorough review and development of acceptable practices for auditing.
Introduction
Ian Corbin and Joe Waters recently observed an important shift in consumer perceptions of healthcare in the US. In 1966, more than three-quarters of Americans reported having high confidence in our medical leaders. By 2018, that number had fallen to 34 percent.
One of the important shifts over the decades has been the perception of with whom the consumer interacts: in the 1960s, people viewed their interactions as part of a doctor-patient relationship, whereas in recent decades they describe their primary interaction as being with a system. In a system, care is perceived as far more depersonalized, with physicians under greater pressure to respond to financial drivers.
When it comes to trust in AI, we frequently focus on the qualities of the model and the data used to build it. Does the math work and optimize care for all cohorts? Do we understand how it works? These questions are important, but trust must also be viewed in light of the experience that physicians, patients, and administrators have of the overall system. We must also consider the broader erosion of trust in science currently plaguing the country, most visibly around the immunization campaign for SARS-CoV-2.
In the coming months, we will be releasing a report on AI and Trust that seeks to build a framework for understanding the concept of trust and how both developers of AI tools and their users, from physicians to health systems, can engage with AI. The report will provide guidance on what constitutes responsible AI: AI that is worthy of trust, produces fair results, and builds confidence that a system will respond to user needs when results lead to detrimental outcomes. While we cannot solve the epidemic of mistrust in American life and healthcare, we can develop practices around algorithmic fairness and justice that do more to address health equity, improve outcomes, and reduce costs for both the system and consumers.
Defining Trust
One of the problems with trust is the nebulous nature of the concept itself.
In general, trust is a relational concept in which one party trusts another to follow through on a task. It also implies that the trusted party has the willingness and competence to complete the task at hand in a satisfactory way. Mackenzie Graham distinguishes trust from reliance: reliance implies predictability, in that we can rely on the technology or agent to do what we ask of it. Trust asks more of the agent, namely a commitment to, or an ability to care about, the interests of the other party. Someone is trustworthy when placing trust in them is well-grounded, according to Graham.
Trustworthiness depends more heavily on the features of the object (i.e., the technology), whereas trust is more focused on the person. Sociologists whose work has focused on risk, trust, and uncertainty have also used the concept of confidence to go beyond trust. When people interact with social and technological systems, there are generalized norms that lead to predictable outcomes. We build confidence by engaging with these systems and finding that they respond according to the norms we have come to expect.
Confidence and trust differ in that trust is more concerned with the internal motives and commitments of the other party, while confidence rests on a system performing within the external norms and expectations one has of it. These concepts—and the norms and expectations behind them—operate at the level of the patient-provider relationship, as well as one’s interactions with a health system.
And therein lies the challenge. As we engage with these concepts in the context of contemporary AI/ML discussions, trust in an algorithm for a medical decision can also be nested within one’s experiences with the overall health system—or science and medicine itself.
Building a Framework for Trust
The anxieties that AI is creating across our economies are intimately linked to the uncertainties that accompany a predictive technology discussed in both dystopian and utopian language, from film to fiction. This often leads us astray from the actual contexts where AI is deployed in healthcare. Vendors invoke trust and symbols extracted from the multitude of principles-based AI guidelines that proliferate almost monthly from governments, the Vatican, the WHO, and others to provide evidence of the trustworthiness of their technology. But these are typically broad guardrails for conduct and shed little light on the practices of AI.
Formalized codes from the mountaintops of policy organizations, such as those mentioned previously, rarely provide the practical evidence that allows an individual to put trust in an institution or practitioner and accept the judgment of an algorithm. What recourse does one have to question how the algorithm worked and whether it is appropriate for their context? What recourse do they have when things go wrong? There is a politics of trust involved as well.
Ethics bodies have a mixed record of addressing these issues. We went through similar debates in the 1990s with genomics, with the secular priesthood of bioethicists weighing in on what was ethical or not. Today we face perhaps greater wariness of Big Tech’s use of ethics as a marketing tool, greeted with a tremendous amount of cynicism and cries of “ethics-washing,” “fair-washing,” and other accusations that PR and marketing are standing in place of real action to address the criticisms of these industrial applications.
What the healthcare industry can do is work to build frameworks and standards for what constitutes adequate auditing of algorithms for bias, data governance, explainable AI, and fairness (how equitable the outcomes are for different cohorts). Fixing America’s trust in the healthcare system may be too much to bite off; however, building institutions that can evaluate and assess the components that make up what is increasingly called “responsible AI” would be a start.
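To make the idea of a fairness audit concrete, below is a minimal, illustrative Python sketch—our own hypothetical example, not any vendor’s or standards body’s method—that compares a model’s selection rate and true positive rate across patient cohorts and flags large disparities using the common “four-fifths” heuristic. A real audit would go much further, covering data lineage, explainability, and clinical validation.

```python
# Hypothetical per-cohort fairness check: compare selection rates and
# true positive rates across cohorts and flag large disparities.
from collections import defaultdict

def cohort_fairness_audit(records, disparity_threshold=0.8):
    """records: iterable of (cohort, y_true, y_pred) with binary labels."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "pos": 0, "tp": 0})
    for cohort, y_true, y_pred in records:
        s = stats[cohort]
        s["n"] += 1
        s["selected"] += y_pred          # model recommended the intervention
        s["pos"] += y_true               # patient actually needed it
        s["tp"] += 1 if (y_true and y_pred) else 0
    report = {}
    for cohort, s in stats.items():
        report[cohort] = {
            "selection_rate": s["selected"] / s["n"],
            "true_positive_rate": s["tp"] / s["pos"] if s["pos"] else None,
        }
    # Flag cohorts whose selection rate falls below the threshold
    # (four-fifths heuristic) relative to the best-served cohort.
    best = max(r["selection_rate"] for r in report.values())
    for r in report.values():
        r["flagged"] = best > 0 and r["selection_rate"] / best < disparity_threshold
    return report

# Toy data: (cohort, actual outcome, model recommendation)
toy = [("A", 1, 1), ("A", 0, 1), ("A", 1, 0),
       ("B", 1, 0), ("B", 0, 0), ("B", 1, 1)]
print(cohort_fairness_audit(toy))
```

Even this toy example surfaces the kind of question an auditing body would need to standardize: which metrics matter, what disparity is tolerable, and what happens when a cohort is flagged.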
Conclusion
Trust will never come solely from industry branding itself ethical or trustworthy. Public engagement with patient groups, citizens, and regulators is crucial to learning what concerns they have and to co-creating the AI of the future. This process will take time, but it has the potential to produce better AI and better data governance processes, and to begin building trust in a system that requires trust to function in our post-pandemic world.
In the coming months, we look forward to speaking with a diverse group of informants about this challenge as we build our own thinking on this topic. If you would like to share your thoughts on trust, ethics, and responsible AI, please feel free to reach out to me directly at jody (at) chilmarkresearch.com.