Revisiting the Four Fallacies of AI: Debunking AI Misconceptions Can Help Us ‘Do’ AI Better in Healthcare

Aug 14, 2023

Key Takeaways

The much-noted hype surrounding AI can obscure the realities of why AI works, or does not, and what types of problems are most likely to be addressed successfully with AI tools. Confronting some of the myths embedded in “AI talk” is essential to thinking critically about both the limitations and the powers of our growing number of AI tools. Healthcare executives should familiarize themselves with some of the critical readings on AI to optimize their investments.

The word “intelligence” itself sometimes gets in the way of understanding how an AI model works. Those hyping AI often rely on superficial understandings of human intelligence, theories of mind, and the like, which fail to distinguish what an AI model can actually do from what the field may never be able to accomplish. Even common sense is very difficult for a computer to re-create or model.

We will need to be cautious about what AI can accomplish in terms of understanding language. Conversational agents used beyond narrow, deterministic tasks may hit limits wherever a human’s grasp of context and linguistic complexity is required. Some clinical encounters may be challenging for AI agents; we need more research into the settings where conversational agents are deployed and their sensitivity/specificity in each of them.

Introduction

Listening to Sam Altman and a number of other AI company CEOs speak about the future of AI can be bewildering at times, given the long history of philosophers, cognitive scientists, computer scientists, and social scientists who have shown how much of the hype around AI relies on anthropomorphic and flawed understandings of intelligence. If we are to deploy AI effectively for the many challenges healthcare faces, we need to address not only issues ranging from data governance and quality to bias, but also how we talk about AI. Falling for flawed understandings of intelligence, and of the tools in place, risks undermining trust and wasting investment dollars.

The Four Fallacies of AI

One of the most influential thinkers in the AI space is Melanie Mitchell, whose 2021 article “Why AI is Harder Than We Think” identifies four major fallacies in how we talk about AI. In it, Mitchell critically examines predictions made by AI scientists over the years and why they have fallen short. The insights boil down to four primary problems.

“The year 2020 was supposed to herald the arrival of self-driving cars. Five years earlier, a headline in The Guardian predicted that “From 2020 you will become a permanent backseat driver”. In 2016 Business Insider assured us that “10 million self-driving cars will be on the road by 2020”. Tesla Motors CEO Elon Musk promised in 2019 that “A year from now, we’ll have over a million cars with full self-driving, software… everything”. And 2020 was the target announced by several automobile companies to bring self-driving cars to market.”

So, why such a poor track record in predicting AI’s future(s)? Let’s begin with some myth busting.

Myth 1: Narrow intelligence is on a continuum with general intelligence

This myth or fallacy goes back to the issue of common sense. Our common sense rests on a great deal of lived experience and innate knowledge about causality and context, a challenge that is difficult to solve even with massive amounts of data. Common sense is much more complex than many artificial general intelligence (AGI) proponents think. It is also a component of medical decision-making, part of the tacit knowledge that clinicians use. Evidence-based medicine is often not as clear cut as people think: responses to a therapeutic intervention can assume the shape of a bell curve, and without access to lots of granular clinical data (access that is getting easier thanks to some AI applications), physicians will rely on their years of experience and judgment.

Myth 2: Easy things are easy, hard things are hard

Many things humans do with little thought are incredibly difficult for machines. Look at how long it has taken to get robots to convincingly replicate human movements. They are certainly getting better, and surgical robots have come a long way, but they are focused on very discrete tasks. Programming a machine to do what we do unconsciously remains nearly impossible; this aspect of human intelligence does not reduce to something we can simply program.

Myth 3: The lure of wishful mnemonics

We often use terms associated with human intelligence when talking about machines, but they may not be analogous. The “learning” a human does is very different from the “deep learning” done computationally. Consider the recent hype around generative AI and the tendency to use terms like “hallucinations” (i.e., errors, misinformation, etc.) and “dementia” (when a model drifts and ‘forgets’ its original training data): these phenomena are ontologically different from human experience, and the labels are unhelpful for understanding how the machines actually work. Anthropomorphizing AI is increasingly viewed as unethical.

Myth 4: Intelligence is all in the brain

This is perhaps one of the most important fallacies. Assuming we just need to scale up the models machines use computationally in order to match the brain is wrong. Human cognition is linked to many attributes, including emotions, theories of mind, and notions of the self. Much of our knowledge also comes from beyond the brain, extending into the body and the social world that provide the context for learning, forms of embodiment, and so on.

Another important thinker in the vein of Mitchell is Erik Larson, whose “The Myth of Artificial Intelligence” offers an even wider-ranging critique of the notion that AI can match human intelligence. He takes aim at Ray Kurzweil, Nick Bostrom, and Elon Musk, and his critique is relevant to the narrative Sam Altman has been selling. The main critique begins with the idea of inference. After working through the differences between inductive and deductive inference, Larson shows that abductive inference, or inference to the best explanation, is nowhere close to being achieved computationally, yet it is critical to matching human-level intelligence. We generate hypotheses to explain facts, and the range of potential hypotheses can be nearly infinite.

The second part of Larson’s critique concerns the nature of language. Comprehension of language requires both semantics and syntax. Linguistic acts, the performative dimensions of language that humans understand readily, are not easily rendered into computational form; the knowledge humans draw on to comprehend language resists formalization. Large language models (LLMs) may use billions of parameters to predict the most plausible next piece of text, but understanding that text and its context is another thing entirely.
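To make the prediction-versus-understanding point concrete, here is a minimal sketch of what a causal language model actually computes at each step. It assumes the open-source Hugging Face transformers library and the small public GPT-2 checkpoint; the prompt and variable names are illustrative and are not drawn from Larson or Mitchell.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# public GPT-2 checkpoint; the prompt is illustrative only. All the model does
# here is score candidate next tokens -- nothing in this code represents the
# clinical meaning of the sentence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The patient presents with chest pain and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Scores for the single next token, ranked from most to least plausible.
next_token_scores = logits[0, -1]
top = torch.topk(next_token_scores, k=5)
for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  score={score.item():.2f}")
```

Each forward pass yields nothing more than a ranking over vocabulary items; the apparent fluency of chatbots comes from chaining these predictions, which is why the distinction between generating plausible text and comprehending it matters.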

Conclusion: Healthcare and AI’s Myths

I have been speaking and writing about some of the insights on AI that Daron Acemoglu and Simon Johnson share in their recent book “Power and Progress”. One of the key points that rings true to me about healthcare is the question of where to deploy automation. They note that the distinction between “so-so automation” and good automation boils down to a rather simple dictum: “so-so automation” deploys mediocre or poor AI tools for tasks that humans are good at, whereas successful automation applies robust AI to tasks where humans are weak. The ethos of “let’s automate everything” and build a digital workforce at scale may get ahead of what AI can actually deliver in many healthcare contexts.

There is a reason so few AI algorithms have been replicated and why we see so many examples of AI snake oil in the marketplace. One reason is the tendency to jump on the bandwagon of the latest fad and append “AI” to the product. Other tools fail because of insufficient or bad data inputs. The failures that are often harder to sort out, however, are linked to the ways data scientists and programmers misunderstand human intelligence.

I have sat through numerous discussions over the years with ML experts who have drunk the Ray Kurzweil singularity Kool-Aid and lack the critical thinking needed to understand which problems ML tools can address and which they are not well suited to solve. AI is not going to solve all of our problems, nor is it likely to pose an existential risk. But people who develop tools with flawed understandings of the problems they want to solve, and who push back against safety rails and the ethical development of technology, will likely harm (and already do harm) some members of society. In healthcare, we need a healthy debate about the fallacies and myths listed above if we are to continue to innovate with AI/ML.
