
Barriers to using ChatGPT in healthcare
Physicians will only be able to trust artificial intelligence when it's transparent.
As ChatGPT dominates discussions about the potential of artificial intelligence (AI) to disrupt entire industries, an Australian doctor nervously recounted how the model supposedly “diagnosed his patient in seconds,” the Daily Mail reported.
The chatbot’s impressive success nevertheless raises questions about AI’s role in health care.
Can a ChatGPT-esque model be used to the benefit of physicians and patients?
In a sense, yes. ChatGPT’s two most pronounced breakthroughs in relation to health care will be in:
- Disrupting the way we access knowledge. ChatGPT will become the one-stop shop physicians use to efficiently find answers to questions they would otherwise need to search for on Google or other curated knowledge websites.
- Its phenomenal fluency and competent prose, which can help efficiently communicate any thought to any audience – whether patients, insurance companies, or colleagues.
 
Social media is exploding with tips from physicians on how to use ChatGPT. Some examples include sending prescription instructions to patients, writing instructions for tapering down a medication, constructing a letter to an insurance company requesting approval for a medication or procedure, and drafting the initial outline and abstract of a scientific paper. The list goes on, and we are only at the beginning of this historic pivot.
Where does it still fall short?
The real question is whether ChatGPT can think clinically about a patient. Can OpenAI’s model perform clinical reasoning in an evidence-based manner that can assist physicians in decision making?
That’s where it gets tricky.
One of the biggest hurdles ChatGPT faces – in health care and other sectors – is that it is built with a “black box” approach: it cannot show how it arrived at its output or which sources support it.
It’s not that ChatGPT, or black-box AI more broadly, was created to deliberately shroud its decisions in mystery. Rather, the opacity is a byproduct of how the software is developed. Many of the black-box methods behind health care-oriented AI models – the kind that power chatbots and clinical-intake tools – produce their output by comparing each specific case against the countless patient records in their databases. In effect, they base their algorithmic decisions on big data, which makes it impossible to reason through those decisions or trace them back to a specific medical source.
We have all become accustomed to a hit-or-miss AI that produces output almost magically. It gets some things right and others wrong, but never explains its reasoning or references back to its sources.
Building effective explainable AI
What will it take for physicians to trust and adopt an AI-based technology in their practices? Building explainable AI (XAI) starts with the data on which we train our models. Companies need to have transparency and explainability in mind early in their journeys so that they start with the appropriate data – data the intended users of the software understand and already rely on.
In the case of health care software designed to work side by side with providers, that means data from peer-reviewed, high-quality medical literature. Standardized care based on reliable evidence is the key to high-quality care. AI systems built on that principle depend not only on the quantity of data they use, but also on the models’ ability to understand the content of those sources and apply it intelligently, in real time, where it is needed.
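To make the idea of referenced, verifiable output concrete, here is a minimal sketch in Python. Everything in it – the excerpts, the citations, the keyword-overlap scoring, and the function names – is invented for illustration; a real system would draw on a clinical-grade literature corpus and far more sophisticated matching. The point is only the shape of the result: an answer paired with a citation the physician can check.

```python
# Toy illustration only: every answer carries a pointer to the source it was
# drawn from, so a clinician can verify it. The "knowledge base" and the
# crude keyword-overlap scoring are placeholders, not a real clinical system.
from dataclasses import dataclass


@dataclass
class SourcedAnswer:
    answer: str
    source: str  # the citation the physician can check


# Hypothetical excerpts standing in for peer-reviewed guideline content.
KNOWLEDGE_BASE = [
    {"text": "First-line treatment for uncomplicated hypertension includes thiazide diuretics.",
     "source": "Example Hypertension Guideline (2023), section 4.1"},
    {"text": "Metformin is the preferred initial pharmacologic agent for type 2 diabetes.",
     "source": "Example Diabetes Standard of Care (2023), chapter 9"},
]


def answer_with_reference(question: str) -> SourcedAnswer:
    """Return the best-matching excerpt together with its citation."""
    q_words = set(question.lower().replace("?", "").split())
    best = max(
        KNOWLEDGE_BASE,
        key=lambda entry: len(q_words & set(entry["text"].lower().rstrip(".").split())),
    )
    return SourcedAnswer(answer=best["text"], source=best["source"])


if __name__ == "__main__":
    result = answer_with_reference("What is the initial drug for type 2 diabetes?")
    print(result.answer)
    print("Reference:", result.source)
```

However the matching is done under the hood, it is this pattern – answer plus reference – that lets a physician audit the tool rather than take it on faith.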
There are several ways in which XAI could benefit physicians when it comes to clinical reasoning tools:
- Improved trust and confidence: By providing physicians with insights that are fully explainable and referenced to the same trusted sources they already use, XAI can help build trust and confidence among physicians. That makes physicians more likely to use these tools and helps ensure they are used effectively.
- Reduced bias and standardized care: Every physician is limited by their own biases and blind spots. By giving physicians a trusted tool that consults all the relevant medical literature, XAI covers those blind spots and ensures a basic standardization of care for all patients.
- Improved efficiency: By providing a clearer understanding of how it arrives at its output and earning physicians’ trust, XAI expedites patient visits, leaving more time for building treatment plans and relieving bottlenecks.
 
When built in a transparent and explainable way, AI offers tremendous potential for improving the clinical-reasoning process, ensuring high-quality care while also making physicians’ lives easier. The black-box approach hinders the ability to develop models that win physicians’ trust, and for good reason. It’s time to steer the AI ship in the explainable direction. Until then, ChatGPT-like tools will be used mostly for the more administrative part of health care.
Michal Tzuchman-Katz, MD, is a cofounder and chief medical officer at Kahun.