Blog | Articles | January 16, 2026

AI: The game changer – Let the user beware

Author: Neil Baum, MD
Fact checked by: Keith A. Reynolds

AI in health care poses risks, including misinformation and bias. Physicians must verify AI responses to ensure patient safety and accurate care.

I have previously written about the perils of AI hallucinations in the health care setting. I’d like to share an example that illustrates the importance of verifying ChatGPT’s responses.

A friend of mine, the customer service expert and New York Times bestselling author Shep Hyken, attended a meeting in Malaysia where one of the speakers asked ChatGPT the following question: “I just bought a pair of shoes from Amazon, and they messed up the order. The right-foot shoe was for the left foot, and the left-foot shoe was for my right foot. What should I do about this mistake?”

ChatGPT advised him to share the problem with Amazon so the company could refund the purchase price or exchange the shoes.

Here is ChatGPT’s response:

“That’s definitely a strange mix-up – and something Amazon should fix quickly. Here’s what to do:”

  1. Start the Return/Replacement Process: Go to Your Orders on Amazon. Find the shoe order and click “Return or Replace Items.” Choose the reason: “Wrong item was sent” or “Item defective or not as described.” Amazon will usually offer a free return label and a replacement option.
  2. Describe the Issue Clearly: In the notes, explain exactly what happened – something like: “The right shoe is shaped for the left foot and vice versa. The pair I received cannot be worn properly.” That level of clarity helps Amazon’s quality control team flag the issue for the seller or manufacturer.
  3. Choose the Replacement (if you still want the shoes): If you like the shoes and they’re still available, request a replacement instead of a refund. Amazon will often ship it out before your return even arrives.

Hyken took this ridiculous response a step further and wrote ChatGPT: “Maybe there’s a better response. It’s not really a mistake. … Just put the shoes on the opposite feet and they will work just fine. Thoughts?”

As expected, ChatGPT had an immediate, automatic reply: “Ha! That’s a funny way to look at it, but unfortunately, that’s not quite how shoes work. Even if you can technically put them on opposite feet, they’ll be uncomfortable and could mess up your posture or cause pain after a while – especially if you wear them for long periods.”

AI is powerful, but just like doctors, it can make mistakes. This is why some doctors fear using AI when communicating with patients. If you use AI tools like ChatGPT, Claude, or others, you must have a way to review answers before hitting the send button. Apparently, common sense isn’t something that can be programmed. At least, not yet!

The risks associated with AI in medicine and health care include the potential for AI errors to cause patient harm, issues with data privacy and security, and the use of AI in ways that accentuate health inequalities by incorporating existing human biases and patterns of discrimination into algorithms.

An example of harm exacerbated by AI was an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, leading to the undertreatment of their hypoxia. Facial recognition systems have also been shown to be more likely to misclassify gender in darker-skinned subjects. In addition, populations already subject to discrimination are often under-represented in the data behind AI solutions, so biases appear in the results and these groups may be denied the full benefits of AI in health care.

Bottom Line: Physicians must verify ChatGPT's medical responses. Hallucinations can fabricate false information, i.e., misinformation; the model may lack up-to-date data, provide biased or incomplete answers, and fail to cite reliable sources. These errors could harm patients, making physician oversight crucial for patient safety and accurate care.

Last take-home message: before you trust, confirm you must.

Neil Baum, M.D., is a professor of clinical urology at Tulane University in New Orleans. Baum is the author of several books, including the best-selling book Marketing Your Medical Practice – Ethically, Effectively, and Economically, which has sold over 225,000 copies and has been translated into Spanish.
