
Physicians Practice, September 2024, Volume 2, Issue 9
Artificial intelligence and medical professional liability: 3 questions for decision makers
While the developers of AI-based applications charge ahead, clinicians and healthcare institutions must temper their optimism with caution.
Rapid advancements in artificial intelligence—sometimes known as augmented intelligence, to reflect that AI-powered tools supplement human intelligence, but do not displace it—have inspired many to feel cautiously optimistic about AI’s potential to assist healthcare professionals.
Administrative lifts first, diagnostic help to follow
Some healthcare institutions have been quick to implement tools that ease administrative burdens, freeing clinicians’ time and attention for patient care.
In the more daunting arena of patient-facing care, healthcare systems are building on the success of clinical decision support in areas like diagnostic imaging.
Determining responsibility—and liability
While the developers of AI-based applications charge ahead, clinicians and healthcare institutions must temper their optimism with caution—and with questions regarding who will be responsible when a patient is harmed. How can medical professionals and organizations take steps to protect patient safety? How can they shield themselves from responsibility for aspects of healthcare technology that belong with developers, not doctors?
It can take years for answers to wend their way through the courts. Yet clinicians and healthcare organizations have to make decisions now, while the legal landscape is still taking shape.
Asking three key questions can help decision makers guard against risk and liability.
Question 1: Will the technology application be granted any power to take autonomous action, or will the application report a pattern or concern to a human who takes an action? If the application can take action, what are the stakes?
Self-driving cars in San Francisco have driven into construction sites, obstructed first responders, and otherwise caused headaches, if not hazards.
This experience demonstrates how AI lacks what is commonly called common sense. Therefore, when we say “artificial intelligence,” for now, we mostly mean augmented intelligence: pattern-recognition tools that support human judgment rather than replace it.
Yet just because a human remains in the loop does not guarantee that the human will catch the machine’s mistakes.
Clinicians must have sufficient bandwidth to maintain vigilance, and workflow design must support and encourage that vigilance.
Question 2: At some point, it could be considered negligence not to use certain new technologies. For the application under consideration, how does the current standard of care accommodate human intelligence vs. human intelligence augmented by technology?
Once upon a time, arthroscopic surgery was new—yet revolutionary medical techniques and their benefits can quickly become familiar, even to medical laypeople.
Already, the idea that a computer might be the first to review a medical image or scan is familiar to many. Radiology groups, hand surgeons, and others are using AI to read films and compile reports—both of which are then scrutinized and edited by the clinician. Many medical professionals and patients welcome such assistance from technology, which enhances, but does not supplant, human expertise.
Over time, decision makers may face expectations that AI’s benefits become incorporated into the standard of care. At some point, the risks of not adopting a new tool could exceed the risks from the tool, so that declining to use it could itself be considered negligent.
Question 3: At first, it may be plaintiffs who have a harder time making their case in the courts, but that will change. How can clinicians and organizations make choices now to promote patient safety—while documenting those choices for the future?
Legal precedent has limited usefulness in predicting how AI-related medical malpractice litigation will fare in the courts, because courts have shown little consistency in how they treat harms involving novel technologies.
AI models have statistical patterns at their heart, and to prevail in court, plaintiffs must show that relevant patterns were “defective” in ways that made their injury foreseeable. With representation of these patterns involving billions of variables, the plaintiff’s burden of proof is, for now, a heavy one.
Meanwhile, assessing the liability risk of any specific AI implementation involves weighing factors including the AI’s accuracy, the opportunities to catch its errors, the severity of potential harms, and the likelihood that injured patients could be indemnified.
Healthcare organizations can begin their liability risk assessments with the following considerations:
- Evaluate risks individually per AI tool. Resist the temptation to lump applications together.
- Beware of distribution shift. AI-powered applications grow into the shape of the data they’re trained on. Therefore, organizations must evaluate the fit between application and patient population—and continue to reassess that fit over time.
- Remember that informed consent is a process. At teaching institutions or with AI applications, the question of delegating consent conversations can arise, but recent cases demonstrate the risks of attempting to delegate the informed consent conversation. At this juncture, delegating the important role of informed consent to any form of augmented intelligence may be ill-advised.
- Anticipate the need to present evidence in future AI-related litigation. Adjust documentation habits accordingly.
Richard F. Cahill, JD, is Vice President and Associate General Counsel of The Doctors Company, part of TDC Group.