
Practical application of AI in healthcare delivery: Are we ready?


When it comes to certain AI-generated solutions, science fiction has become science now.

© wifesun - stock.adobe.com

Imagine a day in the not-too-distant future when one of your patients arrives for a follow-up examination after reporting lower back pain during a routine physical and having had a set of X-rays taken a few days before. The images were processed and reviewed by artificial intelligence (AI), which noted some well-hidden but unusual features in the patient’s upper pelvis; then a recording of the patient’s initial visit with you was analyzed by AI, and a pattern of certain words used by the patient to describe their symptoms and family medical history raised red flags.

Suspecting possible bone cancer, you refer the patient to an oncologist at an out-of-network clinic that is exploring AI-driven precision medicine to formulate individually tailored treatment modalities. The patient’s insurance company denies the referral; however, with the assistance of an AI chatbot, you prepare and submit an appeal that convinces the insurer to cover the patient’s visit to the specialist.

Now imagine that the scenario described above happened this week — and you would not be too far off the mark. When it comes to certain AI-generated solutions, science fiction has become science now.

Artificial intelligence can seem like a too-good-to-be-true solution for an industry facing chronic labor shortages, an increasingly complex reimbursement system, and countless other challenges. At the same time, many healthcare professionals view healthcare AI with a strong sense of skepticism, pointing to a number of known and unknown data privacy, diagnostic, treatment, and other risks that could far outweigh its benefits. They also note that, as the treating physician, you could bear malpractice liability for errors made with AI assistance.

As with most things, the answer is somewhere in between.

How AI can help physician practices

The goals for healthcare AI solutions are nothing new: improved, cost-effective workflows, more accurate diagnostics, treatments customized to the individual, improved outcomes, and a growing knowledge base that drives future innovation. Among other specific tasks, AI can (or soon will be able to):

  • Streamline scheduling of patient visits, procedures, and follow-up care.
  • Document and analyze patient-provider interactions.
  • Conduct detailed analyses of health records to identify potential patterns and issues.
  • Support effective clinical decision-making.
  • Link patients, emergency-room clinicians, and urgent-care staff — particularly in rural and underserved communities — with specialist physicians anywhere in the world.

At the macro level, AI solutions also can conduct the kinds of high-volume data analyses necessary to support personalized and precision medicine for treating cancers, diabetes, and other chronic and life-threatening conditions; identify effective cost-reduction and revenue-maximization practices that also achieve positive outcomes; and help public-health agencies take note of and respond quickly to emerging medical issues and disease outbreaks.

Healthcare AI: not quite ready for the driver’s seat

Despite their potential, current AI technologies raise serious concerns, especially in healthcare delivery. Critics point to a number of shortcomings in today's AI tools that could lead to both individual and systemic errors, with consequences ranging from annoying clerical mistakes to, at worst, mass events that injure or even kill patients. Even so, many technology companies are betting heavily on AI in this sector: Hewlett Packard Enterprise CEO Antonio Neri recently told CNBC that the company is betting on AI, particularly in the healthcare segment, which has remained stable.

Among other challenges, existing AI tools struggle to recognize and decipher the speech of patients with accents. Large-language-model and generative AI tools, such as ChatGPT, do more than get facts wrong; sometimes they experience what are termed "hallucinations" and invent data out of whole cloth (responses that others call, less generously, "lying").

Another concern is that AI is only as bias-free as its developers and users. Even seemingly minor differences in the words used to pose a question or query can result in different answers. And given the extraordinary amount of data analyzed and the lightning-fast speed of machine learning, a certain percentage of AI researchers acknowledge that they don’t always know what is going on inside the “black box.”

Finally, effective AI algorithms depend on creating vast databases comprising huge amounts of personal medical data. Such databases make a very tempting target for cybercriminals and hackers who want to steal and monetize this information, not to mention bad actors who may want to perpetuate healthcare-related scams against individuals.

Where do we go from here?

In other industries — the automotive industry, for example — developers have taken note of the risks posed by AI and have taken steps to mitigate them. Tesla, for example, does not yet allow full vehicle autonomy in its cars. Even cars with full self-driving capabilities require active driver supervision.

Such direct supervision can also reduce the risks associated with healthcare AI and allow physicians to take advantage of the technology safely. In addition to requiring physician sign-off on any diagnostic or treatment decisions, other solutions include:

  • Improving patient and provider education, which can support more informed medical decisions.
  • Implementing quality oversight panels to monitor the use of AI in healthcare.
  • Expanding the size, scale, diversity, and reliability of the healthcare data on which AI tools are trained.

As noted above, AI is science now. How well it performs will depend on human intelligence and on oversight by physicians and other clinicians. Successful practices will use AI-driven tools to increase the time physicians spend with their patients and to provide better patient care.

Anjali Dooley, MBA, Esq., is special counsel in Jones Walker’s Corporate Practice Group and a member of the Healthcare Industry Team. She provides broad-ranging legal and regulatory counsel to healthcare, technology, venture capital, and private equity clients.

© 2024 MJH Life Sciences

All rights reserved.