AI-driven ambient listening tools show promise in health care but require careful oversight to ensure accurate documentation and patient safety.
Jay Anders, M.D., Chief Medical Officer, Medicomp Systems
I was recently reviewing a family member’s clinical visit notes and quickly realized that things just didn’t add up. I found some glaring errors that didn’t make sense, and then I learned the practice used artificial intelligence-driven ambient listening tools to document clinical visits.
One takeaway from that experience was that AI-powered ambient listening – though very promising – still has a way to go in creating accurate, verifiable and reliable documentation. At a time when the adoption of these tools continues to grow across medical settings, the potential flaws should be concerning for all of us.
To be clear: I’m not against AI scribes or ambient listening tools – I simply think the technology needs to mature a bit more to live up to the hype and support the delivery of high-quality patient care. In the meantime, we need to exercise caution and implement some human and technological safeguards.
Ambient listening tools are well-positioned to help tackle physician burnout and reduce documentation burdens, allowing providers to spend more time face-to-face with patients. But when the technology adds the occasional symptom or diagnosis that does not align with the patient presentation, or switches a patient’s sex from male to female in the middle of the note, it’s critical that we call out the potential limitations of these technologies.
In the same way that busy and overwhelmed clinicians sometimes sign off on transcription without reviewing the notes in detail, it might be tempting to do the same with documentation created via ambient listening. However, to minimize risks to patient safety and to avoid downstream problems with care coordination, billing, and insurance, clinicians must take a more measured approach when embracing these tools.
The impact of bad data
Once bad data becomes part of a patient’s medical record, fixing the errors can be challenging, and the potential consequences can be far-reaching. For example, an incorrectly documented diagnosis can trigger incorrect coding for billing, improper follow-up care, and even incorrect assignment of risk under value-based care programs.
For patients, these errors may also be a source of frustration and anxiety. Imagine the alarm a patient might feel upon reading his clinical summary and finding himself (incorrectly) labeled with a terminal disease. Even if the patient knows it’s an error, he may be near-powerless to fix the mistake, especially if the note has already been shared with other providers.
According to a recently published survey by the American Medical Association, I am not the only physician concerned about the growing use of AI tools in health care. Notably, survey respondents emphasized the need for safeguards such as feedback loops and data privacy assurances, with 47% ranking increased oversight as the top regulatory action needed to increase their trust in adopting AI tools.
Human oversight and technological safeguards
Regardless of future regulatory action, the provider will always be the one ultimately responsible for the accuracy of a patient’s medical record. It’s thus essential that we keep a “human in the loop” to verify the accuracy of AI-generated clinical documentation.
In addition to human oversight, clinicians need backend technologies to ensure the accuracy and appropriateness of AI-generated clinical documentation. After ambient listening captures the clinician-patient interaction, the conversation needs to be transformed into high-quality structured data and validated against a vetted source of truth, which requires technology that can work behind the scenes to identify accurate diagnoses and billing codes. To be truly actionable and timesaving, that information also needs to be integrated seamlessly within the EHR, connecting related clinical information and actions to assist the doctor in treating the patient.
Set it but check it
It’s still early days with AI-assisted technologies, and like many potentially game-changing technologies, we need to exercise caution. Until the tools are perfected, we need real physician oversight and the integration of validation tools to ensure data is accurate and supports quality patient care, efficient workflows, and accurate billing. For now, when it comes to AI-based ambient listening tools, the best approach may be to set it – but check it.