
The problem with clinical AI
Clinical AI stalls when it’s just transcription; value-based care needs trusted, workflow-embedded copilots with longitudinal context.
Clinical AI tools are everywhere right now: listening in on visits, drafting notes, surfacing suggestions, and promising to give clinicians time back. On paper, the progress is impressive. In practice, many of these tools stall out.
They’re deployed, then quietly sidelined. Clinicians dismiss recommendations. Administrators see modest efficiency gains but little movement in outcomes. Documentation improves somewhat, yet performance under value-based contracts remains stubbornly flat.
This isn’t because AI hasn’t matured. It’s because most AI tools stop short of what value-based care actually demands.
The real constraint isn’t transcription quality. It’s whether the AI can serve as a true clinical copilot, deeply embedded in the workflow, with enough trust, context, and relevance to change what happens in the visit, where value-based care is ultimately won or lost.
Transcription is not clinical AI
Ambient documentation is a meaningful step forward. Capturing what was said in the room matters. But transcripts alone do not constitute understanding.
Value-based care depends on comprehensive, longitudinal context: what is known about a patient across years of visits, conditions, medications, and labs. A clean note that covers the last fifteen minutes well but lacks that historical context will miss diagnoses and risk, leaving care gaps untouched.
This is where many of today’s AI tools fall short. They optimize for producing better notes rather than better decisions. They treat the visit as a standalone event instead of one moment in a long clinical story.
A true health care copilot has to bridge that gap. It must connect the in-visit conversation to the full patient record, across EHR data, historical diagnoses, prior gaps, and longitudinal patterns. That way, what it surfaces in the moment is both accurate and actionable.
Without the full contextual view, efficiency gains plateau quickly, and the deeper promises of value-based care remain unmet.
Value-based care exposes the limits of shallow AI
The shift to value-based care has made the stakes unmistakably clear for health care organizations. Risk adjustment accuracy determines whether organizations are paid for the burden of illness of the populations they serve. Quality performance affects contracts, reputation, and long-term viability. Both depend on seeing the whole patient.
Incomplete documentation creates revenue leakage, audit exposure, and distorted performance signals. Missed conditions and unclosed care gaps compound across panels and reporting periods.
AI tools can help, but only if they operate beyond surface-level automation. Flagging likely diagnoses or care gaps requires longitudinal reasoning: corroborating evidence across time, reconciling inconsistencies, and surfacing insights clinicians can trust.
When AI tools lack that depth, they risk creating false confidence. Organizations believe they’re “covered” while critical gaps persist underneath.
Clinician trust is the real bottleneck
Clinicians are not skeptical of AI because they resist technology. They’re skeptical because they’ve been trained to be. An AI suggestion without clear evidence is not an asset. It’s a liability.
A suspected diagnosis, a flagged care gap, or a closed measure only matters if the clinician understands why it’s being surfaced and can verify it quickly. Without transparent supporting evidence, even highly accurate systems are ignored. Acceptance rates drop. Alert fatigue rises. The AI becomes background noise.
Trust also depends on workflow. A recommendation that appears at the wrong moment or on the wrong screen, or that requires extra clicks, is effectively invisible. Sophisticated models lose their value if they don’t align with how clinicians actually work.
The more pressure clinicians face under value-based care, the less tolerance they have for tools that slow them down or ask them to double-check AI logic. To be successful, AI copilots must reduce cognitive load.
Burnout is a systems problem
Physician burnout is often framed as a matter of workload. In reality, it’s a matter of uncertainty under pressure.
Just look at what clinicians are being asked to do. Every day, they’re expected to deliver high-quality, fully documented, audit-ready care while navigating fragmented data and evolving performance requirements. Risk adjustment and quality workflows amplify that strain, especially when clinicians are left to reconcile gaps after the visit.
AI can help alleviate that tension by bringing clarity forward into the encounter itself. When clinicians walk into a visit already oriented to the patient’s full context, clinical decisions and documentation become more grounded. That shift can meaningfully reduce burnout.
When the system works, patient benefit follows
For all our talk of systemic issues and administrative burden, it’s easy to lose sight of the fact that patients are major beneficiaries of effective AI copilots. They benefit when missed conditions are caught earlier, when care gaps are addressed at the right moment, and when clinicians can focus on the person in front of them instead of the chart behind them.
Those outcomes are not separate from organizational performance or clinician experience. They are the downstream result of systems that surface the right information, at the right time, with the right level of confidence. When copilots get that right, better patient outcomes follow naturally.
A higher bar for clinical AI
Not all AI copilots are created equal. The difference isn’t who uses the most advanced model or captures the cleanest transcript. It’s whether the copilot meaningfully connects longitudinal patient intelligence to actionable insights that clinicians can trust before, during, and after the visit, and ultimately drives improved patient outcomes.
That is the threshold at which efficiency turns into performance, performance into sustainability, and sustainability into better care.
The technology to reach that standard already exists. Health care organizations would be wise to deploy it.
Shay Perera is the co-founder and Chief Technology Officer of Navina, an artificial intelligence (AI) company transforming how clinicians access and use health data.