Benefits will come when physicians and health experts begin integrating large language models into practices, hospitals and insurance companies.
There has been a lot of speculation about how the health care sector will adopt generative artificial intelligence (AI), from enhanced medical imaging to chatbots that auto-draft responses to some of the most common and time-intensive patient messages.
While those opportunities seem far off, others can be seized immediately to improve the patient and health care provider experience, as well as save costs across the sector.
The most powerful near-term use of large language model tools, such as ChatGPT or Bard, will be helping internal experts speed up their work in order to ensure greater accuracy for patients, drive value, and reduce costs for patients, providers and insurance companies.
To understand why I say that, let’s look deeper at what large language models (LLMs) are and how they function.
By now, many people have discovered that large language models on their own aren't reliable sources of knowledge. Their real power lies in their ability to reason about information and data. By giving a model access to internal data, such as hospital or insurance billing data or even readmission rates, it can reason over and make comparisons with that data. This helps the model go beyond generic responses or hallucinations and instead incorporate and analyze relevant information, as we’ll see below.
This approach also reduces the risk of the model producing unreliable or inaccurate outputs, as it can leverage additional information made available to it.
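To make this concrete, here is a minimal sketch of that grounding pattern in Python. The readmission figures are invented for illustration, and the call_llm() helper is a hypothetical stand-in for whatever model endpoint an organization actually uses.

```python
# A minimal sketch of the "grounding" pattern: instead of asking the model a
# question cold, we retrieve internal figures (here, illustrative readmission
# rates per unit) and place them in the prompt so the model reasons over real
# data rather than guessing.

# Illustrative internal data -- in practice this would come from a secure warehouse.
readmission_rates = {
    "Cardiology": 0.142,
    "Orthopedics": 0.087,
    "General Surgery": 0.103,
}

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for the organization's model endpoint."""
    return "(model response would appear here)"

def ask_with_context(question: str) -> str:
    # Serialize the internal data into the prompt so the model can compare units.
    context = "\n".join(f"- {unit}: {rate:.1%} 30-day readmission rate"
                        for unit, rate in readmission_rates.items())
    prompt = (
        "Using only the internal figures below, answer the analyst's question.\n"
        f"Internal figures:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

print(ask_with_context("Which unit's readmission rate most needs attention, and why?"))
```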
LLMs can help internal data analysts within a hospital find ways to optimize revenue and ensure compliance. With access to comprehensive billing information, LLMs can give internal analysts quick and easy answers to a vast range of billing-related questions. Analysts can ask the model, in natural language, about billing codes, insurance regulations, reimbursement guidelines or any other billing-related topic, and the model can retrieve relevant information and provide insights to support decision-making.
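As a rough illustration of how an analyst's question might be matched to internal reference material before it reaches the model, the sketch below uses simple keyword overlap over a few invented guideline snippets; a real deployment would index actual payer policies and coding guidance, typically with embeddings rather than keyword matching.

```python
# Rank internal guideline snippets by crude keyword overlap with the question,
# then assemble a prompt that grounds the model in the retrieved text.
# The snippets below are invented placeholders, not real payer policy.

guideline_snippets = [
    "Claims for outpatient imaging require prior authorization for plan codes P100-P199.",
    "Modifier -25 applies when a significant, separately identifiable E/M service is provided.",
    "Reimbursement for telehealth visits follows the same fee schedule as in-person visits.",
]

def retrieve(question, snippets, top_k=2):
    """Return the snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(snippets, key=lambda s: -len(q_words & set(s.lower().split())))
    return scored[:top_k]

question = "Do telehealth visits get reimbursed at the in-person rate?"
context = retrieve(question, guideline_snippets)
prompt = "Answer using only these excerpts:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
print(prompt)  # This prompt would then be sent to the model.
```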
These models can also make quick work of analyzing complex billing data. By inputting data or queries related to specific billing patterns, revenue trends or reimbursement rates, analysts can use the model to help identify patterns, anomalies or potential areas for improvement. This can aid in optimizing billing processes, identifying discrepancies and suggesting strategies for cost reduction, while supporting patient privacy, data security and compliance with the Health Insurance Portability and Accountability Act.
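The sketch below, using fabricated claims and Python's standard statistics module, shows the kind of simple billing-pattern check an analyst might run first, with the flagged rows then summarized for the model to explain. It is an illustration of the idea, not a production anomaly detector.

```python
# Flag claims whose billed amount sits far from the typical amount for the same
# code; the flagged rows (not the raw database) could then be placed in a prompt
# for the model to interpret. The claims below are fabricated for illustration.

from statistics import median

claims = [
    {"claim_id": "C-001", "code": "99213", "amount": 130.00},
    {"claim_id": "C-002", "code": "99213", "amount": 125.00},
    {"claim_id": "C-003", "code": "99213", "amount": 410.00},  # outlier
    {"claim_id": "C-004", "code": "99214", "amount": 190.00},
]

def flag_outliers(claims, threshold=2.0):
    """Flag claims billed at more than `threshold` times the median for their code."""
    by_code = {}
    for c in claims:
        by_code.setdefault(c["code"], []).append(c["amount"])
    medians = {code: median(amts) for code, amts in by_code.items()}
    return [c for c in claims if c["amount"] > threshold * medians[c["code"]]]

for claim in flag_outliers(claims):
    print(f"Review {claim['claim_id']}: ${claim['amount']:.2f} for code {claim['code']}")
```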
Eventually, they may even help analysts to explore various "what-if" scenarios to test the impact of policy changes on revenue and compliance.
Generative AI can also help reduce billing errors, which are substantial. Access Project, a Boston-based health care advocacy group, has found that up to 80% of all medical bills contain errors. In addition, Kaiser Health has reported that medical billing errors account for $68 billion in lost health care spending. Generative AI can provide real-time suggestions and recommendations to coding staff during the coding process, helping ensure accurate and compliant coding by offering insights into appropriate codes, modifiers or documentation requirements based on industry-standard coding guidelines.
In a similar vein, generative AI can apply initial-level reasoning to a health insurance company’s data to provide a number of benefits. For instance, models can assist billing coders by offering real-time suggestions or recommendations during the coding process. As coders input information related to medical procedures or diagnoses, the model can suggest accurate codes based on industry-standard coding systems such as ICD-10. This helps reduce coding errors and ensures invoices reflect the procedures actually performed.
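As a toy illustration of the suggestion step, the sketch below matches a coder's free-text description against a tiny sample of ICD-10 entries using Python's difflib. The three entries are illustrative only, not a coding reference; a production system would draw on the full code set and the clinical documentation, and any suggestion would still be confirmed by the coder.

```python
# Offer candidate codes whose descriptions are closest to what the coder typed.
# The lookup table is a tiny illustrative sample, not a coding reference.

from difflib import get_close_matches

icd10_sample = {
    "type 2 diabetes mellitus without complications": "E11.9",
    "essential (primary) hypertension": "I10",
    "unspecified asthma, uncomplicated": "J45.909",
}

def suggest_codes(description, n=2):
    """Return (description, code) candidates whose wording is closest to the input."""
    matches = get_close_matches(description.lower(), list(icd10_sample), n=n, cutoff=0.3)
    return [(m, icd10_sample[m]) for m in matches]

print(suggest_codes("type 2 diabetes, no complications"))
# e.g. [('type 2 diabetes mellitus without complications', 'E11.9')]
```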
Models can also analyze invoices or claims data to identify potential errors or discrepancies. By comparing the provided information with established coding guidelines and industry standards, the model can flag inconsistencies or potential coding mistakes. This allows insurance companies to catch and rectify errors before invoices are sent out.
Generative AI models can be trained to understand and apply complex billing rules and regulations specific to insurance policies. By leveraging this knowledge, the model can validate invoices against predefined rules and criteria. It can highlight any noncompliant or questionable billing practices, helping insurance companies maintain accuracy and compliance.
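A bare-bones sketch of that rule-validation idea follows, with invented rules and invoice fields standing in for the payer-specific policies a real system would encode (or have the model check against policy text).

```python
# Each rule is a small function that inspects an invoice and returns a problem
# description or None; violations are collected for human review before the
# invoice goes out. Rules and fields here are illustrative assumptions.

def requires_auth(invoice):
    if invoice["procedure"] == "MRI" and not invoice.get("prior_auth"):
        return "MRI billed without prior authorization"

def amount_in_range(invoice):
    if invoice["amount"] <= 0:
        return "Billed amount must be positive"

RULES = [requires_auth, amount_in_range]

def validate(invoice):
    """Run every rule and collect any violations."""
    return [msg for rule in RULES if (msg := rule(invoice))]

invoice = {"invoice_id": "INV-42", "procedure": "MRI", "amount": 850.0, "prior_auth": False}
for issue in validate(invoice):
    print(f"{invoice['invoice_id']}: {issue}")
```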
Generative AI has the potential to help clinicians better predict which patients are at risk of readmission. By informing adjustments to care plans and improving adherence through the channels that best support each patient, it can enhance a clinician's ability to prioritize and engage patients who have a higher likelihood of being readmitted to the health care system.
By considering various data such as billing, insurance, treatments and health history, generative AI can provide valuable insights. This is particularly significant in the context of value-based care, where health care providers are reimbursed based on the outcomes they achieve for patients rather than solely on the services rendered.
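One hedged sketch of how that combined data might be packaged for the model: the patient record and the call_llm() helper below are hypothetical, and any output would be reviewed by the care team rather than acted on automatically.

```python
# Assemble billing, treatment and history data into a prompt asking the model to
# surface readmission risk factors and suggest an outreach channel for review.
# The record is fabricated and call_llm() stands in for the real model endpoint.

patient_record = {
    "recent_admissions": 2,
    "primary_diagnosis": "congestive heart failure",
    "medications": ["furosemide", "lisinopril"],
    "missed_followups": 1,
    "insurance_plan": "Medicare Advantage",
}

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for the organization's model endpoint."""
    return "(model response would appear here)"

def readmission_review(record: dict) -> str:
    summary = "\n".join(f"- {k.replace('_', ' ')}: {v}" for k, v in record.items())
    prompt = (
        "You are assisting a clinician. Based only on the summary below, list the "
        "factors most associated with readmission risk and suggest one follow-up "
        "channel (phone, portal message, or home visit) with a short rationale.\n"
        f"Patient summary:\n{summary}"
    )
    return call_llm(prompt)

print(readmission_review(patient_record))
```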
Both hospitals and insurance companies can launch generative AI chatbots to enhance the patient experience by answering billing and other routine questions.
Let’s say a user asks a question related to hospital procedures or costs. The LLM can analyze the billing data to provide more accurate and contextually relevant answers. For instance, it can provide specific cost estimates for procedures based on the historical billing records. It can also consider patient geography, insurance plan, or other factors to provide personalized insights into billing and payment options.
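To illustrate, the sketch below computes a cost estimate from a few fabricated billing records filtered by procedure, region and plan; the resulting figures would be handed to the model as context for its answer, rather than having the model invent numbers.

```python
# Filter historical billing records by procedure, region and plan, then report
# the typical patient cost range. Records and field names are illustrative.

from statistics import median

billing_history = [
    {"procedure": "knee MRI", "region": "Northeast", "plan": "PPO", "patient_cost": 325.0},
    {"procedure": "knee MRI", "region": "Northeast", "plan": "PPO", "patient_cost": 410.0},
    {"procedure": "knee MRI", "region": "Northeast", "plan": "HMO", "patient_cost": 290.0},
]

def estimate_cost(procedure, region, plan):
    costs = [r["patient_cost"] for r in billing_history
             if r["procedure"] == procedure and r["region"] == region and r["plan"] == plan]
    if not costs:
        return None  # The chatbot should say it cannot estimate, not guess.
    return {"median": median(costs), "low": min(costs), "high": max(costs)}

print(estimate_cost("knee MRI", "Northeast", "PPO"))
# e.g. {'median': 367.5, 'low': 325.0, 'high': 410.0}
```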
All the examples above would require prompt engineering as well; chatbots just happen to be one of the simplest applications of it. Careful prompt engineering is critical to improving the reliability of chat interfaces.
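As one illustrative example of such prompt engineering (the wording is an assumption, not a recommended production prompt), a billing chatbot's instructions might constrain the model to the supplied records and require it to defer to staff when unsure:

```python
# Guardrail instructions combined with retrieved records and the patient's
# question. The text is illustrative; real prompts would be tested and refined.

SYSTEM_PROMPT = """You are a billing assistant for patients.
Rules:
1. Answer only from the records provided in the prompt; never invent amounts or codes.
2. If the records do not contain the answer, say so and offer to connect the patient
   with a billing representative.
3. Do not give medical or coverage-determination advice; direct those questions to staff.
"""

def build_prompt(records: str, question: str) -> str:
    """Combine the guardrail instructions, retrieved records and the user's question."""
    return f"{SYSTEM_PROMPT}\nRecords:\n{records}\n\nPatient question: {question}"

print(build_prompt("Claim 1234: knee MRI, patient responsibility $350.00",
                   "Why was I billed $350 for my MRI?"))
```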
Generative AI, like all AI, requires human intervention to assess its outputs and make necessary corrections. Internal experts, such as hospital and insurance analysts, possess the necessary expertise to question the AI's outputs and provide guidance or prompts to ensure the accuracy of the generated information.
This is why we believe the best immediate opportunity for generative AI in health care lies in helping internal experts do their jobs more efficiently and arrive at critical insights sooner. Empowering employees with AI in this way improves their overall experience and, in turn, creates better experiences for patients. Moving forward with generative AI now will transform the total health care experience, paving the way for wider access, increased adherence and better overall outcomes.
Trenholm Ninestein, senior director product and digital health lead at Rightpoint, is a veteran product leader with a passion for mobile design and behavior change. Equipped with a background in film, television, and radio, his art of storytelling has adapted to delivering innovative technologies. He has spent the last 15 years focused on delivering mobile-first solutions in education, staffing and health care.