Generative AI tools enable providers or their vendor partners to quickly ingest all manner of content to help patients find information, care, locations and providers without picking up the telephone.
It was just about a year ago that the buzz around generative AI became pervasive. While the media commotion has diminished somewhat, interest remains high – especially among healthcare leaders who want to capture its value without exposing their organizations to the risk of misinformation.
As the industry has explored how best to incorporate these new tools, one use case has emerged as delivering significant value to providers: giving patients comprehensive, easy-to-find information via virtual assistants on the website, which reduces call volume and removes routine tasks from overextended staff.
Speed-to-value virtual assistants
Generative AI tools enable providers or their vendor partners to quickly ingest all manner of content to help patients find information, care, locations and providers without picking up the telephone. Within hours, the technology can consume information housed on a website or available in Word documents, PDFs, spreadsheets, videos and other formats. The content is processed into a knowledge base and configured – using equally important conversational AI tools – to present responses to an extensive range of patient questions.
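The ingestion step above can be illustrated with a minimal sketch. Everything here is hypothetical and deliberately simplified: documents are reduced to plain text and retrieval is naive keyword overlap, whereas commercial tools extract text from PDFs, spreadsheets and video transcripts and use semantic search.

```python
# Minimal sketch of building a searchable knowledge base from mixed content.
# Hypothetical simplification: every document arrives as plain text; real
# products also extract text from PDFs, Word files, spreadsheets and video.
from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    entries: list = field(default_factory=list)  # (source, text) pairs

    def ingest(self, source: str, text: str) -> None:
        # Split into paragraph-sized chunks so responses stay focused.
        for chunk in text.split("\n\n"):
            if chunk.strip():
                self.entries.append((source, chunk.strip()))

    def search(self, question: str) -> list:
        # Naive keyword overlap; production systems use semantic embeddings.
        words = set(question.lower().split())
        scored = [(len(words & set(t.lower().split())), s, t)
                  for s, t in self.entries]
        return [(s, t) for score, s, t in sorted(scored, reverse=True) if score]


kb = KnowledgeBase()
kb.ingest("faq.docx", "Visiting hours are 9am to 8pm daily.\n\nParking is free in Lot B.")
kb.ingest("website", "Our cardiology clinic accepts new patients on referral.")
print(kb.search("What are your visiting hours?")[0])
```

A conversational layer would then rephrase the retrieved chunk as a natural-language answer; the point of the sketch is only that ingestion and retrieval over existing documents is fast, mechanical work.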
In the past, creating these virtual assistants could take weeks or months, and cost tens of thousands of dollars. Now they can be launched within a day or two at a fraction of the price.
To many, using generative AI to deliver information to patients is a no-brainer. But one significant obstacle has hindered adoption: website content is often out of date or incomplete. Leaders hesitate to launch this new generation of virtual assistants if the assistants cannot meet patient needs and genuinely lighten the provider’s administrative workload.
Auditing the website
Fortunately, these concerns can be addressed. The first step is to audit the current website. Staff tasked with managing the content – the marketing team, patient experience professionals, contact center or front desk personnel, for example – should review the site and evaluate whether it covers the information that patients most commonly seek or that the organization wants to make available.
Creating an FAQ page
It’s likely the team will find gaps and inaccuracies. These can be fixed if resources are readily available. If not, however, the organization can compensate by creating a frequently-asked-questions document and adding it as a page to the website. This will feed the generative AI engine with current information. In addition, content on the website can be augmented with other documents like patient education handouts, call-center scripts, provider directories and more.
As the generative AI virtual assistant is created, organizations should also pay attention to the website’s metadata, such as title tags, meta descriptions and heading tags. These, too, enable generative AI tools to identify, consume and “serve up” information to key audiences. Metadata, of course, should be designed to advance the organization’s SEO strategy and needs to be kept up to date.
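To make the role of metadata concrete, here is a small sketch of how an ingestion tool might pull the title tag, meta description and headings from a page to label content in the knowledge base. The parser class and sample page are illustrative, not any vendor's actual implementation.

```python
# Sketch: reading page metadata (title tag, meta description, headings)
# so ingested content can be labeled and surfaced accurately.
from html.parser import HTMLParser


class MetadataParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {"title": "", "description": "", "headings": []}
        self._tag = None  # tag we are currently inside, if any

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.meta["description"] = attrs.get("content", "")
        elif tag in ("title", "h1", "h2"):
            self._tag = tag

    def handle_data(self, data):
        if self._tag == "title":
            self.meta["title"] += data
        elif self._tag in ("h1", "h2"):
            self.meta["headings"].append(data.strip())

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None


page = """<html><head><title>Mercy Health | Visiting Hours</title>
<meta name="description" content="Hours, parking and contact details.">
</head><body><h1>Visiting Hours</h1></body></html>"""

parser = MetadataParser()
parser.feed(page)
print(parser.meta)
```

Pages with clear, current titles, descriptions and headings give both search engines and ingestion tools the same benefit: an accurate label for what each page actually covers.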
Protecting against misinformation
The peak of generative AI buzz was characterized not only by the promise of the technology but also by the perils of misinformation and hallucinations. Horror stories caused many leaders to slam on the brakes and question whether these tools were appropriate for an industry as heavily dependent on accuracy and security as healthcare.
These concerns have been eased, however, by the realization that organizations can fully control the information used as the source for generative AI solutions. Providers do not need to turn their virtual assistants loose on the open web. Instead, they can limit the data sources to those they know are accurate: their own documents and content. If an organization uses generative AI to ingest only vetted and approved information, it eliminates the opportunity for misinformation to contaminate the output.
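The safeguard described above – answering only from approved sources and declining otherwise – can be sketched in a few lines. The content, topics and fallback message here are invented for illustration; a real assistant would retrieve from the full knowledge base rather than a hand-written dictionary.

```python
# Sketch of the "closed-book" safeguard: the assistant returns only vetted,
# approved text and declines when nothing matches, rather than letting a
# general-purpose model improvise an answer. All content here is illustrative.

APPROVED_CONTENT = {
    "visiting hours": "Visiting hours are 9am to 8pm daily.",
    "parking": "Parking is free in Lot B next to the main entrance.",
}

FALLBACK = "I don't have that information. Please call the front desk."


def answer(question: str) -> str:
    q = question.lower()
    for topic, vetted_text in APPROVED_CONTENT.items():
        if topic in q:
            return vetted_text  # only vetted, approved text is ever returned
    return FALLBACK  # decline instead of guessing


print(answer("Where can I find parking?"))
print(answer("What's the best restaurant nearby?"))
```

The key design choice is the explicit fallback: a question outside the approved content produces a referral to staff, never a fabricated answer.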
A second safeguard can be added by balancing automated ingestion with internal review. After the generative AI technology has consumed the identified documents and processed them into a knowledge base, subject matter experts can review the information. If needed, they can edit or modify what appears and how it is presented.
While patient-facing virtual assistants are the most obvious application of generative AI, one other use case is gaining ground. Providers can use the technology to draw from existing content – protocols and pathways, for instance, that guide patients in preparation for a procedure – to create guidance for similar services. Perhaps a large health system offers a range of orthopedic procedures across various facilities. Core pathways for knee surgery can be developed and then replicated – with necessary variations – to support other procedures like hip replacements. The opportunity to draw on existing and proven information and automate variations with minimal staff input saves significant time, money and effort.
Observing the development of generative AI use cases over the past year has been fascinating. As with many other advances, it has taken time for the core value to be recognized. But healthcare visionaries have already begun to report measurable results as they leverage the technology to reduce demands on clinical and administrative staff – and, in the process, deliver a more satisfying consumer experience to patients.
Patty Riskind is CEO of Orbita.