UCLA Health was selected as an ‘early adopter’ institution to pilot a GPT-4-driven tool developed by our electronic health record (EHR) vendor. The tool leverages GPT (generative pretrained transformer) technology to generate draft responses to patient questions submitted via myUCLAhealth. Physicians at UCLA Health primary care clinics participated in a 5-week pilot from September to October of 2023.
Online patient messaging through EHR portals has opened a new channel for communicating with clinicians on topics like lab results, medication questions, or general health inquiries. At the same time, the volume of these messages has grown sharply, adding to clinicians' time burden and lengthening response times for patients. Leveraging the promise of generative AI, UCLA Health, in collaboration with our EHR vendor, has deployed a GPT-powered solution: automated drafting of responses to patient messages. Physicians can then review a draft, edit it as needed, and send it to the patient, or decline the draft and write a response from scratch. The goal of the pilot was to assess usage among participating physicians, e.g., the number of generated draft responses (GDRs) used and physicians' positive or negative impressions of them.
For this tool, the first generative AI implementation at UCLA Health, nine pilot providers were identified across several primary care clinics. Despite many unknowns, such as how to develop prompts that fine-tune the draft responses, the pilot launched within weeks of its proposal. At the same time, representatives from the Health AI Council (HAIC) and OHIA worked with the EHR vendor and model stakeholders to begin the AI model risk assessment process.
Due to the novelty of this implementation, team members extended their involvement beyond traditional roles. For example, Meenakshi Gupta, the application analyst assigned to this project, played a vital role in building and integrating the tool, developing and refining prompts, and meeting weekly with pilot users to ensure project success. OHIA representatives developed new user metrics, in addition to existing metrics provided by the EHR vendor, to monitor usage characteristics, inform prompt edits, and provide valuable information for the Health AI Council for use in its model assessment.
In line with peer institutions, GDRs were used approximately 15-20% of the time during the pilot phase. Pilot physicians noted that the drafts were most useful when the response to a patient's inquiry called for empathy and understanding. The HAIC evaluation of this model, the first vendor-supplied model and the first generative AI model to complete the review process, provided valuable insight into governance of vendor AI algorithms, and of generative AI in particular, that can be applied to future deployments.
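The 15-20% figure above is a utilization rate: the share of draft-offered messages in which the physician actually used the draft (sent it as-is or after editing) rather than declining it. As a minimal sketch of how such a metric could be computed from message-level audit logs, consider the following. The field names (`draft_action` and its values) are hypothetical illustrations, not the EHR vendor's actual reporting schema.

```python
from collections import Counter

def gdr_utilization(log_entries):
    """Fraction of draft-offered messages where the GDR was used,
    i.e., sent as-is or edited then sent, rather than declined.

    `log_entries` is a list of dicts with a hypothetical
    'draft_action' field; values here are illustrative only.
    """
    counts = Counter(entry["draft_action"] for entry in log_entries)
    used = counts["sent_as_is"] + counts["edited_then_sent"]
    offered = sum(counts.values())
    return used / offered if offered else 0.0

# Example: 20 messages with drafts offered, 3 drafts used -> 15% utilization
sample = (
    [{"draft_action": "sent_as_is"}] * 1
    + [{"draft_action": "edited_then_sent"}] * 2
    + [{"draft_action": "declined"}] * 17
)
print(gdr_utilization(sample))  # 0.15
```

Counting "edited then sent" as usage matters here, since the intended workflow is for physicians to review and revise drafts rather than send them verbatim.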
One of the greatest challenges was incorporating user feedback, which took the form of both discrete data and subjective user responses, into improvements to the prompts and thus to the generated draft responses. This required weekly meetings with pilot users to gather firsthand impressions. In addition, OHIA analysts and data scientists continue to refine and transform the data to glean actionable insight from user experiences.
This project will continue to expand to more providers at additional clinics.
For questions or inquiries, please reach out to the OHIA Leadership Team.