- ChatGPT is a free online chatbot that creates custom content at users' requests: debugging computer code, composing songs, writing student essays, and more
- While some tout ChatGPT as the next solution for the CDI space, it has a long way to go before it offers real utility for clinical documentation integrity
- ChatGPT is based on Natural Language Processing (NLP) which is not well suited to interpreting and analyzing data to determine what’s missing
- In its current iteration, ChatGPT isn't capable of producing documentation with enough detail or specificity to replace physician-written progress notes or CDS-composed queries
Iodine Intelligence tackles a new challenge in healthcare’s mid-revenue cycle every month, shedding light on problems and solutions and sharing valuable insights from industry experts. Listen to Episode 11 – Beyond Buzzwords: Understanding the Real Potential of ChatGPT in CDI to learn more.
OpenAI debuted ChatGPT in November 2022, and since then there has been widespread discussion of its potential and its implications for everything from schoolwork to medicine, from marketing to law school — and Clinical Documentation Integrity hasn't been excluded from the conversation.
ChatGPT is currently a free online program that works like a chatbot: you type in questions or requests and it spits back answers. Although it's designed to mimic human conversation, users have found it to be incredibly versatile, using it to write and debug computer code, compose poems and songs, write student essays, and even take the bar exam.
While ChatGPT is impressive, and certainly fun, it has a long way to go before it’s ready to revolutionize the clinical documentation space.
ChatGPT is founded on Natural Language Processing (NLP), which is already deployed widely in the CDI space. And while NLP can be a powerful tool for taking written text and translating it into a form a computer can understand, it is not well positioned to help CDI teams with their chief concern, namely: what is missing from the documentation. NLP isn't capable of looking at information and applying any kind of logic or understanding to its underlying meaning. A medical record can speak very specifically about the signs and symptoms of a medical condition, but if that condition isn't articulated in the documentation, NLP isn't capable of finding it. ChatGPT has the same limitation: it's not trained to identify what the documentation should be. A very different type of AI is needed to interpret and understand documentation.
Additionally, some users are encountering instances where ChatGPT has fabricated its answers. Users will ask ChatGPT a question and get an answer that sounds very scientific and even includes citations, but upon digging deeper, they discover the answer was fake and ChatGPT invented the citations. ChatGPT has also revealed concerning biases in the data it's been trained on, for example answering "describe a good scientist" with "a white man." Both of these issues can have serious consequences when applied to people's health and medical records.
Perhaps the largest barrier to ChatGPT in the CDI space is that it doesn't get to the root of the problem: mimicking the way a physician writes progress notes is likely to reproduce the same gaps we find in documentation now, and with the physician a further step removed, there's the potential for the documentation to be even less accurate — because the physician never touched it. In its current iteration, if you ask ChatGPT to write a progress note or a query, it's not capable of producing the necessary level of detail because it doesn't have the right data or training as a foundation. It would require a huge clinical data set, composed of millions of medical records, to effectively emulate a clinician's brain when creating documentation.
Further, even if ChatGPT were capable of writing a progress note or a query, outputs that are just large blocks of text have limited utility in a modern healthcare setting, where data must flow downstream to multiple programs and software systems. And no matter what, ChatGPT requires the author to review its outputs and ensure that what was written into the document is accurate — even with AI you can still end up with typos, or words and phrasing that aren't appropriate or accurate for the situation. (Just ask anyone who's blindly relied on auto-correct in a text message, or auto-completion in an email.)
Some better uses for ChatGPT in healthcare might be handling scheduling and frequently asked questions from patients, helping doctors and patients with translations, and drafting emails, including letters to fight insurance denials. Use cases in the outpatient setting might also have a lower hurdle to clear: outpatient encounters are much shorter, so the details that need to be included are more streamlined and limited to the specific encounter rather than the overall picture of the patient. Even helping with day-to-day communications between providers, coders, and CDI nurses can be a time-saver, but as it stands today, ChatGPT at best gets you a first draft — a human is still required to refine and finalize it.
Interested in Being on the Show?
Iodine Software's mission has been to change healthcare by applying our deep experience in healthcare along with the latest technologies, like machine learning, to improve patient care. The Iodine Intelligence podcast is always looking for leaders in the healthcare technology space to further the conversation on how technology and clinicians can work together to empower intelligent care. If that sounds like you, we want to hear from you!