The Spectrum of AI in Healthcare: Understanding the Levels of Intelligence in AI

Key Takeaways:

  • There is enormous potential for leveraging AI in healthcare, including removing toil from staff workloads, increasing efficiency and productivity, and improving consistency in results
  • It can be difficult to truly evaluate AI-powered solutions due to buzzword inflation and the fact that AI is an umbrella term covering a wide range of technologies
  • Asking some foundational questions is key to truly understanding whether an AI-powered solution aligns with your business need.

Iodine Intelligence tackles a new challenge in healthcare’s mid-revenue cycle every month, shedding light on problems and solutions and sharing valuable insights from industry experts. Listen to Episode 12: The Spectrum of AI in Healthcare: Understanding the Levels of Intelligence in AI to learn more.

Artificial Intelligence (AI) has been an area of significant interest for the healthcare industry for years, and that interest is only growing in the face of prevailing financial headwinds and staffing shortages. However, in a crowded field of AI-powered solutions promising to solve healthcare’s most pressing issues, it can be difficult to truly evaluate their capabilities and potential. In this month’s episode, Priti Shah, Iodine’s Chief Product and Technology Officer, provides a framework for making sense of AI and its potential, along with some of its applications in the mid-revenue cycle space.

The Promise of AI

The real world applications for leveraging AI to solve some of the pain points in healthcare can be sorted into a few broad buckets:

  1. Automation: One of the most basic applications is simply automating tasks that you don’t actually need a human to perform.
  2. Efficiency: If a task can’t be automated entirely because it still requires human judgement, you can still supplement and enhance your staff with AI-powered tools.
  3. Timeliness: You can ask an AI model to actively evaluate your entire patient census 24/7, a level of coverage you will realistically never be able to staff humans to provide.
  4. Consistency: An AI model can help establish a baseline level of competence, rather than relying on the varying strengths and weaknesses of individual staff with different experience and clinical backgrounds.

AI is Confusing

Although recognizing the promise of AI is easy, evaluating an AI-powered solution can quickly become confusing, because the truth is that not all AI is the same, and not all AI can solve all of healthcare’s unique problems.

Today, there are two main barriers to understanding and evaluating AI. The first is that AI is a hot space right now and the term gets bandied about a lot: everyone is trying to claim that they use some form of AI, and they all mean something slightly different when they make that claim. The second is that even within the field of computer science, AI is not well defined; it’s an umbrella term covering many different tools and technologies for solving many different types of problems. The only real common theme is applying computer systems to tasks that normally require human intelligence because they’re too hard or complicated for computers.

AI models are constantly walking a tightrope, balancing precision and sensitivity, and you can’t over-pivot on either axis, because doing so essentially renders the model impractical to use. But that also means we have to understand that no model is perfect, and you have to choose what balance of false negatives and false positives you can live with.
“AI’s accuracy, or success, is always on a spectrum, and there’s always tradeoffs that I think we should be aware of.”

PRITI SHAH, CHIEF PRODUCT AND TECHNOLOGY OFFICER

De-Mystifying AI

Knowing the benefits that AI can bring, but acknowledging the challenges of truly understanding and evaluating it, Priti Shah offers the following framework for thinking about AI: some basic, fundamental questions to keep in the back of your mind when considering an investment in an AI-powered tool.

Use Case

The first question to ask yourself is: what problem are you trying to solve?

This is important for two reasons: first, there’s a wide range of AI tools and they’re not all equally good at all tasks; and second, almost every AI model you’re going to encounter right now is trained to do one very specific task.

There are different tools available in the AI space that are better suited for solving some types of problems than others. 

  • Documentation Interpretation: NLP or large language models
  • Classification Problems: Gradient boosted machines or neural networks
  • Image Recognition: Deep learning models

Different tools have different applicability to different problems, so you shouldn’t just focus on the latest and greatest. ChatGPT is currently the talk of the town, and while there are some things it does very well, its limitations have been well demonstrated. Powerful as it is in its domain, no one is trusting it to make a clinical diagnosis.

There is no silver bullet; you should not expect any one technology to solve all your problems. So when selecting an AI for investment, focus on: what is this AI actually trying to do for me, and does that match up with the business problem I am trying to solve?

Performance

The second question to ask yourself is: how well does the model actually perform?

You can’t assume that just because a solution is powered by AI, the AI is highly successful. While some may think, “If the AI is trained to do this, it’s doing it perfectly,” that’s rarely the case. The reality is AI’s success is always on a spectrum, and generally that comes with trade-offs that you need to be aware of.

Artificial intelligence models are always balancing sensitivity and precision. You can create a model that’s so precise it has no false positives, but your criteria will be so narrow you’ll miss most of what you’re searching for, and have a ton of false negatives. Conversely, you can create a model that is incredibly sensitive, but by casting a wider net you’ll also catch a bunch of false positives.

Dialing up either precision or sensitivity too high can result in a model that is essentially useless for practical purposes, so you have to choose what balance of false positives and false negatives you can live with.
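As a minimal sketch of that tradeoff, both measures can be computed directly from a model's error counts. The counts below are invented purely for illustration:

```python
# Precision and sensitivity (recall) from a model's raw counts of
# true positives (tp), false positives (fp), and false negatives (fn).
def precision(tp, fp):
    return tp / (tp + fp)

def sensitivity(tp, fn):  # also called recall
    return tp / (tp + fn)

# A "narrow" model: almost no false alarms, but it misses most true cases.
narrow = (precision(tp=20, fp=1), sensitivity(tp=20, fn=80))

# A "wide-net" model: catches most true cases, at the cost of false alarms.
wide = (precision(tp=90, fp=60), sensitivity(tp=90, fn=10))

print(narrow)  # roughly 0.95 precision, 0.20 sensitivity
print(wide)    # 0.60 precision, 0.90 sensitivity
```

Neither model here is "wrong"; which balance is acceptable depends entirely on the use case.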

Timing

The third question to ask yourself when evaluating AI is: when is it capable of making its predictions?

An example of this coming into play would be giving your sepsis coordinators an AI model that can predict sepsis. If the model is only capable of making its predictions post-discharge, it doesn’t actually fit the use case of your sepsis coordinators, who want to identify sepsis within the first 24 hours of a patient’s stay. Timeframe is critical when making a decision about deploying AI.

Explainability

And the final thing you should consider is: how much insight can the model actually give you into why it’s making its predictions?

There are use cases where all you care about is the answer and how confident the model is in that answer. But in healthcare, where you’re interacting with other people and dealing with someone’s health information, it cannot be a black box. You have to be able to discuss and explain why the AI is making this determination.

This is especially true given what we discussed earlier: no AI is perfect; it’s always a balance of sensitivity and precision, and therefore also a balance of false positives and false negatives. You need clear explanations of the predictions so you know when to trust the system’s predictions, and when to apply your own judgment.

Iodine and AI

To further help conceptualize this, below is an example of an AI company answering these four fundamental questions.

  • What problem are we trying to solve?
    • At Iodine, we leverage machine learning models to take in raw clinical data (lab results, ordered medications, performed treatments, clinician observations, etc.) and look at all those disparate pieces of data in conjunction with one another to make predictions about various disease conditions. We leverage those clinical predictions in a variety of ways. In the CDI space, we compare the clinical reality of the patient against what’s documented, and then look for gaps in between that can be clarified, so that when it’s time to bill and code, the documentation is complete and accurate, and health systems get paid. In the utilization management space, we compare the clinical reality of the patient to the level of care we would expect a patient like that to receive, to aid UM nurses in determining the appropriate level of care and admission status for patients.
  • How well do our models perform?
    • Iodine is fortunate enough to have access to a vast clinical dataset, and this enormous set of clinical data fuels our models. Having more data enables us to target rarer diseases. We’ve also been iterating, experimenting, and improving on our models for seven years. Data science is a process of discovery; for some of the more complicated disease states, we have gone through seven or eight different generations, each new version building upon previous advancements to increase performance. Across our client cohort, we’ve found that we’ve been able to substantially improve clients’ metrics: 92% of facilities saw an increase in productivity (query volume), with the average facility generating more than twice as many queries per CDS as they did before Iodine. With our models surfacing the cases with the greatest likelihood of opportunity, CDI specialists were two-thirds more likely to query a reviewed patient. And this has real, measurable impact, including increased MCC capture rates, increased CMI, and, on average, an additional $3.5M in annual appropriate reimbursements per 10,000 admissions.
  • When are our models making their predictions?
    • Our models work concurrently with the patient stay, constantly reevaluating in real time as new information becomes available.
  • How much insight do we give into why the model made its predictions?
    • We bubble up the most relevant clinical evidence to our users so they can see why we think this way about a patient. We’re not speaking about a patient in the abstract, or about a general type of patient; it’s specifically: why does the model think this patient, Jane Smith, has sepsis, based on the way she’s presented so far?

Hopefully this framework is helpful for evaluating AI solutions. Keeping these four fundamental questions in the back of your mind will help demystify the hype around AI, clarify the different types of AI, and ensure that AI is something you’re considering, but considering for the right reasons.


Interested in Being on the Show?

Iodine Software’s mission has been to change healthcare by applying our deep experience in healthcare along with the latest technologies like machine learning to improve patient care. The Iodine Intelligence podcast is always looking for leaders in the healthcare technology space to further the conversation in how technology and clinicians can work together to empower intelligent care. If that sounds like you, we want to hear from you!

Progression of Accuracy with Lance Eason

Key Takeaways:

  • Iodine Software has been iterating on its models for the last seven years, with each new generation unveiling advancements and improvements in accuracy
  • Artificial Intelligence (AI) is an umbrella term: natural language processing (NLP) that merely scans the written record is not the same as machine learning (ML) that surfaces clinical predictions based on the clinical evidence

Iodine Intelligence tackles a new challenge in healthcare’s mid-revenue cycle every month, shedding light on problems and solutions and sharing valuable insights from industry experts. Listen to Episode 5: Progression of Accuracy with Lance Eason to learn more.

Data analytics is nothing new, and big data is leveraged by everyone from Google in targeting ads to Starbucks in picking the best location for a new store. However, surfacing clinical insights out of healthcare data is an incredible challenge. Iodine’s CognitiveML seeks to conquer this final frontier: processing raw clinical data to make precise predictions about specific disease conditions.

While there is an incredible range of tools at one’s disposal (with dozens of different model types that can be applied to a problem), each disease has its own idiosyncrasies, and the same model often doesn’t work equally well for every type of disease state. While some conditions, such as an electrolyte imbalance, may be relatively straightforward to predict, other conditions are incredibly complex.

The genesis of Iodine’s machine-learning models lies, in part, in the inadequacies of traditional methods in the face of complicated conditions, namely: sepsis. F1 scores are frequently used to measure a model’s accuracy; they balance how many of the true cases a model catches against how many of its predictions are correct. The profoundly limiting nature of a rules-based approach resulted in an F1 of only 0.21 in the original sepsis model. This spurred Iodine into investigating different technologies; the first machine-learning model applied to sepsis more than doubled the model’s accuracy, from 0.21 to 0.53. Since its original launch, Iodine has gone through numerous iterations of the sepsis model, with each subsequent generation building upon previous advancements and introducing new improvements. Iodine’s F1 for sepsis is now in the 0.80 range, with plans to continue experimenting, iterating, and improving.
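For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall, which is why a model weak on either axis scores poorly even if it is strong on the other. The precision/recall splits below are invented for illustration and are not Iodine's actual figures:

```python
# F1: the harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# A model cannot buy a high F1 with precision alone:
print(round(f1(0.95, 0.12), 2))  # 0.21 -- very precise, but misses most cases
print(round(f1(0.53, 0.53), 2))  # 0.53 -- a balanced model scores its average
```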

“If we had gone and said, ‘We are using AI to predict sepsis’ seven years ago, and we say it today, we’re saying two different things, because our predictions at that time were 0.5 versus 0.8 nowadays. So there’s been a significant increase in the actual quality of the predictions we’re making over time.”
– LANCE EASON, CHIEF DATA SCIENTIST

These improvements stem from a variety of advancements and experiments, including:

  • a growing pool of data
  • mapping of input values (e.g., labeling lab values)
  • applying successful models to similar disease conditions (e.g., UTIs and pneumonia are both infectious diseases; a technique that is successful for one might be similarly successful with the other)

These combine to create an ML-AI engine seven years in the making that is now uncatchable.

Competitors wishing to switch to a similar approach would be starting almost a decade behind. But the reality is, many of the other technologies in the clinical documentation integrity (CDI) space take a radically different approach to solving the problem of documentation leakage: focusing on interpreting the clinical documentation to determine what’s missing. “The problem with that approach,” says Lance Eason, Chief Data Scientist, “is the documentation itself is not where most of the CDI opportunity is. What CDI is about is making sure what is documented for the patient matches what’s actually going on clinically with the patient, and if you’re not looking at that other side of the equation…then just reading the documentation is not going to tell you all the things that aren’t documented.”

AI is a wide umbrella that encompasses everything from NLP to image recognition, and the degree of complexity and ability varies across technologies. Claiming to leverage AI because you use NLP to scan documentation is not equivalent to using machine learning to make clinical predictions about specific disease states based on the clinical evidence. And as Iodine itself learned in the beginning, the method of AI applied can have dramatic effects on the efficacy and accuracy of the results.



Rules-Based Prioritization Versus Machine Learning with Lance Eason and Troy Wasilefsky

Key Takeaways:

  • Rules-based systems are fundamentally limited
  • Thresholds lead to patients with clinical evidence for a disease state being ignored because they don’t meet the cut-off
  • The mid-revenue cycle is inherently clinical in nature, and solutions lie in establishing a source of clinical truth for a patient
  • Machine learning allows for more nuanced, accurate predictions

Iodine Intelligence tackles a new challenge in healthcare’s mid-revenue cycle every month, shedding light on problems and solutions and sharing valuable insights from industry experts. Listen to Episode 2: Rules-Based Prioritization Versus Machine Learning with Lance Eason and Troy Wasilefsky to learn more.

Hospitals and healthcare providers are currently facing enormous staffing challenges, and Clinical Documentation Integrity departments are not exempt. These workforce difficulties are likely here to stay, and with limited human resources, CDI departments turn to technology solutions to help scale their teams and improve the process. (Listen to Episode 1 here to learn more about the staffing challenges CDI departments face, and how hiring more specialists isn’t the answer, but deploying technology with AI is). 

There are a wide variety of products on the market purporting to solve the issues CDI teams face. Many leverage AI to help, but not all AI is created equal. Some adopt a rules-based approach to help identify cases in need of review, but software based on rules, thresholds, and cut-offs is fundamentally limited.

There are several core problems with a rules-based approach: 

  1. Arbitrary cutoffs. This type of technology is forced to create a confidence threshold, for example: how high do lactate levels need to be for me to be 80% confident this patient has sepsis? You will inevitably have patients who fall just below this threshold. The system ends up discarding patients who don’t have enough clinical evidence, despite the fact that there is some.
  2. A limited number of factors are examined. Typically rules-based systems only examine 4 or 5 criteria, providing a very narrow, limited view that doesn’t allow for the capture of the patient’s full clinical picture. 
  3. Factors are examined independently. Rather than examining patient symptoms in conjunction with one another, they are all examined independently (for example, lactic acid levels and heart rate are looked at separately). The combination of a small number of factors, each looked at in isolation, creates a lot of opportunity for patients who do have clinical evidence of a disease state to still fail the letter of the law in criteria.
  4. Divorced from the clinical nature of the task. Rules-based systems approach the problem algorithmically, trying to diagnose a patient based on a set of rules, which is not how medicine works.
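The threshold problem in particular can be sketched in a few lines. The factor names and cut-off values here are hypothetical, chosen only to illustrate the failure mode, and bear no relation to any real system's clinical criteria:

```python
# A toy rules-based check: each factor is tested independently against a
# hard cut-off, and every test must pass before the case is flagged.
def rules_based_flag(lactate, heart_rate, wbc):
    return lactate >= 4.0 and heart_rate >= 120 and wbc >= 12.0

# A patient sitting just under every cut-off is silently discarded, even
# though the factors taken together are suggestive.
print(rules_based_flag(lactate=3.9, heart_rate=119, wbc=11.9))  # False
print(rules_based_flag(lactate=4.0, heart_rate=120, wbc=12.0))  # True
```

The near-miss patient generates no review at all, which is exactly the arbitrary-cutoff failure described in point 1 above.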

The failures of these systems lead to false positives, false negatives, and ultimately loss in confidence from the CDI team. CDI specialists stop using the product, go back to trying to review every case every day, and are essentially back to square one. 

“Because we are actually unlocking the ability to understand what’s clinically going on with each patient, that allows us to tell our customers: specifically for this patient here, here is the intervention that’s necessary, here’s what you need to look at for this patient. And we don’t speak in generalities.”
– LANCE EASON, CHIEF DATA SCIENTIST

Machine learning provides an answer to each of the core failings of rules-based systems.

Machine learning doesn’t utilize thresholds or cut-offs, and it often examines dozens of factors, not just four or five. While these factors may not all be slam-dunk criteria giving definitive answers, they all contribute to a much more nuanced understanding of the patient. Additionally, these factors are all examined in conjunction with each other. Ultimately, what machine learning excels at is recognizing patterns in large pools of data. Diseases have multiple ways they can present, and machine learning is capable of identifying disparate clusters of symptoms that are nonetheless all indicative of the same disease state. At Iodine, the goal is to establish a source of truth: a complete clinical picture of what happened to the patient. By unlocking the ability to truly and accurately understand what’s clinically going on with each patient, we’re able to guide our clients specifically, rather than speaking in generalities.

While it’s critical for technology aimed at improving the CDI process to leverage AI, the type of AI powering the tools is just as important. Teams using software that relies solely on thresholds and cut-offs with a rules-based system are likely to find themselves sorely disappointed in their results, and to find that the tool purchased to streamline processes and introduce efficiency and accuracy into the department does anything but.



Interview at AHIMA 2020: Reducing Revenue Leakage with Cognitive Emulation

Iodine Software was interviewed at AHIMA 2020 on how HIM leaders can leverage AI and machine learning to reduce revenue cycle leakage. Iodine has pioneered a new machine learning approach called Cognitive Emulation™, and most recently launched the AwareCDI™ Suite. Listen to a recording of the interview here and read the full excerpt below. 

AHIMA: Can you talk about the problems that Iodine is seeing when it comes to mid-revenue cycle leakage?

IODINE SOFTWARE: When it comes to the mid-revenue cycle, it’s critical that the full clinical picture, as reflected in the evidence, is documented correctly, accurately, and in detail, and then fully represented in the code. Unfortunately, problems arise because humans are involved at every step, and because the underlying legacy software is focused only on workflows that don’t holistically solve any of these problems.

For example:

  1. There aren’t enough CDI personnel to review every case every day, which is necessary to ensure documentation integrity. 
  2. Even when pointed to and reviewing the right case, there’s a substantial loss of integrity at the point of decision to query. 
  3. When the query is written, there are fall offs both in physician response and agree rates. 
  4. And finally, there’s further loss of integrity at the coding step due to lack of clinical competency, poor communication, and failure to cross-connect evidence, documentation, and code.

What this results in is lost “earned revenue”, which can significantly impact organizations. 

AHIMA: Could you help us better understand the magnitude of this leakage? 

IODINE SOFTWARE: Prior to the start of COVID-19, health systems were already operating on generally thin margins, with many finance leaders acknowledging that a significant root cause was leakage from their mid-revenue cycle and that “average performance” was still well below optimal results. For the average 250-bed hospital, that is $4.7-11M¹ in revenue each year.

Today, the world is different. Complacency has been fast replaced by a new urgency, and the traditional approach to solving this problem — hiring more staff — is no longer feasible as highly trained and specialized staff to do clinical documentation are in short supply.

We can no longer afford to effectively ‘earn dollars’ only then not to realize them, solely because of unintentional, clerical and clinical human error in documentation and coding. Failing to get this right could mean the difference between positive or negative operating margin, which impacts our real mission – delivering the highest quality clinical care, sustainably.

With this new normal as our backdrop, finance leaders are looking at how to best leverage technology to do things differently – now – and ensure their organizations are financially resilient for the next decade and beyond.

AHIMA: Can you tell me about Iodine’s Cognitive Emulation approach, and what makes it different from others on the market? 

IODINE SOFTWARE: Today, most healthcare technology solutions that support revenue cycle billing, coding and documentation teams use systems and workflows that “think” like computers – not clinicians. They leverage rules and check-lists, which only consider narrative documentation and can lead to unforeseen errors given the many nuances of the healthcare revenue cycle. 

At Iodine, we take a different approach. Cognitive Emulation applies physician-like judgment to the clinical evidence in a patient’s chart and leverages previous learnings to more accurately determine the likelihood a condition exists. Conditions often present in a variety of ways, and by relying on clinical evidence rather than ambiguous thresholds, Iodine is able to identify and learn from these unique instances.

We’re the only organization with the capability of quantifying the magnitude of this problem with precision. And now, we’re the organization uniquely equipped to address it. For each of the leakage points that I talked about earlier, we’ve built and deployed software modules, with each one emulating clinical judgement to solve this earned revenue leakage problem. All of these components seamlessly integrate into a unified suite that we call AwareCDI, powered by our core AI/machine learning technology, Cognitive Emulation.

AHIMA: How could an HIM leader leverage the AwareCDI Suite?

IODINE SOFTWARE: One of our newest products, and an example of how we apply Cognitive Emulation to the mid-revenue cycle, is Retrospect. Retrospective reviews are often the last opportunity to resolve documentation and coding issues prior to final submission of codes for billing and quality reporting purposes. With up to 25% of post-discharge reviews resulting in meaningful education opportunities or code changes that can lead to revenue impact, this final inspection is business-critical. However, this would require the review of every single discharged record to ensure full integrity of each and every outgoing code—which is impossible to do without technology.

At Iodine, we ease the burden on CDI and coding teams by automatically reviewing every record prior to billing. Retrospect provides reconcilers with clear and actionable information to review the right cases at the right time, calling out specific opportunities to clarify documentation and/or final codes in order to improve review confidence and query quality.

We have several clients that are currently utilizing the first version of Retrospect, and the results are pretty amazing.  What we are seeing in our early adopters is that about 30% of cases reviewed in Retrospect resulted in coding changes that impacted the final DRG. Through the use of our CognitiveML engine and prioritization, we were able to support a post discharge workflow that impacted final codes in greater than 60% of cases reviewed.  

To learn more about Iodine and the AwareCDI Suite, click here

¹ 2016 ACDIS Advisory Board Study 

Cognitive Emulation: Insights from Iodine Data Scientists

By Lance Eason, Chief Data Scientist & Jon Matthews, Data Science Manager 

Q: Iodine has pioneered a new machine learning approach called Cognitive Emulation. How would you summarize this approach?

Lance Eason (LE): There are two sides of the picture to consider when thinking about clinical documentation integrity (CDI): 1) what is really happening to the patient, and 2) what has been documented and written about the patient. CDI is all about making those two pictures align, specifically making sure the documentation actually reflects what is clinically going on with the patient. Iodine’s Cognitive Emulation approach helps identify where there are discrepancies between these two sides by looking at the entire clinical picture, not just what has been documented or what aligns with certain rules or thresholds.

Q: Iodine has iterated on its intellectual property over time as new technology becomes available. How have different types of artificial intelligence been utilized, and what has been the experience with each?

Jon Matthews (JM) and LE: Iodine started off with a rules-based approach just using NLP, which calculated the probability conditions were present based on a simple set of rules. We started identifying challenges and limitations – issues with false positives and false negatives, for example – so we decided we needed a more advanced technique. We explored coupling machine learning and advanced natural language processing (NLP) to holistically review the entire patient record, and found the approach worked well. This allowed us to move past surface-level documentation improvements constrained to specificity and capture the full spectrum of possible discrepancies between documentation and clinical reality.

Q: What underlying technology does Cognitive Emulation rely on, and how is it different from legacy solutions on the market?

LE: Iodine’s combination of machine learning and NLP allows it to make separate judgments about the clinical state of the patient versus what is documented. This is a more statistical approach and does not rely on specific hard factors to determine if the patient’s symptoms are above or below certain markers. Instead, it calculates a statistical likelihood that a documentation opportunity exists based on clinical evidence. We let the technology find the patterns and connections between the data rather than arbitrarily defining rules ourselves.

Q: What is a marker-based or rules-based approach, and how is Iodine different?

JM: Rules- or marker-based solutions require either the client or the software authors to define what each condition means, so there is not a lot of flexibility. These approaches follow “if, then” logic and make simple yes or no decisions based on whether inputs are above or below certain values. The problem with this strategy is that one could spend forever iterating on those thresholds manually and still get poor results. You lose a lot of nuance in the data because what is apparently useful might actually be missing a large subset of important features in the models. Machine learning algorithms are able to find these features on their own and do so automatically, which takes the guesswork and years of iteration out of the picture.

 Q: How is Iodine’s Cognitive Emulation approach impacting healthcare?

JM: The biggest advantage of Iodine’s Cognitive Emulation approach is that it is able to leverage the experiences and knowledge of physicians, coders, and CDI Specialists (CDIS) across the country into one product. Iodine’s very broad data set allows it to define new and interesting features that are helpful for predictions, which you would not be able to do if you were just coming up with rules on your own. In this way, Iodine helps hospitals work with the resources they have to capture more opportunities for documentation improvement. With Iodine’s vast dataset, experience, and clinical expertise, we are able to build products that drive revenue integrity and labor optimization. So many functions in a hospital rely on guesswork or require people to spend time on unproductive work. The sky’s the limit for what Iodine can do for health systems with our machine learning models.

LE: There are a lot of patients in the hospital at any given time, and their medical records are each constantly being updated with new data. The job of a CDIS is to survey all of the patients all of the time. With an infinite number of CDI Specialists you could review every case every day, but this is unrealistic without the help of technology. Iodine does the job of filtering and prioritizing, emulating what the CDIS would have been doing, but constantly and for every case, so that CDI teams spend their time on the cases that most require human review. This allows them to do more productive work, rather than searching through all of the cases to find the ones to isolate for review. If you think of CDI work as searching for needles in haystacks, queries are the needles and the hay is all of the cases that do not require a review. Iodine removes most of the hay so the needles can easily be identified.

Machine Learning versus Natural Language Processing: What is the Difference?

Artificial intelligence is utilized for many use cases across the healthcare industry. However, just as all humans have different cognitive abilities, each type of artificial intelligence is distinct, and some applications are more advanced than others.

Artificial intelligence (AI) is a broad term referring to the field of technology that teaches machines to think and learn in order to perform tasks and solve problems like people. 

There are many types of artificial intelligence, but Iodine focuses on two: 

  1. Natural Language Processing, commonly referred to as NLP, interprets raw, arbitrary written text and transforms it into something a computer can understand.
  2. Machine Learning, a form of applied statistics, solves problems based on large amounts of data by connecting the dots between many inputs without any human intervention. It answers questions similarly to how humans do, but automatically and on a much larger scale. 

What is the difference between the two? NLP interprets written language, whereas Machine Learning makes predictions based on patterns learned from experience. 

Iodine leverages both Machine Learning and NLP to power its CognitiveML™ Engine. These AI technologies work together to analyze, interpret, and understand the information within a patient’s medical record. Specifically, Iodine uses NLP to identify mentions of symptoms, diseases, procedures, medications, anatomical parts, and other key information. These mentions augment discrete clinical evidence (orders, results, medications, and demographic information), and together these inputs are evaluated by Iodine’s machine learning models. Machine learning models are mathematical representations of real-world processes that are trained by analyzing vast amounts of data (billions of data points, in Iodine’s case). Through pattern matching, these machine learning models emulate physician thought processes to determine the likelihood specific conditions exist and whether there are discrepancies between the patient’s clinical state and what has been documented.
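The two stages described above can be sketched in miniature. This is a toy illustration only, not Iodine’s actual models: the vocabulary, weights, and function names are all invented, real NLP goes far beyond keyword lookup, and a real model learns its weights from data rather than having them hand-assigned.

```python
import re

# Stage 1: NLP-style mention extraction from free text.
# A tiny hand-built vocabulary stands in for a real NLP pipeline.
VOCAB = {"fever": "symptom", "pneumonia": "disease", "vancomycin": "medication"}

def extract_mentions(note: str) -> dict:
    """Find known clinical terms in raw text (stand-in for real NLP)."""
    found = {}
    for token in re.findall(r"[a-z]+", note.lower()):
        if token in VOCAB:
            found[token] = VOCAB[token]
    return found

# Stage 2: a model scores the likelihood a condition is present,
# combining text mentions with discrete evidence (here, one lab value).
def condition_likelihood(mentions: dict, wbc: float) -> float:
    """Toy weighted score; a real model outputs a calibrated probability."""
    score = 0.2 * ("fever" in mentions) + 0.4 * ("pneumonia" in mentions)
    score += 0.3 * (wbc > 12.0) + 0.1 * ("vancomycin" in mentions)
    return score

note = "Pt febrile with fever overnight; CXR suggests pneumonia; started vancomycin."
mentions = extract_mentions(note)
print(condition_likelihood(mentions, wbc=15.2))  # close to 1.0: strong evidence
```

The point of the sketch is the division of labor: NLP turns unstructured text into structured mentions, and the scoring model weighs those mentions alongside discrete clinical evidence.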

Both NLP and Machine Learning are essential components of Iodine’s technology. Like CDI offerings available from other companies, Iodine uses NLP to determine what the documentation says, which can help identify inconsistencies within the documentation and issues with specificity. By itself, however, NLP cannot find many financial and quality accuracy improvement opportunities because:

  • NLP cannot detect when a condition supported by the clinical evidence has not been written in the patient’s chart. 
  • NLP cannot perform clinical validation, the process of identifying documented conditions that the clinical evidence does not support, a gap that increases the risk of audit. 

The objective of CDI is to determine whether the written documentation aligns with a patient’s clinical reality, and this requires more than NLP. This is where Machine Learning comes in. Machine Learning considers the entire clinical picture and makes connections or predictions based on learnings from other data, including lab results, vital signs, medications, and radiology results.
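One common way a learned model turns many such signals into a single likelihood is logistic regression, sketched below. The features, weights, and bias here are invented for illustration; in a real system they would be learned from training data, and the source does not specify which model family Iodine uses.

```python
import math

def documentation_opportunity_prob(features: dict, weights: dict, bias: float) -> float:
    """Combine weighted clinical signals into a probability in (0, 1)."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the raw score to (0, 1)

# Hypothetical evidence: elevated lactate and vasopressors, but the
# diagnosis is not yet documented (so its negative weight does not fire).
features = {"lactate_high": 1.0, "on_pressors": 1.0, "dx_documented": 0.0}
weights = {"lactate_high": 1.2, "on_pressors": 1.5, "dx_documented": -2.0}
prob = documentation_opportunity_prob(features, weights, bias=-1.0)
print(round(prob, 3))
```

Note how the signals trade off against each other: strong clinical evidence pushes the probability up, while the condition already being documented would pull it back down, which is exactly the discrepancy-between-evidence-and-documentation question CDI asks.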

The result of combining NLP and Machine Learning is intelligent software that evaluates complex medical data much as a physician does when diagnosing and treating patients. Iodine’s Cognitive Emulation approach (via its CognitiveML™ Engine) augments the work of health system professionals with software that can actually “think” and learn from each new data point, just as physicians do. Iodine applies physician-like judgment to the clinical evidence in a patient’s chart and leverages previous learnings to more accurately determine the likelihood a condition exists. Conditions often present in a variety of ways, and by relying on clinical evidence rather than ambiguous thresholds, Iodine is able to identify and learn from these unique instances.