Interoperability Is Happening — Why Natural Language Processing Is Such an Essential Part of It


Some 80% of clinical data is in an unstructured or semi-structured form within the notes sections of electronic health records. Natural language processing can “read” those notes and extract information without fatigue.

Payers must have access to accurate member information to accurately assign risk, drive predictive population health models and identify care gaps. But too often their efforts are hampered because clinical data is incomplete and fragmented.

As healthcare organizations work to comply with the interoperability requirements of the Office of the National Coordinator for Health Information Technology’s Trusted Exchange Framework and Common Agreement (TEFCA), payers have a role to play in influencing, and benefiting from, better interoperability.

With recent changes to the definition of Electronic Health Information (EHI) having come into force on Oct. 6, 2022, the volume and variety of healthcare data that can now end up at payers’ doorstep is greater than ever. Hospitals, for example, produce 50 petabytes of data per year, comprising clinical notes, lab tests, medical images, sensor readings, genomics and operational and financial data. However, most of it (97%, according to the World Economic Forum) goes unused.

An additional challenge stems from the fact that as much as 80% of the clinical data is in an unstructured or semi-structured form within the notes sections of electronic health records (EHRs). Unstructured data is often messy and inconsistent. As a result, users are unable to easily access and analyze critical information via conventional search methods. This makes it difficult to identify essential data related to members’ health status, including symptoms, disease progression, lifestyle factors, lab test results, and more. All this data is now potentially available to payers, and, with the right strategy, can be used to improve member care, close gaps in care and improve accuracy in processes such as risk adjustment.

TEFCA and the push for more useful data exchange

TEFCA, which grew out of The 21st Century Cures Act, launched in January 2022. The goal is to establish the technical infrastructure model and governing approach for different health information networks and their users to securely share clinical information with each other, all under commonly agreed-to rules. Healthcare providers and payers that provide plans or services for government programs such as Medicare and Medicaid are specifically included in the TEFCA mandates. However, the Cures Act anticipates that the new interoperability standards will be adopted by all payers and providers.

The purpose is to create the next generation of healthcare electronic data interoperability, one that generates more interchange among healthcare organizations, patients and payers, thereby making health data more widely available to improve patient care.

But for payers — and providers — exchanging information that can be easily used and interpreted by the receiver is difficult because so much patient information is buried as unstructured data. Fortunately, new technologies are now available to help users make sense of the vast volumes of unstructured patient data.

How NLP is improving payer efficiencies

For example, more payers today are replacing traditional time-consuming and expensive manual chart searches with artificial intelligence-based tools such as natural language processing (NLP) to enable the rapid analysis of massive amounts of member data.

NLP automates the human ability to understand a natural language, enabling the analysis of unlimited amounts of text-based data without fatigue in a consistent, unbiased manner. Essentially, NLP allows computers to understand the nuanced meaning of clinical language within a given body of text, such as identifying the difference between a patient who is a smoker, a patient who says she quit smoking five years ago and a patient who says she is trying to quit.
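
To make that concrete, here is a minimal, hypothetical sketch in Python of how assertion logic for smoking status might look; the phrases and labels are illustrative only, and production clinical NLP relies on trained models rather than hand-written patterns.

```python
import re

# Hypothetical rule-based sketch of smoking-status assertion logic;
# illustrative only. Order matters: more specific phrases are checked
# first so "quit smoking" is not misread as current smoking.
PATTERNS = [
    (re.compile(r"\btrying to quit\b|\bwants to quit\b", re.I), "QUITTING"),
    (re.compile(r"\bquit smoking\b|\bformer smoker\b", re.I), "FORMER"),
    (re.compile(r"\bnon-?smoker\b|\bdenies smoking\b", re.I), "NEVER"),
    (re.compile(r"\bsmoker\b|\bsmokes\b", re.I), "CURRENT"),
]

def smoking_status(note: str) -> str:
    """Return the first matching smoking-status label, else UNKNOWN."""
    for pattern, label in PATTERNS:
        if pattern.search(note):
            return label
    return "UNKNOWN"

print(smoking_status("Patient states she quit smoking five years ago."))  # FORMER
print(smoking_status("45-year-old smoker presenting with cough."))        # CURRENT
print(smoking_status("She is trying to quit smoking."))                   # QUITTING
```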

With new interoperability requirements, payers will need to manage more data than ever before. To ensure interoperability readiness and meet other payer requirements, NLP has evolved from a “nice-to-have” technology to an essential “must-have” business tool.

Consider the three use cases below that demonstrate how payers are taking advantage of the power of NLP to increase operational efficiencies. Note that in all these cases the member data stays behind the payer’s firewall and the technology works on it there.

Improving risk adjustment

Risk adjustment is an essential process to ensure that patient comorbidities are captured through hierarchical condition categories, or HCC codes, which are then used to determine the appropriate funding available to care for patients based on their specific conditions.
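
As a simplified illustration, the sketch below shows how diagnosis codes surfaced from charts can roll up to HCC categories and a risk component; the mappings and weights are hypothetical stand-ins, not actual CMS-HCC values.

```python
# Hypothetical sketch of risk-score assembly. The ICD-10-to-HCC mappings
# and weights are illustrative stand-ins, not actual CMS-HCC values, and
# real models also factor in demographics and interactions.
ICD_TO_HCC = {
    "E11.9": ("HCC-diabetes", 0.105),
    "I50.9": ("HCC-heart-failure", 0.331),
    "J44.9": ("HCC-copd", 0.335),
}

def hcc_risk_component(extracted_codes: list[str]) -> float:
    """Sum weights for HCC categories supported by NLP-extracted codes."""
    seen: set[str] = set()
    score = 0.0
    for code in extracted_codes:
        mapping = ICD_TO_HCC.get(code)
        # Each HCC category counts once per member, no matter how many
        # qualifying codes the chart review surfaces.
        if mapping and mapping[0] not in seen:
            seen.add(mapping[0])
            score += mapping[1]
    return score

print(f"{hcc_risk_component(['E11.9', 'J44.9', 'E11.9']):.3f}")  # 0.440
```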

A large payer employed NLP to improve the effectiveness of its chart reviews, with the goal of increasing the accuracy of HCC code capture. Chart-review teams have used the NLP tool to streamline workflows and enhance productivity. Specifically, the tool identified features for HCC codes with over 90% accuracy, processing documents between 45 and 100 pages long per patient. The technology enables the company to process millions of documents per hour, a significant improvement over manual chart review.

Driving predictive population health models

A leading payer leveraged NLP to create a model that predicts member risk of developing diabetic foot ulcers, which, if left untreated, can lead to significant, expensive complications, sometimes even amputation. The model scoured the unstructured text in patient records to surface clues that signal risk of diabetic foot ulcers, including body mass index data, lifestyle factors, comments on medications and documented foot diseases.
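
A minimal sketch of how such a model might combine NLP-extracted chart signals, using made-up features and training data purely for illustration; a production model would be trained on real, labeled member histories.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds hypothetical signals surfaced from notes:
# [BMI, neuropathy documented, prior foot disease, insulin use]
X_train = np.array([
    [24.0, 0, 0, 0],
    [31.5, 1, 0, 1],
    [36.2, 1, 1, 1],
    [28.0, 0, 1, 0],
    [40.1, 1, 1, 1],
    [22.3, 0, 0, 0],
])
y_train = np.array([0, 1, 1, 0, 1, 0])  # 1 = developed a foot ulcer

model = LogisticRegression().fit(X_train, y_train)

# Score a member whose chart text yielded these extracted features.
member = np.array([[33.0, 1, 0, 1]])
print(f"Estimated ulcer risk: {model.predict_proba(member)[0, 1]:.2f}")
```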

The model has enabled the payer to improve the health of this patient population, identifying 155 at-risk patients who could be proactively managed. That identification potentially translates to between $1.5 million and $3.5 million in annual savings for the payer from prevented amputations.

Identifying social determinants of health

Principle 6 of TEFCA states that health information networks should adopt a health equity by design approach. A key strategy to ensure health equity is to capture an accurate picture of member social determinants of health (SDOH). SDOH factors include information on housing, transportation and employment. These are often found only in unstructured sources, such as admission, discharge and transfer notes. SDOH information is frequently needed to first identify, then close, care gaps for members.

NLP provides a reliable mechanism to surface this information for payers, enabling them to deliver health equity by design.
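
As a rough illustration, a keyword-style sketch of SDOH surfacing might look like the following; the trigger phrases are hypothetical, and real systems use trained clinical NLP rather than keyword lists.

```python
# Hypothetical SDOH trigger phrases; illustrative only.
SDOH_TERMS = {
    "housing": ["homeless", "unstable housing", "eviction", "shelter"],
    "transportation": ["no transportation", "missed appointment due to ride"],
    "employment": ["unemployed", "lost job", "laid off"],
}

def surface_sdoh(note: str) -> dict[str, list[str]]:
    """Return SDOH domains whose trigger phrases appear in a note."""
    text = note.lower()
    return {
        domain: [t for t in terms if t in text]
        for domain, terms in SDOH_TERMS.items()
        if any(t in text for t in terms)
    }

note = "Pt reports unstable housing since eviction; currently unemployed."
print(surface_sdoh(note))
# {'housing': ['unstable housing', 'eviction'], 'employment': ['unemployed']}
```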

Complete and accurate member data is essential for payers seeking to optimize their approaches to risk adjustment, predictive modeling and care gap closure. However, with the ever-growing volumes of available data, including information locked in records as unstructured text, more payers are looking to AI-enhanced technology like NLP to help develop critical insights that drive increased efficiencies.

Calum Yacoubian, M.D., is director of NLP strategy for IQVIA.
