This post is adapted from my book LOINC Essentials, your step-by-step guide for getting your local codes mapped to LOINC.
Here I’ll share our best practices for managing terminologies in a health information exchange.
Two big problems
Getting aggregate data from multiple vendor systems across independent healthcare organizations is a holy grail, but there are two big problems related to healthcare data variables that stand in the way.
No matter what kind of healthcare data you’re interested in, if you look across different systems you’ll see different ways of saying the same thing. It’s like the Tower of Babel. Take radiology procedures for example. The local system codes (e.g. XRSC3V, 123456) and names (e.g. XR SC Joints 3V, Sternoclavicular Joints 3 Views) are different for the same exam across sites. Further, sites vary in what kinds of specificity they distinguish in their codes. Some have different codes based on laterality while others don’t. Some name specific views (e.g. PA, lateral) while others only count views (2 views, 3 views, etc). Some specify the contrast agents precisely while others don’t. Further, many systems invent multiple codes for the same test to distinguish among facilities within their network (despite the fact that there are other ways you can know where an exam was performed).
Of course, the way to overcome this variation and achieve the holy grail of integrated data is to map the local codes to standard vocabularies like LOINC, RxNorm, and SNOMED CT. But, all of these differences in source system approaches add up to lots and lots of variation. And the variation becomes more and more of a headache as the number of sites you’re trying to integrate goes up.
You’re never done
It sounds a bit depressing, but it’s true. You have to plan for managing terminology as a journey, not a destination. We studied the rate of change in source system terms within the INPC to help quantify the churn. You see, in the INPC, we would map all of a source system’s local terms before they would go live in the information exchange. But the work didn’t stop there.
Across many INPC institutions, in the 2 years after go-live we saw half as many new local terms appear as we had started with.
For radiology procedure terms specifically, it was a bit worse: in the 2 years after go-live, new radiology terms appeared at 71% of the starting count.
So, why all the change? Lots of factors combine to create this churn. For one, the diagnostic testing performed by the source systems may change over time, necessitating new codes. More commonly though, the source system codes may have changed. Even if the testing services stay the same, the codes can change when institutions buy new equipment, change their lab or radiology information systems, merge or acquire other facilities, etc. Further, the standard terminologies themselves evolve over time, and best practice is to keep up with those changes.
Alright, so given these challenges, here are my Top 6 Tips for managing terminology in an HIE.
1. Not all juice is worth the squeeze
First, a few tests account for most of the result volume. In the INPC, we found that about 80 tests accounted for 80% of the result volume. Second, you’ll likely find that 70% of the tests take 30% of your mapping time (and vice versa).
You’ll also want to remember that not every variable will have a standard code. Within the master files of source systems you’ll often find codes for things that don’t make sense for exchanging with the outside world. They may be billing codes, internal quality control variables, etc. Many of these can be ignored from a mapping perspective.
2. Keep the end in mind
What are your goals? It’s possible that your goals for HIE services could be accomplished with a prioritized set of mappings. (Lab first? Radiology first? Only the top 80%?). But, remember that there are tradeoffs, especially as it relates to secondary uses of the data (research, quality improvement, reporting to public health, etc).
Regardless, you want to be clear on your priorities and goals with HIE so that your terminology approach matches your ambitions.
3. Work smarter, not harder
Because mapping local codes to standards is such a resource-intensive process, we continue to investigate which aspects of it can be automated. While the nirvana of fully automating the mapping process remains out of reach, there is some low-hanging fruit.
For example, within the INPC, we found that 46% of new radiology terms were exact dupes of existing names! These institutions had different codes for the same exam performed at different sites. So, we built a process to bootstrap mappings for similar names from the same institution.
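As a rough illustration, a bootstrap pass for exact-duplicate names might look like the sketch below. The record layout and the LOINC code are made up for the example; the real INPC process is more involved, and proposed mappings should always go to a human reviewer.

```python
# Sketch: propose mappings for new local terms whose names exactly duplicate
# an already-mapped term from the same institution. The record layout and
# LOINC code are hypothetical, not the INPC's actual schema.

def normalize(name: str) -> str:
    """Case- and whitespace-insensitive key for comparing term names."""
    return " ".join(name.lower().split())

def bootstrap_mappings(existing, new_terms):
    """Return proposed mappings for human review -- never auto-commit."""
    index = {(t["institution"], normalize(t["name"])): t["loinc"]
             for t in existing}
    proposals = []
    for t in new_terms:
        key = (t["institution"], normalize(t["name"]))
        if key in index:
            proposals.append({**t, "proposed_loinc": index[key]})
    return proposals

existing = [{"institution": "A", "name": "XR SC Joints 3V", "loinc": "12345-6"}]
new_terms = [{"institution": "A", "name": "xr  sc joints 3v"},     # duplicate name
             {"institution": "A", "name": "CT Head WO Contrast"}]  # genuinely new
print(bootstrap_mappings(existing, new_terms))
```

Only the exact duplicate gets a proposal; the genuinely new term still goes through the normal mapping workflow.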
4. Don’t make assumptions
You can run into all kinds of trouble if you make assumptions when mapping. The names of source system terms are so often ambiguous. It’s tempting to make assumptions or “good bets” about what was really meant, but every time we’ve done that we’ve found exceptions that burned us later. Here are some of the ambiguities you might find:
- No indication of whether quantitative or qualitative
- Lab tests with no specimen identified
- Everybody knows this is always done on serum! (Guess what…we had one institution say that when the lab across the street said “everybody knows this is always done on plasma!”)
- Totally generic test names (e.g. Send out test)
- Miscellaneous serology, PCR, culture
The main point here is that you must know the specifics of what the test or variable you are trying to map actually measures. If you guess, at some point you’ll be wrong. Some ways to get the clarity you need are to ask local experts (i.e. the ones who perform the test) or review the package inserts for those tests. In addition, you’ll definitely want to get the reported units of measure for all quantitative tests. Getting your hands on sample results (for both quantitative and qualitative tests) can also be hugely informative and guide the mapping process. All of these things can be difficult to address if you are a downstream recipient of the data, but without them you’ll be shooting in the dark.
One last tip: don’t “throw away” information in local test names by mapping to more general terms from a standard vocabulary like LOINC. It’s tempting because it seems to make the mapping easier, but it’s a one-way, lossy data transformation. Once you map to a more general term, you throw away that level of detail. Chances are, you’ll want it at some point. That’s why best practice is to map at the level of granularity specified in the local term name. You can always use the attributes of the standard to aggregate more specific codes later.
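To make the aggregation point concrete, here’s a minimal sketch. The attribute table is hand-built for the example; in practice you would pull the Component and System parts from the LOINC release files rather than hard-coding them.

```python
# Sketch: map to the most specific code, then roll up later using the
# standard's own attributes. The attribute table below is a hand-built
# stand-in for what you'd load from the LOINC release files.
from collections import defaultdict

loinc_attrs = {
    "2345-7": {"component": "Glucose", "system": "Ser/Plas"},
    "2339-0": {"component": "Glucose", "system": "Bld"},
    "2342-4": {"component": "Glucose", "system": "CSF"},
}

def aggregate_by(codes, attribute):
    """Group specific codes by a shared attribute (e.g. component)."""
    groups = defaultdict(list)
    for code in codes:
        groups[loinc_attrs[code][attribute]].append(code)
    return dict(groups)

# Three specimen-specific glucose codes roll up under one component.
print(aggregate_by(["2345-7", "2339-0", "2342-4"], "component"))
```

The reverse direction is impossible: if you had mapped everything to a generic “glucose” term up front, you could never recover which specimen each result came from.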
5. Stay up to date
Because no one standard vocabulary covers all types of health data well, in an HIE you’re typically going to be dealing with multiple standards. The typical “big three” are LOINC, RxNorm, and SNOMED CT. In addition, you may also need to use ICD and CPT.
LOINC is published in new releases twice per year (June and December).
Regenstrief and the LOINC Committee have asserted that best practice is to update to the current version within 90 days of its publication.
I’ve written previously about how to stay up to date with LOINC.
SNOMED CT is also published twice yearly as new releases of the International Version / U.S. Edition. The license from SNOMED International (formerly known as IHTSDO) requires prompt updating:
Within one-hundred and eighty (180) days after the Licensor has notified the Licensee of the release of a new version of the International Release, the Licensee must upgrade the version of the International Release in its own systems and in the Licensee Products to that new version.
RxNorm is published as a new full data set each month. The NLM also publishes weekly updates with newly approved drug information. The weekly updates are meant to be used in conjunction with the most recent full monthly release and any previous weekly updates for that same month. I haven’t found any licensing or best practice guidance about update frequency for RxNorm, but the pharmaceutical space moves fast and frequent updates are needed to stay on top of it.
Last, both CPT and ICD-10-CM are updated yearly, so if your exchanged content includes those codes, you’ll also need to keep up with their release schedules.
6. Try to evolve gracefully
One of the challenges in managing 100s (or thousands) of interfaces is gracefully handling “unmapped” test results that come across. Within the INPC, for example, source systems retain control and maintenance of their codes. As a result, the HIE frequently sees brand new codes (or even changes to existing codes) appearing in the message streams without advance warning. Because the codes are novel, there is no established mapping to a vocabulary standard.
To handle this evolving reality, we devised several strategies to deal with incoming unmapped tests in live message streams. First, we let these unmapped test results flow into the repository so that they were immediately available for clinical care. Within the unified virtual patient record, we simply displayed these unmapped tests under “misc test results” in the HIE viewer (called CareWeb in the INPC). Having unmapped tests in the shared repository is not ideal, so you will have to work out a mechanism for updating the mappings after the fact. This could be accomplished through applications or queries that interact with the data through terminology services, rather than, for example, making mapping updates at the raw instance level data in the database.
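One way to picture the terminology-service approach: store only the raw local code on each result row and resolve the standard code at read time, so a late mapping fix takes effect everywhere without rewriting stored instances. Everything here (names, codes, structures) is a hypothetical sketch, not the INPC’s actual implementation.

```python
# Sketch: resolve standard codes through a terminology service at read time.
# A mapping added after go-live applies immediately, with no updates to the
# raw instance-level rows. All names and codes here are hypothetical.

mappings = {}  # (institution, local_code) -> standard code; maintained by mappers

def store_result(repo, institution, local_code, value):
    # Store only the raw local code; no standard code baked into the row.
    repo.append({"institution": institution, "local_code": local_code,
                 "value": value})

def resolve(row):
    code = mappings.get((row["institution"], row["local_code"]))
    return {**row, "standard_code": code}  # None -> show under "misc test results"

repo = []
store_result(repo, "A", "XRSC3V", "Normal")
print(resolve(repo[0])["standard_code"])  # None: unmapped, files under misc
mappings[("A", "XRSC3V")] = "12345-6"     # mapping added after the fact
print(resolve(repo[0])["standard_code"])  # resolved, no row rewrite needed
```

The design choice is that the stored row never changes; only the mapping table does, which keeps the repository an immutable record of what the source actually sent.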
While we’re on the subject of updating mappings, I strongly recommend building a robust logging and history mechanism to track changes to your HIE terminologies and mappings. This is sort of common sense, but one of the best additions you can make is a field for recording the narrative “why” a change in mapping from one code to the other was made. You might not need it now, but trust me, you’ll thank me in 5-10 years.
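A minimal sketch of what such a history record might look like, with the “why” made mandatory. Field names, codes, and user names are illustrative, not a prescribed schema.

```python
# Sketch of a mapping-history record that forces a narrative "why" on every
# change. All field names and codes are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class MappingChange:
    local_code: str
    old_standard_code: Optional[str]  # None for a brand-new mapping
    new_standard_code: str
    changed_by: str
    why: str                          # the narrative rationale
    changed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

history = []

def remap(local_code, old, new, who, why):
    if not why.strip():
        raise ValueError("a narrative 'why' is required for every change")
    history.append(MappingChange(local_code, old, new, who, why))

remap("XRSC3V", "11111-1", "22222-2", "jdoe",
      "Site confirmed the exam is 3 views, not 2; remapped accordingly")
print(history[0].why)
```

Making the record immutable (frozen) and the rationale required is the whole point: five years from now, the one-line “why” is often the only thing standing between you and re-doing the investigation from scratch.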
At the HIE level, if you care about data quality (who doesn’t?) you’ll also want to be vigilant for changes in the source system coding. Some things to monitor (i.e. build automated tools to detect) are changes to the test name (description), changes to the reported units of measure, and new codes that have the same test name. All of these things can be hints that the source system isn’t following good terminology management practices. Over time, these practices can really muck up your data.
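The monitoring described above can be sketched as a simple diff over successive snapshots of a source system’s master file. The record layout is illustrative; real feeds would come from your interface engine or master-file update messages.

```python
# Sketch: compare two snapshots of a source system's master file and flag
# the drift patterns worth investigating. Record layout is hypothetical.

def detect_drift(previous, current):
    """Each snapshot: dict of code -> {'name': str, 'units': str}."""
    alerts = []
    for code, term in current.items():
        old = previous.get(code)
        if old is None:
            alerts.append(("new_code", code))
            continue
        if old["name"] != term["name"]:
            alerts.append(("name_changed", code))
        if old["units"] != term["units"]:
            alerts.append(("units_changed", code))
    # New codes whose name duplicates an existing term's name are suspicious
    old_names = {t["name"] for t in previous.values()}
    for kind, code in list(alerts):
        if kind == "new_code" and current[code]["name"] in old_names:
            alerts.append(("duplicate_name", code))
    return alerts

prev = {"GLU1": {"name": "Glucose", "units": "mg/dL"}}
curr = {"GLU1": {"name": "Glucose", "units": "mmol/L"},  # units changed!
        "GLU2": {"name": "Glucose", "units": "mg/dL"}}   # new code, same name
print(sorted(detect_drift(prev, curr)))
```

Each alert is a hint, not proof: a units change might be a legitimate method change (which needs a new mapping anyway), or it might be a data-quality problem upstream. Either way, you want to know.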