Monday, July 5, 2010

Leapfrogging CPOE

Last week, yet another alarming Computerized Physician Order Entry (CPOE) study made headlines. According to Healthcare IT News, The Leapfrog Group, a staunch advocate of CPOE, is now “sounding the alarm on untested CPOE” as its new study “points to jeopardy to patients when using health IT”. Until now we had inconclusive studies pointing to increased, and also decreased, mortality in one hospital or another following CPOE implementation, but never an alarm from a non-profit group that has made it its business to improve quality in hospitals by encouraging CPOE adoption. And this time the study involved 214 hospitals using a special CPOE evaluation tool over a period of a year and a half.

According to the brief Leapfrog report, 52% of medication errors and 32.8% of potentially fatal errors in adult hospitals did not receive appropriate warnings (42.1% and 33.9%, respectively, for pediatrics). A similar study published in the April edition of Health Affairs (subscription required), using the same Leapfrog CPOE evaluation tool but covering only 62 hospitals, provides some more insight into the results. The hospitals in this study run systems from 7 commercial vendors, plus one home-grown system (not identified), and most interestingly, the CPOE vendor had very little to do with a system’s ability to provide appropriate warnings. For basic adverse events, such as drug-to-drug or drug-to-allergy interactions, an average of 61% of events across all systems generated appropriate warnings. For more complex events, such as drug-to-diagnosis or dosing checks, appropriate alerts were generated less than 25% of the time. The results varied significantly among hospitals, including hospitals using the same product. To understand the implications of these studies, we must first understand the Leapfrog CPOE evaluation tool, or “flight simulator” as it is sometimes referred to.

The CPOE “simulator” administers a 6-hour test. It is a web-based tool from which hospitals print out a list of 10-12 test patients with pertinent profiles, i.e. age, gender, problem list, meds and allergy list and possibly test results. The hospital needs to enter these patients into its own EHR system. According to Leapfrog, this is best done by admission folks, lab and radiology resources and maybe a pharmacist. Once the test patients are in the EHR, the hospital logs back into the “simulator” and prints out about 50 medication orders for those test patients, along with instructions and a paper form for recording CPOE alerts. Once the paper artifacts are created, the hospital is supposed to enter all medication orders into the EHR and record any warnings generated by the EHR on the paper form provided by the “simulator”. This step is best done by a physician with experience ordering meds in the EHR, though Leapfrog also suggests that the CMIO would be a good choice for entering orders. Finally, the recorded warnings are reentered into the Leapfrog web interface, and the tool calculates and displays the hospital’s scores.
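To make the mechanics concrete, here is a minimal sketch, in Python, of what that final recording-and-scoring step boils down to. The structures and category names are hypothetical; Leapfrog’s actual tool is a proprietary web application, and this only illustrates the bookkeeping involved.

```python
# Hypothetical sketch of the recording-and-scoring step: for each scripted test
# order, the hospital notes whether the EHR raised an appropriate warning, and
# the tool tallies a score per decision-support category. Names and structures
# here are illustrative, not Leapfrog's actual implementation.
from collections import defaultdict

# Each recorded observation: (order_id, category, warning_raised)
recorded = [
    ("ord-01", "drug-drug",      True),
    ("ord-02", "drug-allergy",   False),
    ("ord-03", "dosing",         False),
    ("ord-04", "drug-diagnosis", True),
    # ... roughly 50 scripted orders in the real test
]

def score_by_category(observations):
    """Fraction of test orders in each category that produced a warning."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for _, category, warned in observations:
        totals[category] += 1
        if warned:
            flagged[category] += 1
    return {c: flagged[c] / totals[c] for c in totals}

if __name__ == "__main__":
    for category, rate in score_by_category(recorded).items():
        print(f"{category}: {rate:.0%} of test orders generated an appropriate warning")
```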

If the process above sounds familiar, it is probably because it is very similar to how CCHIT certifies clinical decision support in electronic prescribing. Preset test patients, followed by application of test scripts, are intended to verify, or in this case assess, which modules of medication decision support are activated and how the severity levels for each are configured. As Leapfrog’s disclaimer correctly states, this tool only tests the implementation, or configuration, of the system. This is a far cry from a flight simulator, where pilot (physician) response is measured against simulated real-life circumstances (a busy ED, rounding, discharge). The only alarm the Leapfrog study is sounding, and it is an important one, is that most hospitals need to turn on more clinical decision support functionality.
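In other words, what the tool measures is largely a configuration exercise. The sketch below is a hypothetical illustration of that point: the category names and severity settings are invented, and real CPOE products expose this configuration very differently, but the idea is the same: a disabled module can never fire, no matter how good the underlying knowledge base is.

```python
# Illustrative only: "turning on more decision support" is, at bottom, a matter
# of which alert categories are enabled and at what severity threshold.
cds_config = {
    "drug-drug":      {"enabled": True,  "min_severity_alerted": "moderate"},
    "drug-allergy":   {"enabled": True,  "min_severity_alerted": "severe"},
    "drug-diagnosis": {"enabled": False, "min_severity_alerted": None},
    "dose-range":     {"enabled": False, "min_severity_alerted": None},
}

def audit_cds(config):
    """List categories that can never fire, regardless of the knowledge base."""
    return [name for name, settings in config.items() if not settings["enabled"]]

print("Disabled decision-support categories:", audit_cds(cds_config))
```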

It is not clear whether doctors will actually heed decision support warnings or just ignore them. Since the medication orders are scripted, we have no way of knowing whether docs without a script, hampered by the user interface, would end up ordering the wrong meds. And since the “simulator” is really not a simulator, we have no way of knowing whether an unfriendly user interface caused the physician to enter the wrong frequency, or dose, or even the wrong medication (Leapfrog has no actual access to the EHR). We have no indication that the system actually recorded the orders as entered, subsequently displayed a correct medication list, or transmitted the correct orders to the pharmacy. We cannot be certain that a decision support module which generates appropriate alerts for the test scripts, such as duplicate therapy, will not generate dozens of superfluous alerts in other cases. We do know that alerts are overridden in up to 96% of cases, so more is not necessarily better. Do the high-scoring hospitals have a higher rate of preventing errors, or do they just have more docs mindlessly dismissing more alerts?
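As a back-of-the-envelope illustration (the alert volumes below are invented; only the 96% override figure comes from the literature cited above), here is what a high override rate does to the arithmetic:

```python
# If clinicians override the vast majority of alerts, raising alert volume
# mostly raises noise. Alert volumes are hypothetical; 96% is the cited
# upper bound for override rates.
override_rate = 0.96
alerts_per_100_orders = (20, 60)  # hypothetical: before and after "turning on more" alerts

for volume in alerts_per_100_orders:
    acted_on = volume * (1 - override_rate)
    print(f"{volume} alerts per 100 orders -> ~{acted_on:.1f} acted on, "
          f"{volume - acted_on:.1f} dismissed")
```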

All in all, the Leapfrog CPOE evaluation tool is a pretty blunt instrument. However, the notion of a flight simulator for EHRs is a good one. A software package that allows users to simulate responses to lifelike presentations, and scores the interaction from beginning to end, accounting for both software performance and user proficiency, would facilitate a huge Leap forward in the quality of HIT. This would be an awesome example of true innovation.
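For what it’s worth, here is a hypothetical sketch of the kind of end-to-end score such a simulator might produce: one component for what the software did, one for what the user did, and one for how long it took. Everything here, including the weights, is invented purely to illustrate the idea.

```python
# Sketch of an end-to-end "EHR flight simulator" score: software performance
# (did appropriate alerts fire?) plus user proficiency (were orders entered
# correctly, and in reasonable time?). All names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    appropriate_alerts_fired: int
    appropriate_alerts_expected: int
    orders_entered_correctly: int
    orders_attempted: int
    minutes_taken: float
    minutes_budgeted: float

def simulator_score(r: ScenarioResult, w_software=0.5, w_user=0.4, w_speed=0.1):
    software = r.appropriate_alerts_fired / max(r.appropriate_alerts_expected, 1)
    user = r.orders_entered_correctly / max(r.orders_attempted, 1)
    speed = min(r.minutes_budgeted / max(r.minutes_taken, 0.1), 1.0)
    return w_software * software + w_user * user + w_speed * speed

# Example: 8 of 10 expected alerts fired, 45 of 50 orders entered correctly,
# scenario took 70 minutes against a 60-minute budget.
print(f"{simulator_score(ScenarioResult(8, 10, 45, 50, 70, 60)):.2f}")
```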
