Wednesday, April 28, 2010

Top Ten Characteristics of a Software Safety Culture

This is a follow-on to my previous post regarding the safety issues of EHRs. But first, one comment and point of emphasis before I dive into the Top Ten details: tightly integrated, monolithic EHR solutions that rely on a single data model are far less prone to safety risk scenarios than loosely integrated, best-of-breed systems stitched together with HL7. That's my personal observation-- no lengthy study required to prove it to me. There are, of course, other drawbacks to purchasing a monolithic EHR from a single vendor, but from a safety standpoint, they are a better choice than buying disparate pieces and trying to glue them together with HL7. The unfortunate downside to HL7 is its fragility-- errors are frequent and easy to introduce-- leading to mismatched patient records, delayed delivery of vital results, and outright loss of those results. I'm not a fan of message-oriented architectures (MOA), and never have been, but especially not in healthcare, where HL7 has made it insidiously easy to hang onto old-school, unreliable MOA designs.
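To illustrate just how thin the margin for error is in an HL7 feed, here is a minimal sketch in Python of the kind of patient-identity check an inbound results interface has to get right on every single message. The message content, field positions, and the MPI lookup are hypothetical illustrations, not any vendor's implementation; the point is simply that one malformed PID segment is all it takes for a result to file to the wrong chart, or not at all.

```python
# Minimal sketch: guarding an inbound HL7 v2 results message against
# patient-identity mismatch before filing it to the EHR.
# The message content and the MPI lookup below are hypothetical.

RAW_ORU = (
    "MSH|^~\\&|LAB|HOSP|EHR|HOSP|201004280830||ORU^R01|12345|P|2.3\r"
    "PID|1||987654^^^HOSP^MR||DOE^JANE||19750101|F\r"
    "OBR|1||LAB123|CREATININE\r"
    "OBX|1|NM|CREAT||4.2|mg/dL|0.6-1.2|H|||F\r"
)

def segment(message, name):
    """Return the fields of the first segment with the given name."""
    for seg in message.split("\r"):
        fields = seg.split("|")
        if fields and fields[0] == name:
            return fields
    raise ValueError(f"Required segment {name} is missing")

def mrn_from_pid(pid_fields):
    """PID-3 (patient identifier list); take the first component, the MRN."""
    return pid_fields[3].split("^")[0]

def mpi_lookup(mrn):
    """Hypothetical master patient index lookup."""
    mpi = {"987654": {"name": "DOE^JANE", "dob": "19750101"}}
    return mpi.get(mrn)

def safe_to_file(message):
    """Refuse to file unless the identity fields reconcile against the MPI."""
    pid = segment(message, "PID")
    registered = mpi_lookup(mrn_from_pid(pid))
    if registered is None:
        return False  # unknown patient: quarantine the message, don't guess
    return registered["name"] == pid[5] and registered["dob"] == pid[7]

print(safe_to_file(RAW_ORU))  # True only when name, DOB, and MRN all reconcile
```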

Now on to the Top Ten…

My information systems and software development background started in the U.S. Air Force, an environment in which software safety (and security) is an embedded part of the culture. Military weapons control and targeting systems, flight control systems, and intelligence data processing systems are all developed with an inherent sense of safety. That background predisposed me toward an interest in the subject, especially as I watched software and firmware play a larger and larger role in areas of our lives that were once controlled by levers, gears, and links. I have a gloomy sense of self-satisfaction in knowing that I predicted-- and tried on numerous occasions to help prevent-- the software system failures that affect many of our computerized voting systems and now seem also to affect our automotive control systems. I find myself in yet another Groundhog Day moment with this topic and EHRs.

As described by Debra Herrmann in “Software Safety and Reliability”, software is safe if… (I added the points of emphasis):

1. It has features and procedures which ensure that it performs predictably under normal and abnormal conditions

2. The likelihood of an undesirable event occurring in the execution of that software is minimized

3. If an undesirable event does occur, the consequences are controlled and contained

Below are the Top Ten Characteristics of a Safe Software Environment, at the CIO level, based upon my 20-plus years of watching and studying this topic. There are numerous lower-level programming techniques that I might cover later, but these are the top characteristics that a healthcare CIO can and should directly lead. If you were a Software Safety Auditor for ONCHIT, CCHIT, HHS, or the FDA, you could use this checklist as a preliminary assessment of an EHR vendor or of a healthcare organization undergoing an EHR implementation.

1. Does the organization have an internal committee to which software safety concerns and events can be escalated and reported?

2. Is there an Information Systems Safety Event Response process documented and well-known in the culture?

3. Is there a classification and metrics system in use for categorizing and measuring the events and risks associated with software safety? Such as (a rough sketch in code appears after item 10):

Type 1: Catastrophic. Patient life is in grave danger. The probability for humans to recognize and intervene to mitigate this event is very low or non-existent. Intervention is required within seconds to prevent the loss of life or major body function.

Type 2: Severe. Patient health is in immediate danger. The probability for humans to recognize and intervene to mitigate this event is low, but possible. Intervention is required within minutes to prevent serious injury or degradation of patient health that could lead to the loss of life or major body function.

Type 3: Moderate. Patient health is at risk. However, the likelihood that humans will recognize and intervene to mitigate this event is good. Intervention is required within hours or a few days to prevent a moderate degradation in patient health.

Type 4: Minor. Patient health is minimally at risk. The probability for humans to recognize and intervene to mitigate this event is high. Corrective action should occur within days or weeks to avoid any degradation in patient health.

4. Are software applications, functions, objects, and system interfaces classified and tested according to their potential impact on safety risk scenarios? Such as:

a. Data which appears timely and accurate, but is not
b. Data that is not posted—incomplete record
c. Valid data that is accidentally deleted
d. Data posted to the wrong patient record
e. Errors in computerized protocols and decision support tools and reports

5. Do software developers submit their safety-critical software to independent verification and validation (IV&V)?

6. Is there a process of certification for programmers who are authorized to work on safety-critical software?

7. Are fault tree analysis and failure modes and effects analysis (FMEA) embedded elements of the software development and software safety culture?

8. Is scenario-based software testing utilized?

9. Is abnormal-conditions and/or inverse-requirements testing performed?

10. Is there a more risk-averse software change control and configuration management process in place for safety-critical software?
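To make items 3 and 4 a bit more concrete, here is a minimal sketch, in Python, of how the Type 1-4 severity classes and the risk-scenario tagging might be encoded so that every application, interface, and decision-support rule carries an explicit severity class, an intervention window, and a required level of test rigor. The class names and time windows mirror the scheme above; the component names and thresholds are hypothetical, and this is an illustration rather than a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import timedelta
from enum import Enum

class Severity(Enum):
    """Type 1-4 software safety event classes from the checklist above."""
    CATASTROPHIC = 1   # life in grave danger; intervention within seconds
    SEVERE = 2         # immediate danger; intervention within minutes
    MODERATE = 3       # health at risk; intervention within hours to days
    MINOR = 4          # minimal risk; correction within days to weeks

# Maximum time allowed to detect and intervene for each class (illustrative).
INTERVENTION_WINDOW = {
    Severity.CATASTROPHIC: timedelta(seconds=30),
    Severity.SEVERE: timedelta(minutes=15),
    Severity.MODERATE: timedelta(days=1),
    Severity.MINOR: timedelta(weeks=2),
}

@dataclass
class SafetyClassification:
    """Safety tag attached to an application, interface, or decision rule (item 4)."""
    component: str      # e.g. "RIS -> EHR results interface"
    risk_scenario: str  # e.g. "data not posted (incomplete record)"
    severity: Severity

    def required_test_rigor(self):
        """Higher-severity components earn IV&V plus inverse-requirements testing."""
        if self.severity in (Severity.CATASTROPHIC, Severity.SEVERE):
            return "IV&V + scenario-based + inverse-requirements testing"
        return "standard regression and scenario-based testing"

# Example: a radiology results interface tagged against scenario (b) above.
rad_results = SafetyClassification(
    component="Radiology Information System -> EHR results interface",
    risk_scenario="data not posted (incomplete record)",
    severity=Severity.SEVERE,
)
print(rad_results.required_test_rigor())
```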

In sharing these Top Ten Characteristics, based upon my experience, I'm attempting to contribute to the solution, not just complain about the problem. If you have any thoughts, suggestions, or personal lessons, I'd enjoy hearing from you.

Tuesday, April 27, 2010

Displaying the Status of the EHR vs. The Patient

A CIO colleague recently shared with me that her EHR team runs a routine that balances the clinical results that are sent out against those that are posted to the EHR. I gather from her message that the results of that reconciliation are used by her team to manage the environment, but not shared with clinicians in some sort of a data quality dashboard.

In the rest of our lives, there are examples everywhere in which we display the status of the system to the operator: automobiles, aircraft cockpits, manufacturing systems, transportation systems, and so on. In healthcare we display the status of the patient via the EHR, but we don't display the status of the EHR to the operator. That is the system status missing from this healthcare process: the status of the data quality (completeness and validity) in the EHR is the unknown that leaves clinicians not knowing what they don't know.

We need some creative ways to display the status of the EHR's data quality to clinicians, such as:

--Average delay in posting results to the EHR
--Likelihood that "this patient" is a duplicate record
--Average daily backlog of HL7 messages per patient in the master patient index
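As a rough sketch of how the first of those indicators could be produced, here is a minimal Python example that reconciles results sent against results posted, then computes the average posting delay and the unposted backlog. The record layout, timestamps, and the 30-minute grace period are hypothetical; the point is only that these numbers can be computed today and surfaced to clinicians rather than kept inside the interface team.

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical reconciliation feed: when a result left the source system
# and when (if ever) it was posted to the EHR.
results = [
    {"id": "LAB-001", "sent": datetime(2010, 4, 27, 8, 0), "posted": datetime(2010, 4, 27, 8, 7)},
    {"id": "LAB-002", "sent": datetime(2010, 4, 27, 8, 5), "posted": datetime(2010, 4, 27, 9, 40)},
    {"id": "RAD-003", "sent": datetime(2010, 4, 27, 9, 0), "posted": None},  # never filed
]

def posting_delay_minutes(feed):
    """Average minutes from 'sent' to 'posted' for results that did post."""
    delays = [
        (r["posted"] - r["sent"]).total_seconds() / 60
        for r in feed if r["posted"] is not None
    ]
    return mean(delays) if delays else 0.0

def unposted_backlog(feed, now, grace=timedelta(minutes=30)):
    """Results sent more than `grace` ago that have not posted to the EHR."""
    return [r["id"] for r in feed
            if r["posted"] is None and now - r["sent"] > grace]

now = datetime(2010, 4, 27, 10, 0)
print(f"Average posting delay: {posting_delay_minutes(results):.1f} minutes")
print(f"Unposted results older than 30 minutes: {unposted_backlog(results, now)}")
```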

Tuesday, April 20, 2010

Patient Safety and Electronic Health Records

Below is a discussion motivated by four sentinel events, each root-cause attributable to electronic health records (EHRs) for which my teams and I were personally responsible.

Remember when safety belts in automobiles first became popular? They were simple lap belts, with no shoulder strap. Did they aid passenger safety? Yes, in some ways… but they also introduced the danger of a whole new range of injuries, such as lumbar separation and paralysis, which hadn't previously existed. It wasn't until we added shoulder straps and the three-point anchor to seat belts that the benefit to passenger safety became clear and unquestionable. We need to pause now and add shoulder straps to EHRs.

Recent discussions at the national level about the patient safety effects of electronic health records are encouraging. The Wall Street Journal, Huffington Post, ONCHIT, Congress, and the FDA are starting to pay attention. But I laughed sadly and out loud when I read that the FDA estimated 44 patient deaths in the last two years "might be" attributable to the use of EHRs. That number is grossly low, and it also ignores all the other undesirable patient safety risk scenarios, besides death, that deserve attention.

Speaking from 13 years in the trenches of EHR use at a variety of healthcare organizations, and applying a dash of country-boy common sense, I don't need a fancy, sophisticated, multi-year analysis to tell me that the impact is orders of magnitude higher-- 100 times higher, at least. There are unintended consequences of EHRs upon patient safety every day in every hospital that uses one.

Here's a brief discussion of just four of the many patient safety events that happened under my watch and were directly attributable to an EHR.

Missed diagnosis of soft tissue sarcoma: In this accident, a soft tissue sarcoma went undiagnosed for at least 3 months, possibly as long as 6 months, because the radiologist's report from the Radiology Information System failed to file properly in the EHR, and the referring physician, not knowing what they didn't know, failed to follow up on the results of the scan. The young mother of three went untreated, the cancer spread to the point that it was untreatable, and she died. The case was settled out of court for several hundred thousand dollars.

Acute Renal Failure from lack of lab results: The patient was administered contrast dye in preparation for an imaging procedure, even though a recent lab test had revealed compromised kidney function. The lab result failed to file properly in the EHR, and the ordering physician didn't know what they didn't know. The patient suffered acute renal failure, and the case was quietly settled out of court for several hundred thousand dollars.

Missed and inappropriate diagnosis of Congestive Heart Failure: The echo report from the cardiologist clearly indicated that the patient was suffering from a very low LVEF, but the report filed to the wrong patient's record. The correct patient went untreated and eventually died. The incorrect patient received aggressive medication treatment for CHF when that treatment was unnecessary. The case was quietly settled out of court. The CHF patient's family was given a seven-figure settlement, with a requirement to keep the incident quiet. The other patient's family was given an apology.

Infant death: A young, single, non-English-speaking mother was in hard labor and her baby was in distress, but the Application Programming Interface (API) that had been developed to capture contraction waveforms and fetal heart data, and to display them for remote viewing in the EHR, failed. This software error also caused the fetal monitor to stop generating local audible alarms. The nurses were left unaware of the patient's condition, and the treating OB/GYN, who was monitoring the patient from his home via the patient's EHR, was left unaware as well. The young and frightened mother continued quietly through her labor, giving birth in the hospital bed, unattended. Her baby was born brain-dead and later died due to umbilical strangulation.

There is reasonable, though not widespread or entirely tangible, evidence that EHRs can reduce some patient safety events, with drug-drug interaction checking being the most obvious example. But if you've seen how the vast majority of drug-drug interaction alerts function in EHRs, you know those alerts are of little value to physicians most of the time and are generally ignored.

Though EHRs might reduce some patient safety events, I've seen first-hand that their use can also introduce new categories and types of events that we cannot ignore in the rush to deploy them across the United States. We have enough experience and evidence with EHRs to pause and do better with their impact on patient safety going forward. It doesn't take a rocket scientist's analysis to make immediate and major improvements to this environment, and we don't need to wait until 2013 for a national database just to begin 'measuring' the problem.

We need to implement mandatory reporting of safety-related EHR events-- no more back-room meetings and secret out-of-court settlements. Would we allow the FAA to operate in that kind of safety secrecy after a plane crash? We also need immediate requirements for auditable best practices in the development and deployment of EHRs, especially rigorous testing (for example, inverse requirements testing, as is common in military and nuclear power systems) and independent verification and validation of software code, including the code behind HL7 interfaces.
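For readers unfamiliar with inverse requirements testing, here is a minimal sketch, using Python's built-in unittest module, of what it looks like applied to a results interface: rather than only proving that a well-formed result posts, the tests assert what must never happen when the input is abnormal. The file_result routine and its behavior are hypothetical stand-ins, not any vendor's API.

```python
import unittest

class FilingError(Exception):
    """Raised when a result cannot be safely filed."""

def file_result(result, mpi):
    """Hypothetical filing routine: refuse to file rather than guess."""
    mrn = result.get("mrn")
    if not mrn or mrn not in mpi:
        raise FilingError(f"Cannot match result to a registered patient: {mrn!r}")
    if result.get("value") is None:
        raise FilingError("Result has no value; refusing to file a blank entry")
    return f"filed:{mrn}"

class InverseRequirementTests(unittest.TestCase):
    """Assert what the interface must NOT do under abnormal conditions."""

    MPI = {"987654"}

    def test_result_for_unknown_patient_is_never_filed(self):
        # Inverse requirement: a result must not silently file to any chart
        # when the patient identifier does not resolve in the MPI.
        with self.assertRaises(FilingError):
            file_result({"mrn": "000000", "value": 4.2}, self.MPI)

    def test_blank_result_is_never_filed_as_normal(self):
        # Inverse requirement: a missing value must never appear as a result.
        with self.assertRaises(FilingError):
            file_result({"mrn": "987654", "value": None}, self.MPI)

    def test_wellformed_result_still_files(self):
        # The ordinary (forward) requirement still has to hold.
        self.assertEqual(file_result({"mrn": "987654", "value": 4.2}, self.MPI),
                         "filed:987654")

if __name__ == "__main__":
    unittest.main()
```

The design bias in the sketch is that the filing routine refuses to file rather than guess; the inverse tests exist to keep that bias from silently eroding as the interface changes.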

And we need to do it today so an electronic system is never responsible for doctors not knowing what they don’t know.

The Elimination of Performance Reviews

I've seen at least a dozen different employee performance evaluation systems in my career-- all well-intended-- but they generally do more harm than good. Many companies are eliminating them altogether. At least one company recently decided to change them to "Employee Appreciation Reviews," with only positive comments allowed.

My logic for doing away with them is this: if you see yourself as a great leader and manager, then you will have only great employees on your teams. If you have less-than-great employees, then you are retaining people you shouldn't-- which means you are neither a good leader nor a good manager, and you shouldn't be in the job.

Therefore, if you have only great employees on your teams, the performance review process is simply a formal way of telling people how much you appreciate them and building their self-esteem. It also provides them with a document they can carry with them for reference if and when they decide to take their career elsewhere. Ninety-five percent of the world will rise to the level at which you treat them and believe in them. Using the performance review process to grade human beings as if they were beef cattle or chicken eggs is demeaning and incredibly old-school. Either eliminate reviews altogether or use them to build self-esteem. If you need to give the boot to an under-performing employee, there are other, better ways of handling and documenting that process-- letters of warning, and so on.

This article, below, from the Wall Street Journal, summarizes a very good book on the topic and one that I endorse 100%.

