Using Enterprise Data to Preempt Harm

Kevin W. McEnery, MD, Diagnostic Radiologist; Director, Innovation in Imaging Informatics, MD Anderson Cancer Center, Houston

If Kevin McEnery, MD, has one message for radiology, it is this: To improve service within the radiology department, you must have access to data from outside the radiology department. His admonition, however, is to make your interventions preemptive: “I do firmly believe that the best way to predict the future is not necessarily to create it, but to change it; you can either run from it or learn from it.”

Professor of radiology and director, innovation in imaging informatics, University of Texas MD Anderson Cancer Center, Houston, McEnery provided an inspiring—and game-changing—talk about his quality assurance work during the 2014 meeting of the Society for Imaging Informatics in Medicine, in a session on “Quality Improvement Harnessing Informatics.” He outlined how to leverage enterprise data (and workflows) to drive change in the radiology department, and demonstrated how the coordination of scheduling activities could improve radiology performance and interactions with clinicians.

RIS/PACS was the first information system with enterprise image distribution capability in most institutions, but it has been supplanted by the EMR, McEnery reminds. “The clinician’s workflow is not going to be what the radiology department wants it to be—it’s going to be the pattern that is driving workflow through the EMR,” he emphasizes. “Order entry will not occur in the RIS or PACS, it will occur in the EMR. Results notification will not be a radiology-based system, it will be an institution-based system.

“That also opens up opportunities,” he adds, “because then you have centralized clinical information that you can in turn leverage for clinical workflow.”

Retrospective analytics is about yesterday, real-time analytics is about today, and predictive analytics is about what will happen in the future. McEnery favors a fourth kind: preemptive analytics coupled with automated interventions that change workflows before they have a chance to harm the patient or alter the patient outcome. He calls it “proactively changing the patient outcome based upon informatics in real-time occurrences.”

Tools, processes, and targets

There’s no magic bullet when it comes to process improvement tools, he says. You will require databases and data, but not directly from your production systems. “You need data that is replicated, because you can create some really nasty queries that can wipe out your RIS system,” he advises.

It is important to keep in mind the timeliness of the replication when considering a specific project. “That’s important, because if you have something that needs to be changed in real time, a database that is replicated overnight cannot be used for that purpose,” he says. “The timeliness of the data will actually impact the things you can improve.”
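The staleness check McEnery describes can be made explicit in code. The sketch below is illustrative only—the function name, data, and thresholds are assumptions, not anything described in the talk—but it shows why a nightly replica can serve a weekly report yet disqualifies itself for a real-time intervention.

```python
from datetime import datetime, timedelta

def replica_fresh_enough(last_sync: datetime, now: datetime,
                         needed_latency: timedelta) -> bool:
    """True if the replica's last sync is recent enough for the project.

    Illustrative sketch: a nightly replica (last sync ~10 hours old here)
    passes for a retrospective report but fails for a real-time alert.
    """
    return now - last_sync <= needed_latency

now = datetime(2014, 10, 1, 9, 0)          # this morning
nightly_sync = datetime(2014, 9, 30, 23, 0)  # replica last refreshed overnight

print(replica_fresh_enough(nightly_sync, now, timedelta(days=7)))     # weekly report: True
print(replica_fresh_enough(nightly_sync, now, timedelta(minutes=5)))  # real-time alert: False
```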

McEnery advises doing an Internet search to find tools for analysis, such as dashboards. “There are a lot of them out there, you do not have to create this from scratch,” he says.

It is very important to have a method to distribute results. “If you are doing this work and you are the only one who knows about these results, that is not going to change anything,” he says.

Whether you are looking within the institution or outside the institution, McEnery advises using the following methodology:

  • Identify the issue you are trying to address.
  • Define a metric to track the issue and set a specific goal to change that process.
  • Try an intervention to improve the issue.
  • Be prepared to do multiple interventions and track each of them.
  • Implement the intervention and automate the process, so that you aren’t saying, “Oh yes, I forgot to run that query,” he advises.
  • Continually monitor the metric.
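The methodology above can be sketched as a small tracking object. Everything here—class name, fields, the sample rates—is hypothetical scaffolding for illustration; the talk does not describe an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class QualityMetric:
    """Track one process-improvement metric against a specific goal.

    Mirrors the methodology: define a metric, set a goal, record the
    value after each intervention, and continually monitor it.
    """
    name: str
    goal: float                              # target value, e.g. 0.0
    history: list = field(default_factory=list)

    def record(self, value: float) -> None:
        """Append today's automated measurement (never run by hand)."""
        self.history.append(value)

    def goal_met(self) -> bool:
        """True once the most recent measurement reaches the goal."""
        return bool(self.history) and self.history[-1] <= self.goal

# Hypothetical example: a split-appointment rate driven down to zero
metric = QualityMetric(name="split_appointment_rate", goal=0.0)
for daily_rate in [0.08, 0.05, 0.01, 0.0]:  # rates after successive interventions
    metric.record(daily_rate)

print(metric.goal_met())  # True
```

The point of automating `record` rather than running a query by hand is exactly McEnery's warning: the metric keeps being monitored even after the problem appears solved.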

Finally, when trying to determine targets for change, just listen, McEnery suggests. “When you are in a meeting and someone says, ‘We have to hire more FTEs to prevent a future patient safety incident,’ that’s something that needs to be changed,” he says.

Case study: Overtime

Another trigger to listen for is overtime. “Schedulers are working ‘overtime’ to update appointments: That was a buzzword for me to say something was wrong at our institution,” he shares, prompting a process-improvement project that involved accessing real-time enterprise data, preemptive analytics, and real-time intervention.

MD Anderson transitioned to CPT-based scheduling, which generated two slots rather than one for, say, a chest CT and an abdomen/pelvis CT ordered together. For patients who required imaging of multiple body parts, schedulers were generating two separate appointments rather than one. Because the institution has multiple outpatient imaging sites, patients were sometimes scheduled at two different locations for those studies on the same day.

Aside from the extra work for schedulers in rescheduling patients, two protocols rather than one had to be generated, because MD Anderson uses an electronic protocoling system. “The inefficiency meant that the coordinators were working overtime to try to clean up the issue,” he says.

Initially, McEnery had difficulty defining a metric because he didn’t know how many split appointments there were. The goal, however, was clear: eliminate them.

“It turned out that the problem was a lot worse than we thought: It was up to 8% of appointments on given schedule days,” McEnery reveals. “When you are expecting 359 patients to arrive and 32.9 are basically no-shows, that’s a problem. Think of an airline running with what it thinks are full planes, and it turns out that 9% of the seats are going to be empty. That’s not a good business case by any means.”

Schedulers were trained on how to properly schedule when the system was implemented, but by October, things were out of hand. McEnery’s team was able to find the issue and track the issue, but it didn’t change the issue. “What changed the issue was an IT solution,” he says. “It took a little bit of time, because we didn’t have real-time access to scheduling data.”

Once they did get access, they were able to identify in real time when a coordinator created a dual appointment. An automated message was sent to the coordinator within three minutes, alerting them to the double appointment and instructing them on the corrective action to take.
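The detection step can be sketched as a simple grouping pass over the schedule. The tuple layout and names below are assumptions for illustration, not MD Anderson's actual schema or system.

```python
from collections import defaultdict

def find_split_appointments(appointments):
    """Group same-day appointments by patient and flag any patient booked
    into more than one slot -- the pattern the automated alert targets.

    `appointments` is a list of (patient_id, date, location) tuples;
    these field names are illustrative assumptions.
    """
    by_patient_day = defaultdict(list)
    for patient_id, date, location in appointments:
        by_patient_day[(patient_id, date)].append(location)
    # A split appointment: one patient, one day, more than one slot
    return {key: locs for key, locs in by_patient_day.items() if len(locs) > 1}

appointments = [
    ("pt1", "2014-10-01", "Main Campus CT"),
    ("pt1", "2014-10-01", "Satellite Site CT"),  # same patient, same day: split
    ("pt2", "2014-10-01", "Main Campus CT"),
]
splits = find_split_appointments(appointments)
for (patient, date), locations in splits.items():
    print(f"Alert scheduler: {patient} has {len(locations)} slots on {date}")
```

In the real workflow this check would run continuously against the replicated scheduling feed, so the alert reaches the coordinator within minutes of the booking.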

“We went from 8% to zero,” McEnery reports. “The key was those messages. Not penalizing, not training everyone, but focusing on those people who were having the issue. We haven’t stopped; this message goes out every day. The process isn’t cured, but it is well under control, and we are running at 0% going forward.”

Locating data sources, metrics, solutions

McEnery asked the audience: What is the most important data source for quality assurance? Is it radiology reports? Study metadata? PACS data? DICOM data? EMR orders? Scheduled-procedure data? The metadata from the voice recognition system? Allergies? Medications?

It was a trick question, it turns out. “The most important data source is not one thing; it’s the data source that is necessary to answer the question you are trying to address,” McEnery emphasizes. “The most important data source is the data source you need to influence the process that needs to be improved.”

Of course, the more data sources you have, the more arrows in your quiver, and the easier it is to change workflows and move forward with process improvement projects. To improve quality and safety, further steps are required.

“You need to implement processes that are intended to provide the desired outcome; you need to measure the outcome; and (in my opinion) the most important thing you need is to implement procedures and processes that prospectively detect anomalies in the desired process and intercede prior to negative consequences,” McEnery says. “If you have a process that results in ten negative consequences per day, every single day, it’s great to measure that, but it’s more important to fix it so that the negative outcome doesn’t happen to the patient.”

Take care with metrics, he cautions, calling out the favored but blunt turnaround time. In McEnery’s institution—a luminary cancer center in which up to 800 CTs must be read in a day prior to appointments—the more important metric is this: Was the report available when the doctor needed to make the clinical decision?

Consequently, workflow is optimized to that end, and the radiologists’ worklist is primed to have the report ready for the patient’s appointment. Not that speedy turnaround time is unimportant: 40% of patients have an appointment less than 24 hours from the time the exam is completed.

Finding opportunities

At MD Anderson, where patients tend to be quite ill, the radiology department recognized that many admitted patients have a pending outpatient exam scheduled. Not only is the exam necessary for ongoing protocols, but the ordering physician may be unaware that the patient has been admitted to the hospital.

The radiology department’s solution was to scan for scheduling conflicts and send messages to clinicians every morning: “Mrs Smith was admitted to the hospital last night, she had an outpatient CT scan scheduled for tomorrow, would you like us to do the CT scan for you, or would you like us to cancel the scan so that we can open the slot for another patient?”
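The morning scan amounts to a cross-reference between the inpatient census and the outpatient imaging schedule. The sketch below is a minimal illustration under assumed data shapes; the actual feeds and message routing at MD Anderson are not described in the article.

```python
def pending_exam_conflicts(census, outpatient_schedule):
    """Cross-reference the inpatient census with the outpatient imaging
    schedule; each hit becomes a morning query to the ordering clinician.

    `census` is a list of admitted patient identifiers;
    `outpatient_schedule` holds (patient, exam, exam_date) tuples.
    Both shapes are illustrative assumptions.
    """
    admitted = set(census)
    messages = []
    for patient, exam, exam_date in outpatient_schedule:
        if patient in admitted:
            messages.append(
                f"{patient} was admitted; an outpatient {exam} is scheduled "
                f"for {exam_date}. Perform it as an inpatient study, or "
                f"cancel and open the slot for another patient?"
            )
    return messages

census = ["Smith"]
schedule = [("Smith", "CT chest", "tomorrow"),
            ("Jones", "MRI brain", "tomorrow")]
for msg in pending_exam_conflicts(census, schedule):
    print(msg)
```

Run each morning against replicated census and scheduling data, a loop like this produces exactly the actionable messages the clinicians received.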

Invariably, the clinician’s response is to cancel the scan, and many times they copy their staff and ask them to schedule the patient’s clinical appointment. “A few minutes later, the whole process is done, and magic happens,” McEnery says.

This has happened 4,780 times since the automated process was put into place, he adds. “I have never gotten an email from a clinician saying, ‘Why are you bothering me with this information?’ The opposite is true. They are very happy. They see me in the hall and they say, ‘Thank you, that really made a difference for the patient.’ If you give clinicians information that is actionable, they are appreciative.”

If resources (data, funds) are needed for a project, measure the costs of the system failure and share them with hospital executives, McEnery suggests. “You will get resources to do more cool things, because this was worth more than $10 million in revenue,” he says. The value of that particular project is in fact greater now if the number of slots opened up for other patients is factored in, as well as the value of continuing the patient’s care in the hospital.

“The clinical benefits are obvious,” McEnery says. “The studies get done, the attending physicians are happy, it gives a high level of clinical satisfaction with the process, and our no-show rate is decreased. It’s a win-win all the way around.”

In conclusion, McEnery says that access to the enterprise data warehouse will provide many opportunities for process improvement. “These types of solutions are going to become more common, because we do have access to EMR data, and we do have access to scheduling information,” he says.

Implementation of an EMR should provide ready access to clinical information, hospital workflow, scheduling, and census data. “Prospective interventions are an aspect of process improvement: solve the problem before it becomes a problem,” he suggests.