The CMO–CIO Partnership: Improving clinical quality through operational efficiency, Part II

Central to vRad’s mission are two concepts currently driving healthcare reform: improving clinical quality and achieving operational efficiency. Through the exceptionally tight partnership of vRad Chief Medical Officer Ben Strong, MD, and Chief Information Officer Shannon Werb—explored in Part I of this interview—vRad has achieved a synergistic melding of those concepts that truly is driving clinical innovation, as evidenced by the discussion with Strong and Werb continued below.

Q. Let’s talk about clinical quality.  Where are we in the continuum of leveraging IT, and where do we need to go?

Strong: In the radiologic literature, administrative entities often try, I think, to obfuscate the issue of quality.

Quality in radiology simply comes down to accurate interpretation. It doesn’t need to be pushed and shoved into a mathematical equation with multiple and complex variables. In fact, quality is just not nuanced in the way that some in our specialty would like us to believe. At vRad, quality is a critical yet simple issue: is there a significant abnormal finding on the study, and did the radiologist identify it and report it? That’s it.

We are in a particularly advantaged position with regard to quality at vRad. First, our practice grew from providing preliminary studies, all of which are over-read by a second radiologist the following day. That means a second set of eyes on all of our reports—and direct and immediate feedback on the quality of our interpretations. In addition, we offer a very responsive quality assurance committee and an online portal through which our client sites can submit any perceived discrepancies in interpretation for both preliminary and final reads. Our technology tools allow us to process those efficiently and turn the information into insight and data that we use internally to improve our clinical performance—and that we share with clients to ensure transparency.

Q. Would you describe how that works?

Strong: Highly trained QA committee members work within vRad’s integrated QA module, reviewing cases with discrepancies and ensuring, as a first step, that the radiologists involved review them. Committee members compile the input from multiple sources and ultimately code the case according to a variety of parameters so that, at a glance, we can see exactly what the essence of the discrepancy was and develop a process to address it specifically for the affected client—and in general for our overall practice management.

Based on this initial model, we have compiled that data and leveraged it for a decade. On the clinical side, we use it to direct educational efforts for radiologists, both individually and across the practice. We look at workflow elements and other factors that might be affecting quality: time of day or night, beginning or end of shift, specific modality types.

All of those things are available to us and allow us to customize and tailor our educational and efficiency advancements to address perceived gaps in quality.

Q. Who provides the continuing education—who reaches out to the individual radiologists?  Is the process automated or manual?

Strong: It's highly individualized and human, but it is enabled by technology.  When a client site submits a discrepancy, it's placed in a digital QA queue and an alert is automatically sent to the radiologist who initially interpreted the study.  The system requires that they review the images and submit their impression of the conflicting opinion.  That content is then placed in the QA queue.
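To make that flow concrete, here is a minimal sketch of how such a QA queue might be modeled in code. All names and structures here are illustrative assumptions, not a description of vRad’s actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class QAStatus(Enum):
    AWAITING_RADIOLOGIST = auto()  # the original reader must respond first
    AWAITING_COMMITTEE = auto()    # ready for QA committee review


@dataclass
class QACase:
    study_id: str
    radiologist_id: str
    client_discrepancy: str                   # the client site's description of the issue
    radiologist_impression: str | None = None
    status: QAStatus = QAStatus.AWAITING_RADIOLOGIST


def send_alert(radiologist_id: str, message: str) -> None:
    """Stand-in for the real notification mechanism (hypothetical)."""
    print(f"[alert to {radiologist_id}] {message}")


def submit_discrepancy(queue: list[QACase], study_id: str,
                       radiologist_id: str, description: str) -> QACase:
    """A client site submits a perceived discrepancy: queue it and alert the reader."""
    case = QACase(study_id, radiologist_id, description)
    queue.append(case)
    send_alert(radiologist_id, f"QA review required for study {study_id}")
    return case


def record_impression(case: QACase, impression: str) -> None:
    """The original radiologist reviews the images and responds; only then
    does the case advance to the QA committee."""
    case.radiologist_impression = impression
    case.status = QAStatus.AWAITING_COMMITTEE
```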

Next, a member of the QA committee opens the case. We have QA committee members who have been serving in that role for years, are vigilantly overseen by our director of QA, and have been trained in the coding and analysis of QA data. We have a unique degree of internal validity based on the rigorous training of our QA committee members: they are very experienced, and they are very consistent.

The reviewing physician will open any given case, review the images, review the report submitted by the client site, review the input from the radiologist who initially interpreted the study, and ultimately come to their own conclusion, informed by the facts and by longstanding experience reviewing QA cases. The final QA disposition is coded using a variety of parameters in our database.

If a discrepancy is found, the case receives both a severity grade reflecting the potential effect on patient care and a grade for the visual conspicuity of the miss. Reviewers also grade whether the clinical history should have directed the radiologist’s attention to the finding, and they classify the error by type: Was it a missed finding, a misinterpretation of a finding, or one of a variety of other issues, including typographical errors and other potential technology contributions?
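As a sketch of how such a disposition might be coded, consider the following data structure. The grading scales and enum values are illustrative assumptions based only on the parameters Strong names.

```python
from dataclasses import dataclass
from enum import Enum


class ErrorType(Enum):
    MISSED_FINDING = "missed finding"
    MISINTERPRETATION = "misinterpretation of a finding"
    TYPOGRAPHICAL = "typographical error"
    TECHNOLOGY = "technology contribution"
    OTHER = "other"


@dataclass
class QADisposition:
    """Final coding of a confirmed discrepancy (illustrative fields)."""
    severity: int           # potential effect on patient care; a 1 (minor) to 5 (critical) scale is assumed
    conspicuity: int        # visual conspicuity of the miss, on the same assumed 1-to-5 scale
    history_directed: bool  # should the clinical history have directed attention to the finding?
    error_type: ErrorType
```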

Q. Those are the parameters that you referred to earlier?

Strong: Right. We have those parameters, coded consistently by highly trained QA committee members representing their specialties, going back 10 years. We can review a given radiologist’s performance according to type of error, modality, body part, specialty status, time of day, segment of shift, years of experience, and productivity, to name just a few.
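A hypothetical sketch of what slicing that coded data might look like, assuming the QA database can be exported to a flat file with one row per coded case; the file name and column names here are invented for illustration.

```python
import pandas as pd

# Hypothetical export of the QA database: one row per coded case, with columns
# like error_type, modality, body_part, time_of_day, shift_segment, radiologist_id.
qa = pd.read_csv("qa_cases.csv")

# Discrepancy counts for one radiologist, sliced by error type and modality.
summary = (
    qa[qa["radiologist_id"] == "R123"]
    .groupby(["error_type", "modality"])
    .size()
    .rename("cases")
    .reset_index()
)
print(summary)
```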

Every radiologist is required to review every discrepancy submitted on their work, and every radiologist receives a yearly summary of all QA data with a numeric ranking in the practice.  We review the lowest decile in great depth on a quarterly basis and are able to take corrective action with extreme outliers, including performance improvement plans.
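The quarterly lowest-decile review could be driven by something as simple as the following sketch; the scoring convention (higher is better) is an assumption.

```python
import math


def lowest_decile(yearly_scores: dict[str, float]) -> list[str]:
    """Return the radiologist IDs in the lowest decile of yearly QA performance.

    Assumes higher scores mean better performance; the ranking scheme is illustrative.
    """
    ranked = sorted(yearly_scores, key=yearly_scores.get)  # worst performers first
    cutoff = max(1, math.ceil(len(yearly_scores) / 10))    # bottom 10 percent, at least one
    return ranked[:cutoff]
```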

Q. Shannon, which IT achievement has had the greatest impact on clinical quality in the vRad environment?

Werb: As manual processes become automated, those processes begin to throw off data. We are now using the data we collect in decision-making around not only quality improvement for the radiologists, but also staffing across our client base. The data is there, the system is integrated into our workflow, clients can interact with it, our QA committee can interact with it, and our medical leadership can interact with it.

We are going to begin studying it from a technology-integration perspective. Leveraging structured reporting, radiologist efficiency, and various components of our technology drives a different experience from a quality perspective. So we are beginning to match and aggregate a tremendous amount of information from our repository to help us drive more decisions about the quality program that we have at vRad.

We are even taking it a step further. We plan to integrate natural language processing to help us understand individual radiologists’ reporting styles.

For example, if we look at the aggregate of all of the interpretations we have performed, then learn about each radiologist’s diction, such as the use of hedging phrases, perhaps that would give us information we could use to better train them moving forward.
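A first pass at that kind of analysis need not be elaborate. Here is a minimal sketch that counts hedging phrases across one radiologist’s reports; the phrase list is an invented example, and a production system would presumably use a curated lexicon or a trained NLP model rather than string matching.

```python
import re
from collections import Counter

# Illustrative hedge list only; not a validated lexicon.
HEDGING_PHRASES = [
    "cannot exclude", "cannot be excluded", "may represent",
    "possibly", "suggestive of", "clinical correlation recommended",
]


def hedging_rate(reports: list[str]) -> Counter:
    """Count occurrences of each hedging phrase per 100 reports."""
    counts = Counter()
    for report in reports:
        text = report.lower()
        for phrase in HEDGING_PHRASES:
            counts[phrase] += len(re.findall(re.escape(phrase), text))
    total = max(1, len(reports))
    return Counter({p: round(100 * n / total, 1) for p, n in counts.items()})
```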

We are going to start to use analytics and the data in an aggregated fashion across our practice to continue to drive improvement from a quality perspective.

Q. What is your proudest achievement in driving clinical quality improvement through operational efficiency? 

Werb: So Ben, do you want to take one of the 20 or so projects we have worked on in the last year or two, maybe trauma, as an example?

Strong: Trauma studies are obviously emergent cases—frequently emergency-room-type cases that need rapid turnaround time, accurate diagnosis, and highly efficient communication with our client sites.

When Shannon first came on and we were beginning our partnership, I always said that if I were starting a teleradiology practice today dedicated to emergent studies, the first thing I would do is make absolutely sure that I had a rock-solid process for addressing acute trauma patients.

These studies are complex; they frequently involve CT scans of the entire body—the head, the face, the cervical spine, the chest, the abdomen and the pelvis. We had two competing issues. Our system is designed to bundle the studies on a single patient to one radiologist, and typically that is a desirable state of affairs, since the one radiologist is familiar with the entire clinical history. When there are both a head and a neck CT scan, for instance, it makes sense for that one radiologist to read both.

However, when you are talking about a trauma and the bundling of potentially six or seven different anatomic CT scans to a single radiologist, when turnaround time and accuracy are of the essence, bundling works against you. Essentially, the report on the head CT is not submitted until all of the scans are read, and one radiologist can only work so fast. So while the one radiologist is interpreting all of the images and creating his or her report, the referring physician is waiting—and, more importantly, the patient is waiting for treatment.

Q. How did you improve the process?

Strong: We created a system whereby the site specifically designates, through our online order management system, that a study is a trauma case. For those cases, we have built a patent-pending trauma protocol workflow that separates the studies at a line drawn at the clavicles. Everything above that line—cervical spine, head, face, orbits—goes to one radiologist. Everything below that line—chest, abdomen, pelvis, thoracic spine, and lumbar spine—goes to another radiologist. They are interpreted in parallel.

Both radiologists receive automated alerts that another radiologist is working on studies from the same patient, so that they can collaborate via online chat or telephone if they find any abnormalities that bridge the line we have drawn. Communication with the site is automated as well: at the site’s request, when they designate a study as a trauma, an automated call is generated to that site so that reports are communicated, whether they are positive or negative. That would be the operational aspect.
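In code, the unbundling and cross-alerting steps might look like the following sketch. The region sets come straight from Strong’s description; the function names and the notification stub are assumptions.

```python
# Anatomic regions on either side of the clavicle line, per the workflow described above.
ABOVE_CLAVICLES = {"head", "face", "orbits", "cervical spine"}
BELOW_CLAVICLES = {"chest", "abdomen", "pelvis", "thoracic spine", "lumbar spine"}


def notify(radiologist_id: str, message: str) -> None:
    """Stand-in for the automated alert mechanism (hypothetical)."""
    print(f"[alert to {radiologist_id}] {message}")


def route_trauma_case(studies: list[str], rad_above: str, rad_below: str) -> dict[str, list[str]]:
    """Unbundle a trauma case at the clavicle line into two parallel worklists."""
    worklists = {
        rad_above: [s for s in studies if s in ABOVE_CLAVICLES],
        rad_below: [s for s in studies if s in BELOW_CLAVICLES],
    }
    # Cross-alert both readers so bridging findings can be discussed by chat or phone.
    notify(rad_above, f"{rad_below} is reading the below-clavicle studies on this patient")
    notify(rad_below, f"{rad_above} is reading the above-clavicle studies on this patient")
    return worklists
```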

We do this type of automated and mandatory communication for both trauma and stroke protocol cases. Often, a trauma team or a stroke management team is being assembled onsite to adhere to a specific algorithmic protocol for treating that patient. Many of their decision points hinge on whether a study is positive or negative. They need to know because they are poised at a branch point in their algorithmic management of the patient.

Werb: Just to complement what Ben is saying: he was very passionate about the creation of this service, but I had people coming to me, asking, “How is this different from stroke or stat? We already have those workflows; why are we doing this?” Part of our job was educating them about the importance of unbundling and of speaking a specific language back to the trauma team being assembled on site to deal with the patient.

Having our system integrate into the trauma site’s workflow in a way that they are familiar with was exceptionally important. We had to get our teams around that and over the hump of, “Maybe Ben is just crazy, and he is just creating work for us.”

Strong (laughing): Really?

Werb: Maybe I’m exaggerating a little. What are we, Ben, three or four months out from rolling this out? We have rolled this out to all of the trauma facilities that we cover, which is hundreds across the United States. We have demonstrated that we are now delivering results for the same types of procedures these facilities have been sending us all along, in some cases reducing overall turnaround time by 70 percent and providing much higher quality. Now we have two subspecialized radiologists reading the unbundled studies instead of one. Providing that result back to the client faster has been a huge success for the organization—and for our patients.

Everyone sees it as obvious now, but we had to work together across the organization with all of the teams, not just IT. We had to market this, communicate it to our clients, train them, and change our operational workflow internally. This was a cross-functional rollout of a feature that came from the chief medical officer. It was much broader than just changing the way our radiologists read the study.

Q. Any concluding thoughts?

Werb: We have the CMO project team, along with multiple other dedicated project teams, assembled for 2015, and we are focused on high-need items, considering quality and efficiency and combining that with what’s happening as healthcare policy changes, for example, the transition to ICD-10. This team is focused on projects that will make that transition successful, not just because the government says we should.

We believe that by taking this approach, we are going to provide better care for our patients, we are going to be more efficient in providing that care and then we are going to have the data we need on the back end to continue to drive improvements back into the business. I think that the collaboration between technology and clinical leadership will continue to be extremely important as we move forward.

Strong: I think that is very well put. We’ve really homed in on our 2015 projects by allocating the resources and mapping the specifications. The CMO–CIO partnership was integral to our success in 2014 and will continue to raise the standard for quality in radiology.

Cheryl Proval,

Vice President, Executive Editor, Radiology Business

Cheryl began her career in journalism when Wite-Out was a relatively new technology. During the past 16 years, she has covered radiology and followed developments in healthcare policy. She holds a BA in History from the University of Delaware and likes nothing better than a good story, well told.