Outcomes Metrics: The Ultimate Quest for Quality in Radiology

The U.S. transition to value-based reimbursement has created a need for metrics that quantify healthcare quality. Along the way, it is becoming more apparent to many radiologists that the definition of quality is often rooted in the patient outcomes achieved. Measuring outcomes in radiology, however, is no easy task.

How outcomes are defined is important, but they should be defined “broadly and as something more than just a binary patient survival curve,” says Kevin McEnery, MD, director of innovation in imaging informatics and professor of radiology at the University of Texas M.D. Anderson Cancer Center in Houston. For example, when it comes to mammography, there are direct relationships between patients, mammograms and the number of cancers found.

“However, in a lot of situations, that binary relationship is going to be a lot more difficult to tease out,” he notes, necessitating a broader definition. Did the patient receive appropriate care? Was the study the patient received optimized according to the clinical presentation? Was the patient’s disease process changed because of it?

“The question in general is how do you create metrics that can appropriately align with outcomes?” McEnery asks. “There are instances when it’s relatively easy to measure something, but whether there is a quid pro quo relationship to an outcome—that causality is often difficult to come up with.”

The hunt is on

Although challenging, finding ways to measure radiological outcomes is not just an option any longer, says Mitchell Schnall, MD, PhD, chair of the department of radiology, University of Pennsylvania, and group co-chair of the ACR Imaging Network (ACRIN). “In thinking about how to value radiology services, it is becoming more important to figure out how to measure what the outcomes are,” he agrees. One of the problems, he points out, is that there are a number of ways to define outcomes.

For example, there is the radiology outcome—the test result that is recorded in a radiologist’s report—that Schnall says is often provided in unstructured text and is not always straightforward. “Then there’s the patient outcome, which is even more challenging,” he says, “because that includes an interpretation of the unstructured text of the communication piece, plus the practice pattern of the person who ordered the test, which may be variable as well. A lot of variables contribute to patient outcomes.”

Radiology outcomes can be difficult to measure simply because of the difficulty inherent in mining data from unstructured reports. Even with respect to outcomes that are structured or coded, Schnall says, their impact on patient outcomes is going to rely on how that information is ultimately used by the clinician.

“If you have a well-developed set of practice guidelines and practitioners follow them, and the radiology outcomes are integrated into those practice guidelines, then you ought to have a measurable response,” he explains. “The problem is that we aren’t there yet. Individual practitioners make decisions, and those decisions may or may not be based on the latest evidence in the literature from the radiology journals or even in their specialty journals.”

With the latter problem in mind, the only way it will be possible to measure outcomes is with well-developed practice guidelines and strict adherence to them, Schnall says. As for the problem of unstructured radiology reports, “that problem is in our house, under our control, and we can solve it.”
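As an illustration of what solving that problem might look like, consider a minimal sketch, with a hypothetical report and an invented code format rather than any institution's actual system: a coded assessment embedded in an otherwise free-text report can be extracted deterministically, while the surrounding narrative cannot.

```python
import re

# Hypothetical report: free-text findings plus one machine-readable
# element, an embedded organ assessment code (format invented here).
report_text = """
CT ABDOMEN/PELVIS WITH CONTRAST
FINDINGS: 1.2 cm hypodense lesion in the right hepatic lobe,
too small to characterize. No renal, pancreatic or adrenal mass.
IMPRESSION: Indeterminate liver lesion. LIVER: C3
"""

# The coded element parses reliably with a simple pattern; mining the
# narrative text above it would require NLP and carry real error rates.
code_pattern = re.compile(r"\b(LIVER|KIDNEY|PANCREAS|ADRENAL):\s*C(\d)\b")

for organ, code in code_pattern.findall(report_text):
    print(f"{organ.title()}: assessment code C{code}")
```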

A paucity of outcomes-based metrics?

In a recent study published in the Journal of the American College of Radiology, Anand Narayan, MD, PhD, of Johns Hopkins Hospital’s department of radiology in Baltimore, Md., and colleagues looked at the issue of quality measurement as it pertains to radiology by reviewing the literature and surveying radiology benefit managers (RBMs).

What Narayan and his colleagues found is that while many imaging quality metrics relate to structure and process, relatively few (27%) relate to outcomes. The lack of outcomes-based measures was even more pronounced when it came to RBMs, with structural metrics making up about 77% of all metrics used by RBMs, while outcomes-based metrics came in at just 17%.

Narayan identifies two issues behind the specialty’s slow movement toward outcomes measures. First, imaging studies drive much of clinical decision-making, which means there is going to be a lot of focus on process- or structure-based metrics, such as whether radiologists are using good-quality scanners, turning around radiology reports as quickly as possible or getting their patients scheduled on a timely basis.

The other issue, he says, is that from a practical standpoint, the measures that get used are often those that are relatively easy to collect, such as those related to false positives or accuracy. “But a lot of the measures don’t necessarily do a great job of capturing our day-to-day performance, and, if we make mistakes, how we learn from those mistakes.”

That said, Narayan and colleagues determined that a number of imaging metrics have been developed that do evaluate outcomes. For example, peer review, which evaluates diagnostic accuracy by capturing discrepant radiology interpretations, has become one of the most frequently cited outcomes metrics.

The challenge in measuring these statistics, they wrote, “lies in developing feedback systems that genuinely elevate individual and group performance. Using nonpunitive individual feedback sessions and educational conferences, discrepant cases can be used as learning tools to discuss strategies for avoiding commonly made mistakes and provide direction for future education and training.”

Extending the model

Well-developed outcomes measures also are working in procedurally driven areas. For example, in breast imaging and interventional radiology, there is a better sense of, and more developed systems for, tracking outcomes, Narayan acknowledges.

“Breast imaging is a much more sub-specialized field and we track things much more routinely, such as false positive rates and the numbers of cancers diagnosed,” he says. “We also have things in place like the MQSA [Mammography Quality Standards Act and Program] that require us to track some of these numbers. We have a much more robust kind of data infrastructure in place [for breast imaging], and it has come a long way in capturing some of these metrics.”

The same holds true for interventional radiology. Narayan points out that clinical trials have examined areas like chemoembolization and survival rates, as well as quality-of-life assessments in the treatment of uterine fibroids.

Arl Van Moore, MD, vice president of Charlotte Radiology in Charlotte, N.C., suggests that musculoskeletal imaging, pediatric imaging and neuroradiology lend themselves to developing good outcomes-based metrics. “Wherever we have some good outcomes data on the insurance company side, that’s where we’ll be able to get some good comparisons and do the math,” Moore says.

“I think there are things that can be learned [from subspecialties like breast imaging] and applied to other fields in radiology in terms of the data infrastructure,” Narayan notes. However, he adds, there are going to be areas of radiology—areas involving particularly rare clinical conditions—where developing these outcomes-based measures is going to be much more problematic.

“As a community, we should start moving towards developing the infrastructure we need to collect outcomes metrics,” he says. “And that means figuring out how to create IT systems and electronic medical records that allow us to capture outcomes better, and not only capture those outcomes but track them to the results of our imaging studies as well.”

McEnery agrees that radiology “has to be aligned more directly with electronic medical records.” The information in the EMR, he says, should be able to demonstrate that there are appropriate reasons why a patient is having a particular study performed, that there is a reasonable expectation the study will move the patient’s care process along and, most importantly, that the study done in the radiology department is optimized for the clinical decision being made. “Today, that’s not necessarily always the case,” he notes.

McEnery says one problem is that radiology often “works in a vacuum.” Radiologists are at the mercy of referring physicians when it comes to providing them with appropriate clinical histories and ultimately ordering the appropriate exam.

“When these systems come into play, radiology will be able to impact the ordering process and, in turn, optimize the study being done for the patient and optimize the report generated based on the clinical decision or information that was provided,” McEnery explains. “Then the clinician will be able to get the most information possible out of the resources expended to create the imaging study that is being reviewed.”

Impact on radiology practices

Moore made the cogent point that as time goes on, any new value-based systems for paying physicians are going to rely on quality metrics—and hence, outcomes. “There will be increasing attention paid to what those metrics are and how we develop them,” Moore believes. “There’s going to be pay-for-performance as far as meeting those metrics, so if we’re not showing up, we’re going to earn less money per unit of work.”

McEnery believes the change to an outcomes-based paradigm is going to result in more customization of imaging, whether from the standpoint of the way studies are obtained and performed or the ways that reports are generated.

“There will be more attention paid to having the report be more customized for a specific indication in order to drive future decisions,” he says. “It will be structured reporting, but it will be structured reporting more focused on the indications and presentation [of a particular patient] to ensure that the radiologist comments on the specific finding or findings that were expected or of concern to the physician ordering the exam. That will be inevitable.”

The information delivered to the patient will be different as well, McEnery says. “The product of the radiologist will change, and how it is presented to the patient will inevitably change to be more specific. So an MRI for someone who has a suspected bone tumor should be different in presentation than for someone who has a meniscal tear.”

Schnall believes that the report is a weak link in radiology’s ability to measure outcomes. “One of the themes for radiologists and radiology is that part of the problem is in our house and the way we report,” he says. “It is under our control, and we can solve it. That’s necessary to do in order for us to be able to enable broader solutions for figuring out and ensuring the value of what we do. Most people in a hospital will say, ‘Those radiology reports are really important, and if you took them away, I really couldn’t practice medicine.’ But, it’s really hard to measure that.”

According to Schnall, the department of radiology at the University of Pennsylvania—like other radiology departments and practices—has gone from a situation in which all of its reports were completely unstructured to one in which radiologists are able to add structured elements in their reports “that are very actionable, very structured and that are measurable.”

At the University of Pennsylvania, the radiology department followed the BI-RADS model in coming up with a way to improve communication among radiologists and clinicians when it comes to abdominal imaging findings. “One of the worries you have when you do cross-sectional imaging is that you’ll find masses all over the place,” Schnall says. “Most of them aren’t that important to know about, but they can create a lot of problems.”

Code Abdomen

The radiologists at the University of Pennsylvania developed a standardized assessment coding system they called Code Abdomen, which was designed to improve the communication and monitoring of suspected cancers in the liver, kidneys, pancreas and adrenal glands. Under the system, a radiologist is required to apply a code to each of the four organs on CT, MRI and ultrasound examinations, regardless of the study indication.

Code Abdomen defines nine codes that comprise five categories—benign, indeterminate, suspicious, known cancer and nondiagnostic. For example, code 1 represents a finding of no mass in the organ in question, while code 2 represents a mass that is benign and needs no further testing. On the other hand, codes 4 and 5 denote suspicious findings that may represent malignancy or that offer clear imaging evidence of it, while code 6 represents a known cancer.
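To make the structure of such a scheme concrete, here is a minimal sketch of how these coded assessments might be modeled in software. Only the codes the article explicitly describes are mapped, and the names and validation rule are illustrative, not Penn’s actual implementation.

```python
from dataclasses import dataclass

# The five Code Abdomen categories and four organs described above.
CATEGORIES = ("benign", "indeterminate", "suspicious",
              "known cancer", "nondiagnostic")
ORGANS = ("liver", "kidneys", "pancreas", "adrenal glands")

# Codes explicitly described in the article; the full scheme
# defines nine codes in total.
CODE_DESCRIPTIONS = {
    1: "no mass in the organ in question",
    2: "benign mass, no further testing needed",
    4: "suspicious finding that may represent malignancy",
    5: "clear imaging evidence of malignancy",
    6: "known cancer",
}

@dataclass
class OrganAssessment:
    organ: str
    code: int

def validate_exam(assessments: list[OrganAssessment]) -> None:
    """Enforce the rule that every CT, MRI or ultrasound exam codes
    all four organs, regardless of the study indication."""
    missing = set(ORGANS) - {a.organ for a in assessments}
    if missing:
        raise ValueError(f"missing assessments for: {sorted(missing)}")
```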

With Code Abdomen “we hope to do several things,” says Schnall. “We hope to prevent downstream activity on masses that don’t need it. For those that are worrisome, we’ve communicated very clearly that they are worrisome. Then, what we can do is create a list and follow up to make sure patients get what they need.”
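The list Schnall describes could be as simple as a filter over those coded assessments. Here is a hypothetical sketch, with patient IDs and data invented for illustration, that treats codes 4 and 5 as the suspicious codes per the descriptions above:

```python
# Each exam maps the four organs to their assessment codes.
exams = {
    "patient-001": {"liver": 4, "kidneys": 1,
                    "pancreas": 1, "adrenal glands": 2},
    "patient-002": {"liver": 1, "kidneys": 1,
                    "pancreas": 5, "adrenal glands": 1},
}

SUSPICIOUS_CODES = {4, 5}  # per the category descriptions above

def followup_worklist(exams):
    """Return (patient, organ, code) for every suspicious finding,
    so follow-up can be tracked rather than left to chance."""
    return [(patient, organ, code)
            for patient, organs in exams.items()
            for organ, code in organs.items()
            if code in SUSPICIOUS_CODES]

for patient, organ, code in followup_worklist(exams):
    print(f"follow up: {patient}, {organ} (code {code})")
```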

The latter point is important, Schnall says, because sometimes a patient can “fall through the cracks.” For example, he refers to a case involving a patient who developed a pulmonary embolism after undergoing an orthopedic procedure. Penn evaluates patients for pulmonary embolism with CT pulmonary angiography (CTPA).

“So the patient gets the exam and there’s a little nodule,” Schnall says, pointing out that the University of Pennsylvania also has a process in place for assessing lung nodules. “These nodules are usually nothing, but a fraction of them could be cancer, and depending on the size you might want to follow them to see whether they are growing.”

But lung nodules also are something that an orthopedist usually doesn’t deal with, and once he sees the radiology report, he may simply assume it is something the primary care doctor will address. “But the primary care doctor may or may not go through the orthopedic procedure records, see that the patient had a CTPA and that a nodule needed to be followed,” Schnall says. “So it gets dropped, and two years later, the patient shows up with lung cancer.”

There’s been variability in the way these things are managed, he adds. “As long as we have a list, we can chase [these patients] down and make sure they get what they need. It’s a matter of having clear communication standards and using them so that we can inject ourselves more into the care paradigm.”

Outcome measures and contracts

McEnery points out that we are already seeing contracts that include rewards and penalties for system-wide performance on population-based metrics. “Payors likely are going to expect clinical decision-support systems at the front end to assure that imaging studies are being performed that are appropriate for the patients under investigation and also assure that the study performed by the institution was optimized for that outcome,” he says. 
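At its simplest, front-end decision support of the kind McEnery describes is a lookup from clinical indication to the studies considered appropriate for it. The sketch below is purely illustrative; real systems draw on published, evidence-based criteria such as the ACR Appropriateness Criteria, and the table here is invented.

```python
# Hypothetical indication-to-study table; real CDS systems use
# evidence-based criteria (e.g., the ACR Appropriateness Criteria).
APPROPRIATE_STUDIES = {
    "suspected pulmonary embolism": {"ct pulmonary angiography"},
    "suspected meniscal tear": {"mri knee without contrast"},
}

def check_order(indication: str, study: str) -> bool:
    """Return True if the ordered study is on the list for the
    stated indication; False would trigger review at order entry."""
    allowed = APPROPRIATE_STUDIES.get(indication.lower(), set())
    return study.lower() in allowed

print(check_order("Suspected pulmonary embolism",
                  "CT pulmonary angiography"))   # True
print(check_order("Suspected meniscal tear",
                  "CT pulmonary angiography"))   # False
```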

McEnery thinks they’ll also want to know whether the radiology report aligns with the clinical question. “Inevitably, this will result in structured reporting,” he believes.

As far as payor contracts are concerned, McEnery anticipates that they will be “at a higher level” than just fee-for-service, pointing out that MD Anderson has entered into a pilot contract with an insurer to bundle payments for patients with head and neck cancers. “The cost shifts to the institution, but what the insurer is looking for is survival and quality of life, so there will be shared risks with these contracts,” he says.

How does imaging fit into that? McEnery speculates that we will see studies investigating the question of how imaging allows the institution to deliver the care level expected out of these contracts.

“It’s an opportunity as well as a potential threat to imaging,” he notes. “I think that the outcome of imaging will probably be intertwined with the outcome the entire health care enterprise delivers back to the payor. So, radiology needs to figure out how to help their institutions optimize patient care.”

Michael Bassett is a contributing writer for Radiology Business Journal.
