Clinical Quality Improvement: Practitioners See Unlimited Applications for Imaging Informatics

It takes a keen eye and astute visual memory to be a radiologist. These traits of the profession are unlikely to change any time soon. Efforts to exploit digital image data, however, are on the cusp of raising the radiologist’s powers of perception through informatics-based tools that could accelerate clinical quality improvement efforts. The movement is just starting to find its feet, but (much like the evolution of PACS development) promises to become pervasive. It’s just a matter of time.

Informatics tools offer the tantalizing potential to bring greater certainty and efficiency to the interpretation process and to quantify the value of the service. The area of quantitative imaging is a particularly rich source of progress.

Jeffrey P. Leal, technical director of the Image Response Assessment Team (IRAT) Laboratory at Johns Hopkins School of Medicine in Baltimore, describes the goal of informatics in clinical quality improvement as a three-pronged initiative: extracting data, structuring data, and mining data using automated and semi-automated tools. With co-director Richard L. Wahl, MD (who is about to take the reins of the Mallinckrodt Institute of Radiology at Washington University School of Medicine in St. Louis), Leal and the IRAT laboratory team have developed a software package to automate the analysis of Positron Emission Tomography Response Criteria in Solid Tumors (PERCIST) 1.0.

PERCIST 1.0 is the functional imaging complement to anatomic imaging’s Response Evaluation Criteria in Solid Tumors (RECIST), a set of guidelines used to assess changes in tumor size after treatment with CT or MRI. PERCIST is based on the premise that quantitative non-anatomic imaging can serve as a biomarker of cancer response. FDG-PET is one of oncologic imaging’s most powerful biomarkers, and changes in FDG uptake during treatment can indicate whether a tumor is responding and whether a cancer treatment is effective.

FDG uptake in a tumor correlates with the number of viable cancer cells it contains, so the FDG signal reflects viable cell number. The premise of the PERCIST 1.0 criteria, as explained by Wahl, is that cancer response as assessed by PET is a continuous and time-dependent variable. A tumor may be evaluated at any number of times during treatment, and glucose uptake may rise or fall from baseline values. The standardized uptake value (SUV) is also likely to vary for the same tumor and the same treatment at different times, because the SUV reflects glucose metabolism, which changes over time.1
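To make the arithmetic concrete, here is a minimal sketch, in Python, of how a body-weight SUV is computed and how PERCIST-style thresholds might classify response. The 30% and 0.8-unit cutoffs follow the published PERCIST 1.0 criteria, but the function names are hypothetical and the logic omits complete-response and new-lesion rules.

```python
# Illustrative sketch only: simplified SUV arithmetic and PERCIST-style
# response classification. Function names are hypothetical, not from the
# Auto-PERCIST software described below.

def suv(activity_kbq_per_ml: float, injected_dose_mbq: float,
        body_weight_kg: float) -> float:
    """Body-weight-normalized standardized uptake value:
    tissue activity concentration / (injected dose / body weight)."""
    dose_kbq = injected_dose_mbq * 1000.0    # MBq -> kBq
    weight_g = body_weight_kg * 1000.0       # kg -> g
    return activity_kbq_per_ml / (dose_kbq / weight_g)

def percist_response(baseline_sul_peak: float, followup_sul_peak: float) -> str:
    """Classify metabolic response from peak lean-body-mass SUV (SULpeak),
    using the >=30% and >=0.8-unit thresholds of PERCIST 1.0."""
    change = followup_sul_peak - baseline_sul_peak
    pct = 100.0 * change / baseline_sul_peak
    if pct <= -30.0 and change <= -0.8:
        return "partial metabolic response"
    if pct >= 30.0 and change >= 0.8:
        return "progressive metabolic disease"
    return "stable metabolic disease"

print(percist_response(8.0, 4.5))   # -> partial metabolic response
```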

PERCIST was developed as a way to precisely define response criteria and enable standardized comparison of acquired data. Leal explained, “Rigorous standards and quality control are necessary to precisely quantify PET imaging of cancer with FDG. The basis for PERCIST is the recognition that each person has a basal metabolic state with respect to FDG. For this reason, the interpretation of a patient’s disease signal needs to be made in relation to that basal metabolic activity.”

Auto-PERCIST, the free-for-use beta software developed at Johns Hopkins, takes the methodologies of PERCIST 1.0 and automates them. The software uses the liver as a reference region. Leal and colleagues developed an algorithm that locates the liver and an appropriate volume of interest within it in order to make its measurements, which are therefore independent of the individual reader and the variability of individual assessment. The software automates the identification of the liver and the baseline assessment measurements, and then proceeds to identify lesions. At RSNA 2013, Leal and colleagues presented a poster on a pilot study comparing the performance of Auto-PERCIST with that of a radiologist manually measuring PERCIST 1.0 metrics.
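The liver reference can also be illustrated with a short sketch. Under PERCIST 1.0, a lesion counts as measurable only if its peak uptake exceeds 1.5 times the mean liver SUL plus two standard deviations, measured in a volume of interest in the liver. The liver localization that Auto-PERCIST automates is stubbed out here, and all names are hypothetical.

```python
# Hedged sketch of the PERCIST 1.0 measurability check: threshold lesions
# against 1.5 x mean liver SUL + 2 standard deviations. Locating the liver
# VOI (the step Auto-PERCIST automates) is assumed done elsewhere.
import numpy as np

def liver_threshold(liver_voi_sul: np.ndarray) -> float:
    """Minimum SULpeak for a lesion to count as measurable disease."""
    return 1.5 * liver_voi_sul.mean() + 2.0 * liver_voi_sul.std()

def measurable_lesions(lesion_sul_peaks: dict, liver_voi_sul: np.ndarray) -> list:
    """Return IDs of lesions whose SULpeak clears the liver-based threshold."""
    t = liver_threshold(liver_voi_sul)
    return [lesion_id for lesion_id, sul in lesion_sul_peaks.items() if sul >= t]

# Example with simulated SUL samples from a liver volume of interest
rng = np.random.default_rng(0)
liver_voi = rng.normal(loc=2.0, scale=0.3, size=500)  # plausible liver SULs
print(measurable_lesions({"lesion_1": 6.2, "lesion_2": 2.4}, liver_voi))
# -> ['lesion_1']
```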

“This quantitative assessment of measurements is a first step to objectively and efficiently acquire data that radiologists can use,” Leal explains. “Once we have measurements contained in an objective database of 10,000 or more tumors, in addition to the pathology and corollary information, then we can unleash the tools looking for correlations and relationships, and that may tie data together that we may not have thought of ourselves as having a relationship to each other.”

Leal predicts that many future applications will emerge. “It’s a matter of seeing the signal within the noise,” he says. “New algorithms and more objective measurements will help us better identify where the noise is, and how to see through it.”

RSNA and the QIBA effort

Leal credits much of the development of quantitative imaging standards to the work of the Quantitative Imaging Biomarkers Alliance (QIBA). Daniel C. Sullivan, MD, professor and vice chair of research, department of radiology, Duke University Medical Center, Durham, NC, has chaired the alliance since it was founded in 2007. QIBA includes more than 150 entities in its membership database, including academic institutions, vendors, and regulatory agencies.

“Quantitative imaging is the extraction of quantifiable features from medical images for the assessment of disease or normality,” he says. “It includes the development, standardization, and optimization of anatomical, functional, and molecular imaging acquisition protocols, data analyses, display methods, and reporting structures. These features permit the validation of accurately and precisely obtained image-derived metrics with anatomically and physiologically relevant parameters, including treatment response and outcome.”

The RSNA initiated the alliance to help advance the development of quantitative imaging standards. QIBA was tasked with understanding and reducing errors where possible so that quantitative results are accurate and reproducible across patients, time-points, sites, and imaging devices and software from different vendors.

“Variability in radiology is a huge problem that has existed for decades, because interpretations are qualitative,” Sullivan says. “There is a large amount of human variability and a large amount of variability inherent in scanners and in software programs. It’s not possible to reduce variability unless there are standards and data can be reproduced. This is very important for radiology, because if our specialty can’t move toward more reproducible results, the service we provide will eventually become diminished. Consumers of imaging are expecting more standardized results, and the linking of results more clearly to the tests and treatments that are being delivered to patients. Variability leads to poorer outcomes and additional costs.”

QIBA’s work has been to establish a methodology by which multiple stakeholders test various hypotheses about the technical feasibility and medical value of imaging biomarkers. The first two to three years were spent working out issues, concepts, and procedures, followed by the development of QIBA Profiles. The Profiles, adapted in format from the Integrating the Healthcare Enterprise (IHE) model, have been published by the QIBA Metrology Working Group for volumetric CT, FDG PET/CT, and dynamic contrast-material–enhanced MRI. These profiles tell a user what can be accomplished by following the profile and contain the details vendors need to make their products comply with it. Recently, work began on shear-wave elastography ultrasound, in addition to other CT, MR, and PET biomarkers.

The use of quantitative imaging extends well beyond evaluating response to cancer treatment. Sullivan says it will be used to track chronic diseases like COPD, fatty liver disease, or Alzheimer’s, where it is not obvious from clinical assessment alone whether the patient is improving or declining. The hope is that quantitative imaging results will show whether a patient’s condition is stable, worsening, or improving.

While QIBA is creating a foundation for quantitative imaging, Sullivan says the alliance is also assembling large databases of clinical scans that vendors can use to test their algorithms. A newer, complementary approach is to create synthetic scan data, or digital reference objects, in which defined lesions and tumors are placed inside real CT scans. With both, the objective is to provide industry with large standardized databases that enable robust testing. As with IHE profiles, vendors will be able to say that their products meet specific QIBA Profiles. “Everyone will benefit, especially radiologists,” he concludes.
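A toy version of the digital-reference-object idea can be sketched in a few lines: blend a mathematically defined lesion, whose location, size, and contrast are therefore known exactly, into a CT volume. The Gaussian profile below is a simplification for illustration, not QIBA’s actual method.

```python
# Toy digital-reference-object sketch: insert a synthetic, soft-edged
# spherical lesion with known ground truth into a CT volume (in HU).
# The Gaussian profile is an assumption made for illustration.
import numpy as np

def insert_synthetic_lesion(ct_volume: np.ndarray, center: tuple,
                            radius_vox: float, peak_hu: float) -> np.ndarray:
    z, y, x = np.indices(ct_volume.shape)
    cz, cy, cx = center
    dist2 = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2
    # Gaussian falloff: ~peak_hu at the center, fading near the radius
    lesion = peak_hu * np.exp(-dist2 / (2.0 * (radius_vox / 2.0) ** 2))
    return ct_volume + lesion

# Example: a water-equivalent phantom (0 HU) with one known lesion
phantom = np.zeros((100, 100, 100), dtype=np.float32)
with_lesion = insert_synthetic_lesion(phantom, (50, 50, 50), 8.0, 60.0)
print(with_lesion.max())   # ~60 HU, exactly at the known lesion center
```

Because every property of the inserted lesion is known in advance, measurement software run against such a volume can be scored against exact ground truth.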

Making unstructured data useful

Nonetheless, most of the data in radiology’s foremost products—images and reports—are not structured. Daniel L. Rubin, MD, MS, is a participant in the National Cancer Informatics Program’s (NCIP) Annotation and Image Markup (AIM) project, which provides a standardized imaging informatics infrastructure and tools that enable image annotation data—regions of interest, descriptions of abnormalities, measurements, and other related metadata—to be uniformly acquired, searched, and shared. Most of the data in the header of a DICOM image describe the physical nature of the pixel data, not what the image shows or what it contains that may be of clinical relevance—the gap that AIM addresses.

“AIM is all about capturing the results of images—things radiologists or other physicians see in images, infer from them, draw on them to delineate abnormalities, measure from them, or compute on them,” Rubin explains via email. “Essentially, AIM captures the ‘results’ of images, which is a combination of annotations made on images (graphical objects, measurements, calculations, and quantitative image features) and the semantic information (recorded in narrative radiology reports).”

In an article (in press) about the AIM Foundation Model, Rubin and co-authors explain that their objective is to create “a standardized, semantically interoperable information model of image pixel meaning and to provide standardized storage formats such as DICOM, XML, and HL-7 Clinical Document Architecture (CDA).”2
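To give a sense of the kind of structured record AIM standardizes, the sketch below serializes a simple annotation (reader, image reference, finding, region of interest, and a measurement) to XML. The element names are invented for illustration and do not follow the actual AIM Foundation Model schema.

```python
# Hypothetical mini-annotation serialized to XML. This mimics the spirit of
# AIM (machine-readable image "results") but NOT its actual schema.
import xml.etree.ElementTree as ET

def build_annotation_xml(reader: str, sop_instance_uid: str, finding: str,
                         roi_points: list, length_mm: float) -> str:
    root = ET.Element("ImageAnnotation")
    ET.SubElement(root, "Reader").text = reader
    ET.SubElement(root, "ImageReference", sopInstanceUID=sop_instance_uid)
    ET.SubElement(root, "Finding").text = finding
    roi = ET.SubElement(root, "Polyline")          # graphical markup
    for x, y in roi_points:
        ET.SubElement(roi, "Point", x=str(x), y=str(y))
    ET.SubElement(root, "Measurement", name="LongAxis",
                  value=str(length_mm), units="mm")
    return ET.tostring(root, encoding="unicode")

# Placeholder reader ID and UID, invented for the example
print(build_annotation_xml("reader_01", "1.2.840.0.0.1", "spiculated mass",
                           [(10.0, 12.5), (40.0, 44.0)], 31.7))
```

Once results live in a structure like this rather than in free text, they can be queried, aggregated, and mined, which is the point Rubin makes below.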

An assistant professor of radiology at Stanford University School of Medicine, Rubin heads its Laboratory of Imaging Informatics. Researchers at the affiliated Stanford Quantitative Imaging Laboratory (QIL) are key contributors to the AIM project, developing methods to describe the semantic content in images using AIM technology.

His group developed the freely available electronic Physician Annotation Device (ePAD; epad.stanford.edu), a Web-based DICOM imaging workstation for viewing and annotating images in the AIM format that is interoperable with other AIM-compliant tools. Like QIBA’s profiles, AIM is focused on research use cases for now, but software that utilizes it is ultimately expected to enhance the value of diagnostic images to the radiologists who interpret them.

“Since AIM captures these results in standardized, machine-accessible format, AIM enables a broad range of applications that rely on imaging, such as quality assessment, business intelligence, decision support, data mining, and scientific discovery,” he says by email. Rubin believes that AIM has the potential to substantially advance the value of images and the quality of medical imaging practice by enabling a wide range of computerized applications with the ability to meaningfully consume image data.

“This would be very challenging without AIM, since images are complex, unstructured information objects,” Rubin observes. “In addition, AIM enables interoperability of image data and integration of image data with clinical and genomic data through its standardized format and compatibility with other image data standards (DICOM Structured Report and HL-7 Clinical Document Architecture), and ultimately cross-institutional data aggregation and data-driven medical practice.”

Informatics tools in practice

The radiologists of Radiology Associates of Canton see themselves not just as interpreters of diagnostic images but as proactive consultants in population health management with their affiliated hospital, Aultman Hospital in Canton, Ohio. Their consultative perspective focuses on the relevance and value an imaging exam offers to patients with specific medical conditions. Use of clinical informatics software is essential to achieving this, according to Syed Zaidi, MD, practice president.

First, Zaidi emphasizes the need for radiologists to produce an information-filled report free of errors. An error-catching software program that identifies laterality and gender errors by automatically comparing the report content with data from a RIS and PACS has been very helpful. “We discovered that most of us will occasionally make and overlook these types of mistakes, but that one radiologist in our group did this quite often,” Zaidi notes. “After he saw comparative data, he became much more cognizant that he had to improve, and he did. All of us want our physician colleagues to be impressed with and trust the quality of our reports.”
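A stripped-down version of such a cross-check might compare words in the report body against exam metadata pulled from the RIS and PACS, as sketched below. Commercial products use far more robust natural-language processing; the rules and field names here are hypothetical.

```python
# Simplified laterality/gender cross-check between report text and exam
# metadata. Purely illustrative; real error-catching software is far more
# sophisticated than these regex rules.
import re

def find_report_errors(report_text: str, ordered_laterality: str,
                       patient_sex: str) -> list:
    """Return a list of suspected laterality/gender discrepancies."""
    errors = []
    text = report_text.lower()

    # Laterality: the opposite side is mentioned, the ordered side never is
    opposite = {"left": "right", "right": "left"}.get(ordered_laterality.lower())
    if opposite and re.search(rf"\b{opposite}\b", text) \
            and not re.search(rf"\b{ordered_laterality.lower()}\b", text):
        errors.append(f"report says '{opposite}' but exam was ordered as "
                      f"'{ordered_laterality}'")

    # Gender: pronoun counts inconsistent with the registered patient sex
    male = len(re.findall(r"\b(he|his|male)\b", text))
    female = len(re.findall(r"\b(she|her|female)\b", text))
    if patient_sex.upper() == "M" and female > male:
        errors.append("female pronouns in a report for a male patient")
    if patient_sex.upper() == "F" and male > female:
        errors.append("male pronouns in a report for a female patient")
    return errors

print(find_report_errors("The right knee shows a tear. She is improving.",
                         "left", "M"))   # flags both error types
```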

“A quality radiologist is someone whose opinions other physicians trust, especially when recommendations for additional testing are made,” he continues. “The reports we generate are a type of consultation; they should be actionable. So the more data we have to analyze our recommendations, the better we can become when we make them.”

The clinical informatics software Aultman’s radiology department uses also tracks whether recommendations made by the radiologists were acted upon, and if so, reports the yield of the follow-up imaging study. Zaidi is adamant about this: Radiologists need to know if the follow-up study produced useful information or not.

Information of value

Knowing whether a follow-up study produces information of value is of particular importance to Radiology Associates of Canton, which has co-managed the hospital’s radiology department since 2013 and shares responsibility for exam-ordering patterns. The software monitors this as well, and can identify outlier physicians within the total ordering population or within a specified group, such as emergency department physicians.

“When a physician orders a CT angiogram to rule out pulmonary embolism, what is the outcome?” asks Zaidi. “By being able to measure outcomes data, we may be able to intervene with a physician who is identified as over-ordering this exam and who has a much lower positive-predictive-value ratio, show him the data, and make some alternative recommendations.

“While this isn’t directly related to report quality, it very much is related to a radiologist’s role in contributing to the quality of patient care. We can say, ‘Look at this data. Why not consider X, Y, and Z.’ And when clinical informatics software can begin to correlate a patient’s symptoms with the exam ordered and the outcomes, we’ll be able to add even more value and make our consultations even more relevant.”
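The underlying analysis can be sketched simply: compute each physician’s positive yield for an exam type, then flag those who fall far below the group. The record format and the two-standard-deviation cutoff below are assumptions for illustration, not the vendor’s actual method.

```python
# Illustrative outlier detection on ordering yield (e.g., CTA for PE).
# The record format and the 2-sigma cutoff are assumptions.
from collections import defaultdict
from statistics import mean, stdev

def flag_low_yield_orderers(orders: list, min_orders: int = 20) -> list:
    """orders: [{'physician': str, 'positive': bool}, ...] for one exam type."""
    counts = defaultdict(lambda: [0, 0])          # physician -> [positive, total]
    for o in orders:
        counts[o["physician"]][0] += int(o["positive"])
        counts[o["physician"]][1] += 1
    yields = {doc: pos / total for doc, (pos, total) in counts.items()
              if total >= min_orders}             # ignore small samples
    if len(yields) < 2:
        return []
    mu, sigma = mean(yields.values()), stdev(yields.values())
    return [doc for doc, y in yields.items() if y < mu - 2 * sigma]
```

A physician flagged this way is not necessarily ordering inappropriately; as Zaidi suggests, the data are a starting point for a conversation and for alternative recommendations.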

Similarly, by using clinical informatics software to objectively measure the clinical accuracy of the radiologist, drilled down to specific types of exams or health conditions, the radiologist can become more conscious and careful in situations where he may be underperforming compared to his colleagues. The software is also being used to monitor the results of CTA studies after Aultman physicians determined that an escalating number of younger patients are being diagnosed with pulmonary emboli; the analytics may generate data that explain this trend.

All of these processes began early in 2014, when Aultman Hospital installed clinical informatics software in its radiology department. Although the implementation is still in its early phase, Zaidi says its impact has been significant and believes its potential is huge: for patients and their physicians, for radiologists, for the hospital, and for payors. He now leads a partner company formed by Radiology Associates of Canton to define radiology’s role in population health management and to assist providers and payors in achieving higher quality and better value from their medical imaging spend.

When radiologists at the Hospital of the University of Pennsylvania (HUP) in Philadelphia used the same clinical analytics software to identify laterality and gender errors, the error rate dropped by almost 50%, says Woojin Kim, MD, assistant professor of radiology and associate director of imaging informatics. Kim attributes this reduction to enhanced awareness and behavior modification, resulting in consistently more accurate reports. The clinical analytics software also can help departments and practices qualify for CMS’s Physician Quality Reporting System (PQRS) incentives by automatically identifying reports that include reportable measures and flagging those without proper documentation of PQRS measures.
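A minimal sketch of that kind of completeness check appears below, using fluoroscopy-time documentation purely as an example measure; the patterns are illustrative, not actual CMS specifications.

```python
# Toy PQRS-style completeness check: flag reports that should document a
# quality measure but do not. Measure definitions here are invented.
import re

# exam type -> pattern a compliant report should contain (hypothetical)
REQUIRED_DOCUMENTATION = {
    "fluoroscopy": re.compile(
        r"fluoroscopy time[:\s]+\d+(\.\d+)?\s*(min|minutes)", re.IGNORECASE),
}

def missing_measure(exam_type: str, report_text: str) -> bool:
    """True if the exam is reportable but lacks the required documentation."""
    pattern = REQUIRED_DOCUMENTATION.get(exam_type)
    return bool(pattern) and not pattern.search(report_text)

print(missing_measure("fluoroscopy", "Findings: normal swallow study."))
# -> True: reportable exam without a documented fluoroscopy time
```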

Quantifying value

While a number of initiatives are underway to leverage imaging informatics for the improvement of quality in radiology, the University of Pittsburgh Medical Center (UPMC) is addressing the added challenge of quantifying value, reports Rasu Shrestha, MD. Shrestha recently moved from his role as UPMC’s vice president of medical information technology to a new one as chief innovation officer and president of the medical center’s Technology Development Center.

Charged with leveraging innovation to take healthcare to the next level, Shrestha is currently working on an initiative to quantify value in healthcare. “When you talk about value-based imaging, it is about outcomes, it is about patient care, it is about garnering more efficiency so you can lower cost,” he says. “All of these are really important aspects of value.”

A central focus of the project is examining the payor–provider perspectives on value, which Shrestha refers to as the “yin and yang” of value, and specifying the metrics that define value so that it can be quantified. “It’s one thing to talk about value, but it is another to create a framework, an algorithm that first defines what value is and then quantifies it,” he says. “We are looking at all of the clinical-quality metrics, and the business-growth metrics. The opportunities are absolutely tremendous.”

Linking quality to outcomes and outcomes to payment is the ultimate goal, Shrestha says. “That is when things actually become doable, when you incentivize the right types of behavior to happen,” Shrestha says. “Those are the types of things we are working on, the importance of looking at protocoling, or ordering a study, or reading an exam, or even the care collaboration that needs to ensue around sharing the studies and viewing of the results.”

In his mission to take healthcare to the next level at UPMC, Shrestha is enthusiastic about the opportunities for radiology to participate in the care team. “It’s not just physician X orders the study, radiologist Y reads the exam, the report goes out, and the patient gets better,” Shrestha says. “The new reality of accountable care really pushes for a team-based approach to care and how you enable the newer reality of care collaboration, care coordination, utilization management—these are specific opportunities that are really important for the sustainability of imaging as a specialty.”

The opportunities for informatics in clinical quality improvement are plentiful—and the implications are huge.

References

  1. Wahl RL, Jacene H, Kasamon Y, Lodge MA. From RECIST to PERCIST: Evolving considerations for PET response criteria in solid tumors. J Nucl Med. 2009;50(Suppl 1):122S–150S.
  2. Mongkolwat P, Kleper V, Talbot S, Rubin DL. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation Model. J Digit Imaging. 2014 Jun 17 [Epub ahead of print].
Cynthia E. Keen, Contributor
