The New Quality Mandate: Demonstrating Radiology’s Value

If someone asked you to define quality in radiology, what would you say? The precise definition of quality is certainly nebulous, but it has never mattered too much—until now. Under fee-for-service reimbursement, imaging programs measured quality as performance in patient safety, efficiency, and volume growth. These measures were sufficient for the stakeholders radiology served—patients knew they were safe, referrers received reports quickly, and hospitals reveled in financial growth—and nothing more was expected of us.

Fee-for-service reimbursement, however, is disappearing. In this era of value-based payment, we must actively combat the risk of commoditization (including payors’ steering of patients to the lowest-cost site of care—regardless of quality—and patients’ heightened price sensitivity as high-deductible health plans proliferate). Imaging leaders must demonstrate the quality of their programs if they wish to remain providers of choice. Efficient service and healthy growth, while still necessary in this value-based world, don’t go far enough. Imaging has to help its stakeholders achieve their goals in the accountable-care environment: We have to prove our value.

While value, like quality, can mean many things, radiology’s value proposition now hinges both on its ability to deliver the best possible product and on its willingness to assume new responsibilities. Success in both components will improve care and control cost—the primary goals of accountable care. What does this look like? First, we have to elevate our core competency: the interpretation and report. Next, we must extend our roles to include participation in care coordination and, more broadly, population-health management.

Perfecting Image Quality

The first step in creating radiology’s core product is image acquisition. Any error in acquisition can affect the ability of physicians to interpret the exam, delaying patient care and increasing costs. To ensure image quality, strong technologist performance must first be secured. While many programs formally evaluate technologists using various measurements, it is time to move beyond conventional review grids (Figure 1). UCLA Health (Los Angeles, California), for example, reviews technologists annually on high-risk, low-volume exams; these assessments ensure that technologists retain those rarely required (but critical) competencies.

Figure 1. Research by the Advisory Board Co demonstrates that performance measurement of clinical competency (three bars at top) is widespread, but measures that focus on other quality values (three bars at bottom) are less common in the 51 organizations surveyed.

The person best suited to evaluate image quality is a radiologist; as one study1 shows, however, radiologists and technologists might have very different opinions. When presented with the same 122 CT exams of the head, technologists rated image quality lower than radiologists did. In addition, technologists indicated that they would opt to repeat exams at twice the rate that radiologists would.

To address this discrepancy, radiologists at Magee-Womens Hospital of UPMC (Pittsburgh, Pennsylvania) alert administrators if they notice any exams with high repeat rates or frequent technologist errors. Then, managers review a sample of three to five exams for each technologist and organize an education session for the whole team. The facility’s PACS also allows radiologists to provide feedback to technologists in real time. Repeating exams adds unnecessary cost, time, and radiation dose. Allowing radiologists to provide feedback, both positive and negative, can simultaneously educate technologists on proper image acquisition and alert program leaders if remedial action is necessary.

Improving Interpretation Accuracy

Once the image has been acquired, the next step is the radiologist’s interpretation. Peer review is perhaps the most common way to evaluate radiologists’ accuracy; however, some programs are starting to take a more expansive view of radiologists’ performance by soliciting external feedback. The peer-review process has traditionally taken one of two forms: retrospective workstation-integrated review or prospective double interpretation and review. The lack of standardized methodology across programs and the inherent subjectivity of peer review make it an imperfect measure, however.

One common criticism of peer review is that it can create a competitive environment and strain personal relationships. Providing the opportunity to give positive feedback can shift the culture to one of collaborative, nonpunitive improvement. Consulting Radiologists (Minneapolis, Minnesota) added a “Great Catch” button to the peer-review system so that radiologists can commend each other for exceptional work.

Some organizations are moving beyond peer review by providing tools prior to interpretation, as a noncompetitive tactic for preventing errors. One major teleradiology provider has created structured report templates, both to guide radiologists through the interpretation and to standardize the report framework. In addition, if the study is one of 25 considered to be at highest risk for radiologist error, a risk-assessment box with “Do Not Miss” recommendations alerts the radiologist before he or she begins the interpretation. Providing these tools has allowed this provider to improve accuracy proactively—and without targeting individual radiologists.

Another way to support the interpretation is to provide radiologists with comprehensive patient-history information. Massachusetts General Hospital in Boston created an IT platform called the Queriable Patient Inference Dossier, or QPID, which allows radiologists to request additional patient information (such as chronic-disease history, pathology reports, and even operative results). While there has been debate over whether this information can create bias in the interpretation, the transition to population health necessitates a patient-centered approach; in turn, this calls for access to any relevant patient information.

Ultimately, if you want to confirm that your radiologists are doing a good job, you need to ask your stakeholders.
Spectrum Medical Group (South Portland, Maine) sends out 360-degree reviews (multisource assessments) to various stakeholders who are in regular contact with each radiologist. These subspecialty-specific surveys are sent, every other year, to referrers, peer radiologists, hospital radiology directors, technologists, and others, asking them to evaluate the radiologist on clinical, service, and interpersonal skills. The results are then incorporated into formal radiologist reviews.

Coordinating Patient Care

To demonstrate value in today’s world, traditional interpret-and-report radiology is insufficient. Imaging programs must now look beyond the core product and consider how imaging fits into the larger care continuum. Once you have elevated clinical performance to demonstrate value, it is time to add value.

One method is verifying the referrer’s comprehension of the radiology report. If the referrer does not see the report, patient care might be completely ignored—and the radiologist’s work rendered useless. Gundersen Health System (La Crosse, Wisconsin) recently introduced a comprehensive case-tracking program for follow-up contact with referring physicians. If follow-up care is needed, the radiologist makes a note during dictation. A staff member runs weekly reports from the PACS, compiles a list of cases that require follow-up care, and sends messages to the appropriate referrers through the electronic medical record. These messages don’t just ask referrers to confirm that reports have been received; they require referrers to indicate their plans of action.

Asking for this level of detail does two things. First, it ensures the active engagement of physicians in patient care, making sure that physicians have actually read through, absorbed, and acted on the radiology report. Second, it closes the communication loop between the referrer and the radiology department; the department keeps a record of each physician’s response as documentation that the patient’s care was successfully handed off to him or her.

This follow-up system relies on the coordination of care (from the radiology department to the referrer), but what if the patient comes to the radiology department from the emergency department and has incidental findings that need follow-up care? In this case, the radiology department contacts the patient’s primary-care physician, rather than the referrer from the emergency department. If a patient doesn’t have a primary-care physician to receive results and coordinate follow-up care, the radiology department sends the patient a letter instructing him or her to call and schedule an appointment with an internal-medicine resident. In this way, the radiology department has gone from just producing reports to making sure that they have been properly used. The program looks beyond the walls of the department and assumes responsibility for connecting patients with downstream care.

Managing Population Health

Risk-based payment models—and value-based care, more broadly—require providers to zoom out from individual, episodic patient care and manage the health of populations across the larger care continuum. Radiology is not exempt from this mandate; in fact, as a diagnostic service, its importance is heightened. If radiology programs remain segregated from the systems that they serve, they run the risk of being excluded from reform strategy and might miss the opportunity to share expertise.
Two ways that radiology can add value in population-health management are the promotion of patient activation and the demonstration of the impact of imaging on a care pathway. Patient activation is a measure of engagement in (and general ability to manage) one’s personal health care; improving patient activation has been shown2,3 to increase satisfaction, medication adherence, and quality of life significantly—while lowering patient costs. Patients with low activation, in contrast, might not show up for imaging exams and might not return to their physicians following their exams. Imaging providers must prevent this from happening by actively engaging patients—from scheduling to results delivery. Staff members can call patients ahead of time to answer questions, technologists can confirm the patient’s comprehension of the exam, radiologists can explain results, and front-desk staff can encourage patients to call with questions after the exam (Figure 2). In addition, radiology programs are beginning to add value for risk bearers by demonstrating the impact of imaging on specific care pathways, for both screening and diagnostic exams.

Figure 2. Institutions that require technologist consultations before exams (right), at 78.9%, far outnumber those that offer radiologist consultations (left), at 31.6%, according to an Advisory Board Co survey of 38 facilities.

When Baptist Health South Florida in Miami launched an initiative to improve quality and cost, radiologists took a proactive role by identifying clinical areas where imaging could affect those goals. They found that about 80% of emergency-department patients with chest pain were undergoing coronary angiography; this rate far exceeded the proportion of patients expected to be at high risk for coronary-artery disease. The radiologists created a new multidisciplinary committee to review the evolving research on the benefits of using coronary CT angiography (CCTA) to evaluate the need for invasive treatment. Ultimately, the committee drafted guidelines whereby emergency-department physicians assign chest-pain patients to one of five levels, corresponding with their risk of acute coronary syndrome. Often, level 4 patients (those at low-to-moderate risk) do not need coronary angiography and are thus diverted to CCTA first. The results of the CCTA exam then indicate the appropriate action plan for the patient—catheterization-laboratory procedures, additional imaging, or discharge. After a year, 60% of the emergency department’s chest-pain patients—triple the previous number—were undergoing CCTA, and both costs and lengths of stay decreased significantly for those patients.
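For readers who think in flowcharts, the pathway’s branching can be summarized as a simple decision rule. The sketch below is purely illustrative; the level ordering, CCTA result categories, and function name are assumptions made for the example, not Baptist Health South Florida’s actual guideline.

```python
# Illustrative sketch of the chest-pain triage logic described above.
# Level ordering, result categories, and return values are hypothetical.

def next_step(risk_level, ccta_result=None):
    """Suggest the next step for an emergency-department chest-pain patient."""
    if risk_level != 4:
        # Other levels follow the standard workup for their risk tier
        # (which may include direct referral to coronary angiography).
        return "standard workup for assigned risk level"
    # Level 4 (low-to-moderate risk): divert to CCTA before invasive angiography.
    if ccta_result is None:
        return "order CCTA"
    if ccta_result == "obstructive disease":
        return "catheterization laboratory"
    if ccta_result == "indeterminate":
        return "additional imaging"
    return "discharge"


print(next_step(4))                          # -> order CCTA
print(next_step(4, "obstructive disease"))   # -> catheterization laboratory
print(next_step(4, "normal"))                # -> discharge
```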

Figure 3. Impact of the coronary CT angiography (CCTA) pathway on patient outcomes.

Baptist Health South Florida isn’t the only organization evaluating CCTA to optimize care. Many institutions are establishing similar CCTA pathways, as multiple trials and studies4-6 have now revealed the significant benefits of using CCTA for specific patient indications (Figure 3). A study6 published just months ago found that the standard of care, when compared with the level 4 CCTA diversion, was associated with a 5.5 times greater risk of admission, a 1.6 times longer expected length of emergency-department stay, a fivefold greater likelihood of returning to the emergency department within 30 days for recurrent chest pain, and a sevenfold greater likelihood of invasive coronary angiography.

Research on care pathways for certain populations clearly can add tremendous value for both patients and providers. One central tenet of population health is that homogeneous care does not sufficiently serve heterogeneous populations: Radiologist involvement in patient-centered, pathway-specific research is increasingly important as radiology faces commoditization and as population-health management demands innovation across the continuum of care.

According to Richard Duszak Jr, MD, CMO and senior research fellow of the ACR® Harvey L. Neiman Health Policy Institute, if radiology doesn’t prove its value, it’s just a commodity. In order to survive, radiology must exhibit both clinical excellence and a commitment to improving the entire patient-care continuum. In doing so, it will present itself as an indispensable partner in providing high-quality care.

Ben Lauing is a research analyst with the Imaging Performance Partnership at the Advisory Board Co.
