The New Quality Mandate: Demonstrating Radiology’s Value


If someone asked you to define quality in radiology, what would you say? The precise definition of quality is certainly nebulous, but it has never mattered too much—until now. Under fee-for-service reimbursement, imaging programs measured quality as performance in patient safety, efficiency, and volume growth. These measures were sufficient for the stakeholders radiology served—patients knew they were safe, referrers received reports quickly, and hospitals reveled in financial growth—and nothing more was expected of us.

Fee-for-service reimbursement, however, is disappearing. In this era of value-based payment, we must actively combat the risk of commoditization (including payors’ steering of patients to the lowest-cost site of care—regardless of quality—and patients’ heightened price sensitivity as high-deductible health plans proliferate). Imaging leaders must demonstrate the quality of their programs if they wish to remain providers of choice. Efficient service and healthy growth, while still necessary in this value-based world, don’t go far enough. Imaging has to help its stakeholders achieve their goals in the accountable-care environment: We have to prove our value.

While value, like quality, can mean many things, radiology’s value proposition now hinges both on its ability to deliver the best possible product and on its willingness to assume new responsibilities. Success in both components will improve care and control cost—the primary goals of accountable care. What does this look like? First, we have to elevate our core competency: the interpretation and report. Next, we must extend our roles to include participation in care coordination and, more broadly, population-health management.

Perfecting Image Quality

The first step in creating radiology’s core product is image acquisition. Any error in acquisition can affect physicians’ ability to interpret the exam, delaying patient care and increasing costs.
To ensure image quality, strong technologist performance must first be secured. While many programs formally evaluate technologists using various measurements, it is time to move beyond conventional review grids (Figure 1). UCLA Health (Los Angeles, California), for example, reviews technologists annually on high-risk, low-volume exams; these assessments ensure that technologists retain those rarely required (but critical) competencies.

Figure 1. Research by the Advisory Board Co demonstrates that performance measurement of clinical competency (three bars at top) is widespread, but measures that focus on other quality values (three bars at bottom) are less common in the 51 organizations surveyed.

The person best suited to evaluate image quality is a radiologist; as one study1 shows, however, radiologists and technologists might have very different opinions. When presented with the same 122 CT exams of the head, technologists rated image quality as poorer than radiologists rated it. In addition, technologists indicated that they would opt to repeat exams at twice the rate that radiologists would.

To address this discrepancy, radiologists at Magee-Womens Hospital of UPMC (Pittsburgh, Pennsylvania) alert administrators if they notice any exams with high repeat rates or frequent technologist errors. Then, managers review a sample of three to five exams for each technologist and organize an education session for the whole team. The facility’s PACS also allows radiologists to provide feedback to technologists in real time.

Repeating exams can result in unnecessary costs, time, and radiation dose. Allowing radiologists to provide feedback, both positive and negative, can simultaneously educate technologists on proper image acquisition and alert program leaders, if remedial action is necessary.

Improving Interpretation Accuracy

Once the image has been acquired, the next step is the radiologist’s interpretation. Peer review is perhaps the most common way to evaluate radiologists’ accuracy; however, some programs are starting to take a more expansive view of radiologists’ performance by soliciting external feedback.

The peer-review process has traditionally taken one of two forms: retrospective workstation-integrated review or prospective double interpretation and review. The lack of standardized methodology across programs and the inherent subjectivity of peer review make it an imperfect measure, however. One common criticism of peer review is that it can create a competitive environment and strain personal