Gearing Up for Value-based Payment: The Race to Define Quality in Radiology

Today’s radiology dashboards let you know how your department or practice is running. They chart patient flow, report-turnaround times, critical-results reporting, and dozens of other data points that reflect cost, efficiency, productivity, and (sometimes) effectiveness. One of the primary goals of these dashboards is to support quality improvement, as CMS and other payors begin to link payment to performance—and to define performance using quality measurements.

Yvonne Y. Cheung, MD, MS, holds quality-improvement black-belt certification from the American Society for Quality. Cheung, a musculoskeletal radiologist at Dartmouth-Hitchcock Medical Center (Lebanon, New Hampshire), holds dual associate professorships in radiology and orthopedic surgery at Dartmouth’s Geisel School of Medicine. She also is Dartmouth-Hitchcock Medical Center’s radiology vice chair for quality and safety.

In 1896, Dartmouth College was the site of the first clinical radiography ever performed in the United States. Today, Dartmouth-Hitchcock Medical Center radiology, like many of its sister hospital departments, is creating quality-improvement measures and displaying them on dashboards (along with run charts), Cheung notes.

As far as Cheung is concerned, health care is finally catching up with Motorola’s Six Sigma™ quality-improvement principles: recognize, define, measure, analyze, improve, and control. These were initiated by Motorola in 1985, and manufacturers have since applied them to reduce the production of flawed goods. “This system got adopted by different industries and is now slowly moving to health care,” Cheung says. “There are different levels and abilities in improvement work, based on certain methodologies. We are making changes at all levels.” In fact, there are many radiology quality-improvement initiatives in the works now, coming from CMS, hospitals, health networks, specialty societies, insurers, and imaging providers.
For health care as a whole, some argue, the proliferation of quality measures has gone too far (and too quickly): Quality markers are getting in the way of care. That’s what members of the US Senate Committee on Finance complained of in June 2013, when they called for more attention to be paid to patient outcomes as quality measures. At that time, Sen Max Baucus (D–MT), the committee’s chair, was quoted as saying that Medicare uses 1,100 different measures in its quality-reporting and payment programs. He then asked whether we really need more than a thousand measures.1 The committee’s senators went on to suggest that quality outcomes should be tied to provider payments not in traditional fee-for-service reimbursement models, but in team-based delivery models (such as accountable-care organizations).

Cheung doesn’t think that there are too many quality-improvement initiatives in radiology, but she does agree that, soon enough, quality assessment and reimbursement will go hand in hand. “Quality improvement is something big,” she says, adding that payors are going to ask for it. “This is something that everyone should be doing,” she notes.

ACR’s Top-down View

Judy Burleson, MHSA, the ACR’s senior advisor for quality metrics, says that today’s quest for quality improvement in health care began with a 1999 report by the US Institute of Medicine.2 It found that up to 98,000 preventable deaths per year were occurring in hospitals due to lax standards or unobserved care protocols. “That was the impetus for a number of programs and for federal legislation that supported quality and safety initiatives in Medicare, and then in private payor and purchaser groups,” Burleson says. “That was the wake-up call, and a number of programs were initiated from that.” While the Institute of Medicine’s report called for preventable deaths to be halved in a decade, that hasn’t happened.
A recent survey of 10 North Carolina hospitals found a patient-harm rate of 25.1 per 100 admissions, and the estimated number of preventable hospital deaths annually remains close to 90,000. Some estimates3 suggest that today, there is a one-in-seven chance that a hospital patient will suffer harm. To what degree the old and the new preventable-death/harm measurements are comparable isn’t clear, and neither are the reasons that the preventable-death rate is nearly unchanged. What is clear is that a massive quality-improvement infrastructure is being laid down in health care, with radiology deeply involved; its intent, more than ever, is to deliver effective care.

According to Burleson, quality measures largely center, thus far, on fee-for-service care. The ACR and other specialty organizations have been responsible for many quality initiatives; so has CMS, often using private consultants (such as the Lewin Group). Quality standards also have been developed at single institutions, and some have then spread throughout the industry.

The ACR is involved in quality measures ranging from imaging-appropriateness guidelines to CMS initiatives such as the Physician Quality Reporting System (PQRS), the physician value-based payment modifier, and physician quality and resource-use reports (QRURs). It operates the National Radiology Data Registry (NRDR), which is likely to provide critical guidance in the future. The best-known NRDR registry is probably the Dose Index Registry, which allows participants to compare their CT dose indices with regional and national benchmarks. Participating facilities receive periodic feedback comparing their results (by body part) with overall registry values. The NRDR includes a CT colonography (CTC) registry that documents the effectiveness of CTC and helps to develop benchmarks for CTC process and outcomes measures. The ACR Night Coverage Registry documents the in-house night-reading methods that work best.
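The Dose Index Registry’s benchmark feedback amounts, in essence, to a percentile comparison of a facility’s dose indices against pooled registry values for the same exam type. A minimal sketch of that kind of comparison (the function name, the percentile method, and the sample numbers are illustrative assumptions, not registry specifics):

```python
from bisect import bisect_left

def dose_percentile(facility_median: float, registry_values: list[float]) -> float:
    """Percentile rank of a facility's median dose index against
    registry-wide values for the same exam type and body part."""
    ordered = sorted(registry_values)
    rank = bisect_left(ordered, facility_median)  # count of registry values below ours
    return 100.0 * rank / len(ordered)

# Hypothetical head-CT CTDIvol values (mGy) pooled from other facilities:
registry = [48.0, 52.5, 55.0, 57.5, 60.0, 62.0, 65.0, 70.0, 74.0, 80.0]
print(dose_percentile(56.0, registry))  # prints 30.0: the 30th percentile
```

A facility that finds itself far out in the upper tail of such a comparison would be a natural candidate for protocol review.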
Radiology practices that join the NRDR receive twice-yearly benchmark reports on how they stack up in each registry category. The ACR also operates RADPEER™, which selectively reviews prior reports from its clients to make sure that radiologists are interpreting exams correctly. This broad-based program attempts to reassess radiologists’ skill levels.

Burleson says that there are many elements of quality improvement that make the process difficult. On one side, there is the need to lower expenses in health care. “The cost of health care, at its current rate, is unsustainable, if you look at the Medicare trust fund and the growing volume of seniors. Another impetus for improving quality is getting value and paying for what is necessary,” she says. “Just being efficient may not be the best care, so you must also evaluate the care provided.”

Generating evidence of value and effectiveness is tricky, however. “Accuracy of diagnosis is really important, but it’s pretty difficult to pinpoint and measure, even though there are a number of peer-review programs that look at that,” Burleson says. “You can use peer assessment as a training program to work through peer review, but accuracy is an element that’s difficult to manage, on a national scale.” Appropriate use of imaging is a bedrock quality-improvement concept singled out by the ACR, but it has been deployed mostly piecemeal, by asking referring physicians to look at specific exams for specific conditions. Burleson says, “That’s a big area of development for quality measures and initiatives.”

Carrots Become Sticks

The PQRS has been in use by CMS since 2007. It offers physicians modest incentive payments to report on specified quality measures when they see Medicare patients. For instance, Burleson says, radiologists can report on the radiation dose and/or the time of exposure associated with a fluoroscopic exam.
The ratio of exams for which these data were reported to the total number of exams performed creates a PQRS quality score (which can be used to compare providers). Of the hundreds of PQRS reporting measures, only a few are specific to radiology. Tracking and reporting them, patient by patient, is time consuming; that is one reason, Burleson says, that the PQRS participation rate is only about 16%. Many physicians are willing to forgo the incentive payment to avoid the drudgery of reporting.

CMS, however, is preparing to turn the payment carrots into penalty sticks. Beginning in 2015, instead of receiving a bonus of up to 2%, radiologists will lose up to 2% if they shun the PQRS. The problem, Burleson notes, is that CMS will use 2013 PQRS data as a baseline, so to avoid the penalty, radiologists should begin reporting PQRS data now.

CMS is also implementing a value-based modifier that will rank physician groups on how they perform, Burleson says. The PQRS quality data will be coupled with cost measurements that CMS is developing. A tiered system will be set up (with providers compared on both cost and quality), and high-scoring providers that land in the top tiers will receive higher payments. “The best tier will receive reimbursement above the base level, the middle groups will have no change, and the low-quality and high-cost groups will be penalized,” Burleson says. CMS is beginning the value-based payment-modifier program this year for groups of 100 physicians or more, Burleson says, although the proposed rules4 have been released too recently to analyze. She adds that groups can elect to be tiered. That’s where the QRURs will come in: They will give feedback to physicians and physician groups on where they rank for quality and cost, making it easier for them to decide whether they want to be tiered (in the hope of receiving higher reimbursement).

The ACR is working with the ABR and the American Board of Medical Specialties (ABMS) on maintenance-of-certification (MOC) and practice quality-improvement (PQI) projects.
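Stepping back to the PQRS mechanics described above: the quality score is a simple reporting ratio, and the payment adjustment flips sign in 2015. A hedged sketch of that arithmetic (the 50% threshold for “satisfactory” reporting is an invented placeholder for illustration, not a CMS figure from this article):

```python
def pqrs_reporting_rate(reported_exams: int, eligible_exams: int) -> float:
    """Share of eligible Medicare exams for which the quality data
    (e.g., fluoroscopy dose or exposure time) were actually reported."""
    return reported_exams / eligible_exams if eligible_exams else 0.0

def pqrs_payment_adjustment(reporting_rate: float, year: int,
                            threshold: float = 0.5) -> float:
    """Bonus of up to 2% through 2014; penalty of up to 2% from 2015 on
    for practices that shun reporting. The 'satisfactory' threshold is
    an assumption for illustration."""
    satisfactory = reporting_rate >= threshold
    if year < 2015:
        return 0.02 if satisfactory else 0.0   # carrot
    return 0.0 if satisfactory else -0.02      # stick

# A practice reporting on only 160 of 1,000 eligible exams would face
# the full penalty once the 2015 rules take effect:
rate = pqrs_reporting_rate(160, 1000)
print(rate, pqrs_payment_adjustment(rate, 2015))  # prints 0.16 -0.02
```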
The ABR certifies MOC and PQI projects for radiologists and radiology groups; Burleson says that the ACR has worked to make radiation-dose documentation a PQI reporting measure to satisfy the ABR’s MOC/PQI requirements. In 2010, CMS began making incentive payments to physicians who completed both PQRS and MOC/PQI requirements; radiologists must complete 75 CME hours every three years for MOC. At least 25 of those hours must consist of individual or group self-assessment activities within either standardized (ABR-approved) or self-designed PQI projects.

Milton J. Guiberteau, MD, FACR, is a nuclear-medicine specialist at St Joseph Medical Center (Houston, Texas); professor of radiology at Baylor College of Medicine; and the ABR’s president-elect and assistant executive director for the MOC program. He says, “Self-assessment is complex, but from the very beginning, it has been integral to the continuous professional development goals of MOC. There are gaps in your knowledge—and gaps in care, unless they are recognized. The hardest place to learn error recognition is in actual practice, but the realization that you have a gap is really the only way to address improvement.”

MOC and PQI

The MOC incentive payment will turn into a penalty for noncompliance in 2015, but Guiberteau says that the ABR isn’t sure just how that process will work (or whether CMS will tie the penalty to a value-based payment modifier). For diagnostic radiologists who attained their initial board certification in 2002 or thereafter, MOC is mandatory for retaining certification. Radiologists who first received board certification prior to 2002 were given lifetime certification. They don’t have to complete MOC/PQI, although Guiberteau says that it is in their professional best interests to do so. “We believe that MOC is where physicians demonstrate that they are maintaining and updating a knowledge base throughout their careers,” he says.
“It’s the first step in maintaining a quality practice.” Guiberteau acknowledges that holders of lifetime board certification might not be inspired to undertake the CME steps required for MOC just to receive a small CMS payment incentive. He thinks that they will become more inspired when the incentive becomes a penalty in 2015. In some instances, the younger physicians in a practice (who must perform MOC and want the CMS incentive) could have conflicts with older physicians who don’t want the added obligation. Guiberteau says, “We don’t monitor practice conflicts, but there is a concern, on the part of younger physicians, that they will miss out on incentive pay or will have to pay penalties.” He suspects that in some practices, lifetime certificate holders will be obligated, in the near term, to meet MOC requirements. “The Joint Commission is not requiring it, but it is emphasizing that board eligibility/certification is an important issue,” he says, adding that hospitals are looking at continuous certification as an element in future admitting-privilege decisions. “I don’t think, down the road, that it will surprise anyone if MOC becomes a requirement,” he says.

PQI is part of MOC, with at least a third of CME hours to be spent on it. PQI projects can be associated with almost anything in a practice. “There are now myriad PQI projects that would qualify,” Guiberteau says. One common PQI focus is turnaround time for radiology reports. A practice or department that changes a process to lower turnaround time could document that change to meet the PQI requirement. It also could choose to document steps taken to reduce radiation dose, Guiberteau adds. In any quality initiative, he notes, the key to radiologists’ participation is relevance. “If physicians are not finding the reporting or quality measure relevant, they probably won’t respect it, in the sense that they feel that it’s not really helping their practices,” he says.
“The idea, for the ABMS and others, is to make sure that the measure is not overly complicated and burdensome, but is relevant enough (and meaty enough) to have an impact on performance.”

RECIST

Nonetheless, some quality measures are complicated and burdensome enough to spur resistance—even if they are highly relevant. An example is RECIST (Response Evaluation Criteria in Solid Tumors), the protocol now in place for assessing effectiveness in nearly all tumor-treatment trials. Nancy J. McNulty, MD, is an associate professor of radiology and anatomy at the Geisel School of Medicine. She says that RECIST is only one of the available reporting standards for tumor assessment, but is an important one. “It is essential to accuracy and to ending confusion,” she says. “Our department (and a lot of others) instituted exclusive use of RECIST. We have adopted use of RECIST for all oncology patients, whether or not they are part of a clinical trial.”

McNulty says that RECIST ensures that the same protocol will be used to measure tumors before and after treatment, but measuring tumors can be difficult; the RECIST protocols were recently simplified, but they still require judgment. McNulty says, “There are a lot of judgment calls on indicator lesions in a RECIST table.”

McNulty acknowledges that there has been resistance to RECIST. It’s much faster for radiologists to dispense with RECIST and just compare exams visually. “There are concerns on its impact on time,” she says. “Those are valid concerns.” She adds, however, that only RECIST can give truly comparable assessments of tumor-treatment impact. Mandating RECIST use for all tumor cases at Dartmouth-Hitchcock Medical Center has paid off, too: She says, “Clinical satisfaction from our referrers has been high.”

Cheung (McNulty’s colleague) is working on several ongoing quality initiatives. In one, the department undertook to determine why patients were waiting. One factor, it turned out, was that often, the previous patient had arrived late.
“One of our parking lots was far away from radiology,” Cheung says, “so we used a closer parking lot in the reminder letter. The other thing we did was encourage patients to show up early: If your appointment is at 9 am, be there at 8:50.”

ANCR

Maria Damiano, MBA, RT, is corporate manager of information systems and assistant director for medical-imaging IT at Brigham and Women’s Hospital (Boston, Massachusetts). Early in her career there, Damiano saw a way to save money in coronary angiography. What she did became an industry standard—and an example of how one institution can change the way that care is delivered on a broader scale. “The challenge, at the time, was cost,” Damiano recalls. “The cost of silver had skyrocketed for cine film. We were running 60 frames of film per second; what if we reduced it to 30 frames per second? We ran tests and found that the image quality was the same—and you could cut the radiation dose in half. It was good for everyone.”

Today, film is no longer used for angiograms, but Brigham and Women’s Hospital retains its leadership role in quality improvement. The latest advance that Brigham and Women’s Hospital has made available is a display program called Alert Notification of Critical Radiology Results (ANCR). Critical-results communication is a weak spot in radiology, and failure to communicate critical results is a leading cause of malpractice lawsuits.

Under ANCR, there are three categories of discrepant results, marked with red, orange, and yellow alerts. Red alerts go out for results that show immediately life-threatening conditions; orange alerts are for findings that could lead to serious consequences if not attended to quickly; and yellow alerts are for conditions that could become dangerous if not treated. All alerts show up on ANCR dashboards. Red and orange alerts prompt face-to-face conversations or physician-to-physician phone calls between reporting radiologists and referring clinicians.
The fact that phone calls or face-to-face consultations have taken place must be documented in the system to close the loop; until the loop is closed, ANCR keeps issuing alerts. For yellow alerts, paging systems or email can be used, but the referring clinician must sign off that the message has been received to close the loop in ANCR. Damiano can’t document that ANCR has saved lives, but she can document that it has reduced the time needed for critical results to be communicated reliably. Today, yellow alerts are normally documented within 15 minutes (the old standard was five days). For orange alerts, she says, the median response time is now less than a minute, and for red, it is three minutes.

At Massachusetts General Hospital in Boston, ANCR is being rolled out for cardiology. It’s being implemented for pathology and noninvasive cardiology at Brigham and Women’s Hospital, which also is studying how ANCR might be used to ensure that follow-up exam recommendations are heeded. “If we can’t see that an exam’s been done, we may send a note to the referring physician,” Damiano says. “You can’t always tell. The patient may have had the exam somewhere else.”

PSOs

Patient-safety organizations (PSOs) were created under the Patient Safety and Quality Improvement Act of 2005. They are designed to allow the exchange of adverse-event data so that adverse events can be understood and minimized. One advantage of PSOs, which are administered and listed through the AHRQ (Agency for Healthcare Research and Quality), is that the data are confidential (and can’t be used as a basis for lawsuits). Adverse events affecting specific patients are still subject to litigation, but the PSO data are privileged and protected. With the exception of health-insurance carriers, any health-care entity that can attest that its goals include patient safety and quality can form a PSO and apply for listing with the AHRQ. The PSO applicant must include a patient-safety work product, which is the information that it wants the PSO to protect.
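Returning to ANCR for a moment: the three-tier alert scheme and its closed-loop requirement behave like a small state machine. A minimal sketch of the described workflow, with invented class and field names (an illustration, not ANCR’s actual design):

```python
from dataclasses import dataclass

# Acknowledgment each tier requires before its loop is considered closed:
# red and orange need synchronous contact (face-to-face or a physician call);
# yellow may use a page or e-mail, but receipt must still be signed off.
REQUIRED_ACK = {"red": "synchronous", "orange": "synchronous",
                "yellow": "asynchronous"}

@dataclass
class Alert:
    severity: str   # "red", "orange", or "yellow"
    finding: str
    closed: bool = False

    def acknowledge(self, method: str) -> None:
        """Close the loop if the documented contact method satisfies the
        tier; synchronous contact always suffices. Until closed, the
        alert keeps firing on the dashboard."""
        if method == "synchronous" or method == REQUIRED_ACK[self.severity]:
            self.closed = True

def still_alerting(alerts: list[Alert]) -> list[str]:
    """Findings whose communication loop has not yet been closed."""
    return [a.finding for a in alerts if not a.closed]

red = Alert("red", "tension pneumothorax")
red.acknowledge("asynchronous")           # an e-mail is not enough for red
yellow = Alert("yellow", "incidental nodule")
yellow.acknowledge("asynchronous")        # a signed-off e-mail closes yellow
print(still_alerting([red, yellow]))      # prints ['tension pneumothorax']
```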
Norman Scarborough, MD, is vice president and medical director of MedSolutions, an RBM (radiology benefits-management company) that provides—through a subsidiary, Premerus—subspecialized imaging interpretations to clients nationwide. Premerus recently received its PSO listing. Scarborough says that Premerus intends to use its PSO to analyze internal peer-review data, including documentation of interpretation errors. Premerus randomly selects 2% of its cases (with patient-identifiable data removed) to be reinterpreted by a second physician, who issues his or her own report. This report and the original report are compared by a third radiologist, who determines whether there is concordance. If there are discrepancies between the reports, both reports are sent to a committee that determines the correct interpretation—or whether either report is correct. The radiologist who missed the correct interpretation will then be assigned an error score of one (minor) to four (serious). “For the inconsequential errors, we generate letters,” Scarborough says. “The ones with consequential errors get phone calls.”

Scarborough adds that the peer-review data from Premerus will constitute its PSO patient-safety work product. All Premerus clients with nonidentifiable patient data in the pool must agree to participate in the PSO, and the PSO provides the peer-review data with a level of protection. The data are treated as privileged, he says, adding, “That generates the ability for us to talk about the data without feeling threatened.”

Strategic Radiology

Strategic Radiology, LLC, is an alliance of 16 member radiology practices from around the country. Collectively, it has more than 1,000 radiologists under its umbrella. Lisa Mead, RN, MS, CPHQ, is Strategic Radiology’s director of quality and patient safety, and she reports that Strategic Radiology recently registered its own PSO with the AHRQ. Strategic Radiology will use the PSO to aggregate data from its members and improve methodologies for evaluating safety and quality.
“We expect that each of the practices will become a provider to the PSO and will look at different projects to send to the PSO to benchmark and improve processes,” Mead says. “It might be radiation dose—across multiple machines and protocols—across the physicians’ practices. We want to see whether the PSO will generate more data and provide some protection for sharing the data.” Mead says that Strategic Radiology hasn’t yet defined its patient-safety work product; if radiation exposure is one of the PSO measurements, Strategic Radiology members will probably take all of their dose protocols and put them on an analytical grid to see which ones yield optimized dose. The best dose protocol will then be adopted by all Strategic Radiology members, she says.

“The intent of the PSO is, first, to decide what to measure and then aggregate,” she explains. “We might want to do a root-cause analysis for near misses. The goal is to create a safe way to share information that might be protected.” Mead adds that the PSO is just one more quality-improvement tool in what she calls a giant shift toward defined and measurable processes, whether those involve radiation dose, turnaround time, critical findings, or productivity. “We have to move from ‘I think it’s working’ to ‘I know it’s working,’” she says. “We have a big mission, and if we can use big-data tools to make sure the service is improving for patients, that’s important work.”

George Wiley is a contributing writer for Radiology Business Journal.