Measuring Performance in Radiology
Number crunchers at radiology practices might occasionally lose sleep over the complex nature of performance assessment, but that’s nothing compared with the sleepless nights experienced by women who learn of possible breast abnormalities. In his work as regional radiology department chief at Kaiser Permanente (KP) Colorado in Denver, Greg Mogel, MD, instinctively knows this, and he sets the tone accordingly. It’s not that Mogel neglects the hardcore financial metrics—far from it. It’s just that the 15-year radiology veteran is convinced that the organization can do it all (and do it well). The philosophy is embraced from the top down, thanks to a KP culture that puts sleepless nights on par with productivity-based metrics.

The sleepless-night indicator is no mere platitude. It is measured, refined, and reported to top leaders. The clock starts ticking the moment a woman is told that her mammogram is abnormal (or when a palpable abnormality is found), and it runs until the time of the biopsy.

With so many moving parts, the sleepless-nights initiative is nothing less than a large interdisciplinary project that involves several populations within the radiology department. To improve overall performance at KP, Alise Vanoyen, MD, mammography section chief; Rachel Biller, CPMG, radiology business manager at KP Colorado; and Mogel examined the situation and recommended behavior changes for radiologists, mammography technologists, schedulers, equipment purchasers, and even equipment distributors.

“This initiative has led to new jobs, such as breast coordinators and navigators,” Mogel says. “Sleepless nights is a highly studied metric. It is a number we calculate internally every two weeks and report to the top of the leadership chain.”

Reducing Sleepless Nights

The potential for delays and long nights is enormous due to the numerous handoffs from discovery to diagnosis. Many of these handoffs occur in the department—and, sometimes, across departments.
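At its core, the sleepless-nights indicator is an interval metric: nights elapsed between notification of an abnormal result and the biopsy, summarized per two-week reporting period. The article does not describe KP's actual calculation; the sketch below is a minimal illustration of the idea, using invented dates and a hypothetical record layout.

```python
from datetime import date
from statistics import mean, median

# Hypothetical case records for one two-week reporting period:
# (date the patient was notified of an abnormal result, biopsy date).
cases = [
    (date(2024, 5, 1), date(2024, 5, 9)),
    (date(2024, 5, 3), date(2024, 5, 6)),
    (date(2024, 5, 7), date(2024, 5, 21)),
]

def sleepless_nights(notified: date, biopsy: date) -> int:
    """Nights between abnormal-result notification and biopsy."""
    return (biopsy - notified).days

waits = [sleepless_nights(n, b) for n, b in cases]
print(f"cases: {len(waits)}")
print(f"mean wait: {mean(waits):.1f} nights, "
      f"median: {median(waits)}, worst: {max(waits)}")
```

Reporting the worst case alongside the mean matters here: a single long wait is exactly the kind of outlier the initiative exists to eliminate, and an average alone would hide it.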
“We have reduced sleepless nights by about 30% within the department of radiology,” Mogel says. “Did that improve patient care? That is self-evident. It is humane, and the highest value in medicine, to reduce suffering and uncertainty for women in that situation. Does it improve the financial bottom line? Perhaps, but probably not; you could say that as we improve the process for patients, it will amount to good word of mouth for the facility.”

According to Biller, schedulers are given scripts with an eye toward reducing the amount of time for call-backs, and this speeds up the second round of imaging after an abnormality is discovered on a screening mammogram. Same-day biopsies are now a regular part of the workflow as a direct result of studying performance-assessment measures. Quickly substantiating the abnormality through additional appropriate imaging is part of that workflow, as is carrying out all compliance work necessary to get orders (up to and including biopsy) from physicians outside the department.

“In the past month, we have begun doing an increasing number of same-day or fast-track biopsies,” Mogel reports. “This is something that does not happen very frequently anywhere. We are collecting the data, and we will have numbers to compare our August timing, for example, to our May timing. We follow the numbers closely. Every fast-track biopsy means a woman is only waiting for the pathology report, which is literally out of the hands of the radiology department.”

Mogel points out that breast imaging, since it is largely based on a screening modality, is inherently different from other areas of radiology. He believes that much can be learned from mammography’s standardization of follow-up care, as outlined in the ACR BI-RADS® criteria. The role of a mammographer ultimately represents a much more defined and repeatable task than that of a radiologist covering many modalities.
“Generalizing performance assessment is difficult across specific tasks within the same radiology department, let alone across different radiology departments with different financial realities, pressures, payor mixes, and responsibilities to the ordering clinicians,” Mogel acknowledges. He continues, “The ACR would like to reproduce the success it has had in the standardization of mammography in CT and MRI. I, too, dream of that day. There currently are no BI-RADS equivalents for chest CT. There is no widely agreed-upon protocol for cardiac MRI. These fields are much more in the formative stages.”

Nonetheless, Mogel is convinced that the methods developed at KP are reproducible at other sites, and mammography is a good model to copy, if possible. It comes down to setting priorities. The sleepless-nights measure affects everyone, from senior leaders to every technologist, staff member, and physician. Incentives are applied at every point, and information is collected biweekly, from mammogram acquisition through interpretation, call-back, and biopsy. Changing behaviors in this way allows each individual to see how his or her performance has decreased the overall number of sleepless nights. The performance assessment ultimately shines a powerful light that improves care and seeps into the fabric of the culture.

Transforming the Culture

Mogel and Biller are looking for ways to reproduce the mammography performance-assessment model in other areas of medical imaging. They don’t claim to have all the answers, but they relish a future likely to reveal metrics that fully capture the value proposition that radiology departments bring to each organization. Governmental entities have tried to set the bar and establish consistent ways to assess performance, but the effort has its problems.

“National approaches to assessing performance are doomed, at least in the current state, to fail,” Mogel says. “The reason is that an academic setting requires different measures.
Private practice is different, and a multispecialty outpatient practice such as ours has different measures than an inpatient structure has.”

The value proposition of the modern radiology department also continues to change, as do external pressures on radiology as a cost center. Biller observes that not only is there a trend of reduced reimbursement per RVU, but there is also a downward trend in the number of RVUs associated with individual studies, requiring agility in all facets of maximizing reimbursement. Along with RVU-based productivity, Biller keeps track of a wide variety of measures, such as the positive predictive value of mammography, stage 0 and stage 1 breast cancers detected, turnaround times, and satisfaction-survey results for patients and ordering clinicians. Many of these standards have been highlighted by the ACR, but the key to success in day-to-day operations is to keep accurate records routinely and rigorously.

“We monitor and drive productivity by creating an environment to foster that productivity,” Mogel says. “We make sure that quality never suffers in the pursuit of productivity.”

Mogel says that straight productivity models tend to remove incentives for crucial behaviors such as clinician interaction. “There is danger in using commodity-style productivity measures such as RVUs,” Mogel says. “There are no RVUs assigned to clinician interaction. No RVUs are assigned to helping technologists with patients who are anxious or having a tough time in the MRI scanner.” He adds, “Using commodity-style, RVU-based productivity measures too broadly risks disincentivizing other behaviors that radiologists can do to add value. Ultimately, radiologists’ value is to serve as partners in shared decision making about diagnostics. RVUs don’t capture that.”

Because radiology is a service-oriented specialty, Biller emphasizes the importance of identifying the value proposition.
“In a capitated environment, reducing utilization of radiology services is obviously valuable,” she says. “In a fee-for-service environment, maximizing use of radiology services is valuable; however, assessing radiologists in those two environments would be different.”

Depending on Data

As CMIO for Southwest Diagnostic Imaging (Phoenix, Arizona), James Whitfill, MD, provides IT services for Scottsdale Medical Imaging Ltd (SMIL). It’s an enormous task, made all the more difficult by the sheer amount of data generated by SMIL, a busy radiology practice. According to Whitfill, the key to good performance assessment is choosing data that truly influence the patient’s health experience. Do patients end up in the emergency department? Can they get follow-up care from their physicians?

“We have outstanding access to tremendous amounts of data,” Whitfill says. “These data are all related to the patient’s radiology experience. Ideally, you would want to match those up with other external sources that tell you how the course of the disease is improving.”

Information can generally be found in the electronic health record (EHR), the paper chart, or the hospital information system. Amalgamating the data into a cohesive whole is no easy task, and Lisa Mead, RN, CPHQ, CAO at SMIL, is partially responsible for getting the data to Whitfill.

“Tracking the patient’s entire experience, within a closed system, is the only way really to know what is going on,” Mead says. “We all think performance assessment is difficult. It is complex because it is a large umbrella. If you look at all the things that go into quality assessment, it gets back to the performance that you are assessing.”

Accurate benchmarking is one way to know whether a department or practice is on track. SMIL relies on Strategic Radiology (St Paul, Minnesota), a group of 16 private-practice, physician-owned radiology groups across the country, to gauge its performance against that of other outstanding groups.
SMIL is a member of the entity, which encompasses about 1,000 radiologists (who read millions of examinations per year). Benchmarking helps gauge the business side of the equation, while EHRs are increasingly interacting with the RIS and PACS to make data acquisition more convenient.

“It is not enough just to create a report and say, ‘Here’s the truth,’” Whitfill says. “You get reports, you get data, and then you have to understand whether they make sense. Without objective data, everyone thinks his or her patients get the best care, and everybody believes he or she is maximally efficient, really busy, and in need of more resources.”

For Mead, more data created more questions and motivated her to look deeper into performance assessment to address new issues. “The first thing I think of is the phones, which encompasses the amount of time on hold and people who hang up,” she says. “Often, you hear that you need more people to answer the phones because people complain that you are not answering and/or they are on hold too long. You really need to look at the information and look at how many people are on the phone. How many calls do they take? What is the length of the call?”

She continues, “You can respond by staffing appropriately or reallocating shifts. Look at staffing hours. Look at individual performance. There is a lot you can do with the information to manage a service area of your company better.”

Expense management, productivity, throughput, and turnaround time all require complete assessment. With the help of Whitfill, SMIL has gotten a handle on its expenses by focusing on the basics. “The equation, in any business, involves revenue and expense,” Mead says. Priority data under the quality umbrella are gathered in a variety of places, such as the payroll system, the RIS, and the billing system.

“We use two tools for most of these reports, to look at the metrics,” Mead says. “These are business-intelligence tools.
We analyze the data and find benchmarks.”

Whitfill adds, “Without a robust performance-management system, you are making really critical decisions about staff and equipment without the proper data. You are making large decisions based on intuition. The larger your organization, the tougher it can be. Organizations can really drift off track. It’s all about matching resources to demand, and data provide that opportunity in an objective way.”

Ongoing Monitoring

At SMIL, indicators are tracked on a weekly, monthly, quarterly, and yearly basis. For employees, Mead looks at job satisfaction and performance, which are usually tied to a yearly merit bonus.

“We ask our supervisors and staff to meet during the year to give feedback on performance,” Mead explains. “From a clinical perspective, and under the quality umbrella, we look at the quality of images. We get this from feedback from referring physicians and/or comments from our radiologists. They can let the supervisors know if there are any issues with image quality.”

Patient satisfaction is always a concern, and Mead spends a lot of time poring over satisfaction surveys. Efficiency improved further when SMIL’s surveys were automated and outsourced; the outside firm sent out 1,000 surveys to referring providers for feedback, alleviating the burden on in-house staff and providing a wealth of information.

Meanwhile, physicians routinely review samples from each modality under the ACR’s RADPEER™ peer-review program. Results are fed into the ACR database and used to review trends and identify learning opportunities. Mead puts everything under the quality umbrella, with specific emphasis on image quality; satisfaction; risk management; compliance; physician quality, via peer review; and various projects through the accrediting bodies.

“The physician peer-review program is probably the most difficult to measure,” Mead says. “There is such a large exam volume in radiology.
With so many exams, you must make sure that you are building a just culture that is not based on blame, and that can be difficult. You want to create an environment in which everyone can truly share and learn.”

At KP, Mogel also uses RADPEER, which monitors participation by individual radiologists and tracks how many colleagues’ cases each reviews. Interpretations are graded on a scale of 1 to 4, and every month brings a quality-assessment meeting to discuss particular cases. “Many radiology departments are not great at making sure everyone is reviewing enough,” Mogel says. “We make sure everyone is participating equally.”

Mogel says that a long-term, patient-centered philosophy at KP has made teamwork and high performance entrenched parts of the culture. The standards of the team are high, and while performance-assessment numbers usually back that up, the raw statistics don’t always tell the whole story.

“There are shifts and tasks that must be commoditized, but over-reliance on productivity as a measure is a big mistake and has significant unintended consequences,” he says. “Radiology involves shared decision making, utilization decisions, and decisions to reduce radiation exposure. If productivity is relied upon too much, it reduces the radiologist’s role to that of a commodity. That does great damage to the whole specialty, and ultimately, to the people we serve.”

Greg Thompson is a contributing writer for Radiology Business Journal.