Why radiologists should think twice about RADPEER and consider nonrandom peer review

Adopting a nonrandom peer review process—and abandoning the ACR’s widely accepted RADPEER approach—could identify far more diagnostic errors in imaging studies and afford radiologists an opportunity to learn from their own and their peers’ mistakes, researchers suggest in the current edition of the Journal of the American College of Radiology.

“Random peer review programs are not optimized to discover cases with diagnostic error and thus have inherent limitations with respect to educational and quality improvement value,” first author Jason N. Itri, MD, PhD, of the University of Virginia Health System, and colleagues wrote in JACR. “Despite research on diagnostic error in radiology dating back as far as the 1940s, there has been a disappointing lack of progress in effective strategies to reduce or mitigate the negative impact of radiological errors.”

Imaging discrepancy rates range anywhere from 3 to 37 percent today, Itri et al. said, and those errors often stem from latent conditions, active system failures and a handful of factors that reflect human cognitive biases. Peer review was instituted to reduce those diagnostic errors—but it’s not doing as much as it could.

The authors said the American College of Radiology’s RADPEER program is the most commonly used peer review process in the U.S., but its lack of blinding, poor interrater reliability and random case sampling mean the program is inefficient and likely to miss many errors.

“Nonrandom peer review processes can be designed to identify cases with educational or performance improvement value without many of the limitations associated with RADPEER and other random peer review processes,” Itri and co-authors wrote. “Developing a process to collect cases of diagnostic error through these various routes has the potential to substantially increase the number of educational error cases and provides opportunities to identify patterns among these cases that are otherwise not possible through random peer review processes.”

For their research, the authors compared a year’s worth of random peer review performed by 10 radiologists in an abdominal imaging department with a year’s worth of nonrandom peer review in the same department. The former method was modeled after RADPEER, while the latter involved radiologists submitting cases with diagnostic errors and presenting them at a recurring peer learning conference.

“The purpose of the conference was to analyze peer-reviewed error cases and identify underlying causes of the errors to inform practice quality improvement efforts,” Itri et al. said.

Nearly 1,700 cases were accrued in the random peer review process, the researchers reported, of which 97 percent were deemed to have no discrepancies. Just 44 cases were scored as having minor discrepancies, while no significant or major issues were found during the study period. On the other hand, the nonrandom process collected 190 cases, of which 60 were scored as having minor errors, 94 were found to have significant errors and 36 were found to have major errors.

The nonrandom process allowed for more informed review, the authors said, and provided evidence for a system that places more emphasis on educational and performance improvement and less on punishing radiologists for their errors. It also opened the door to reviews of more complex CT and MRI studies, which radiologists often skip in random peer review because they are more time-consuming to evaluate.

“By removing several of the major barriers to reporting cases with diagnostic error using the random approach and developing a process whereby everyone can learn from each other’s mistakes, we believe we were able to achieve the stated goal of peer review: performance improvement through education and systematic interventions,” Itri and colleagues wrote. “Radiology departments should consider transitioning away from RADPEER as the primary form of peer review by exploring other approaches to identify and learn from diagnostic errors.”

""

