Integrating a tool into the picture archiving and communication system (PACS) may help to bolster radiologists’ ability to pinpoint discrepancies in imaging studies.
That’s according to a new study conducted at Loyola University Medical Center and published Dec. 17 in the American Journal of Roentgenology. Researchers note that RADPEER, the current system for such reviews, has several limitations that disrupt workflows and allow bias to seep in. Wanting to test an alternative, researchers deployed a peer review tool that included machine randomization to help reduce any preferential treatment in reviews.
The model has shown early promise, producing a five-fold increase in radiologists’ reported rate of finding significant discrepancies, wrote Joseph Yacoub, MD, with the department of radiology at Medstar Georgetown University Hospital, in Washington, D.C., and colleagues.
“This underscores the concerns about underreporting in RADPEER and highlights the effect of the implementation of peer review on the outcomes of the process,” the analysis concluded.
Yacoub et al. reached their conclusions by testing their review method during an eight-month period at Loyola, located in Maywood, Illinois. A group of 24 radiologists across three subspecialties—musculoskeletal, body imaging and neuroradiology—used the new review method for four months, and researchers compared the results against the four months before implementation. A control group of 14 radiologists—in mammography, interventional radiology and nuclear medicine—meanwhile continued reviewing studies directly in RADPEER.
The authors found that the mean significant discrepancy rate reported by each radiologist rose from 0.19% ± 0.46% before implementation to 0.93% ± 1.45% afterward. Nonadopters did not pinpoint any significant discrepancies during the study.
Researchers offered three potential reasons for the improvement: (1) integration into the PACS made the work easier, requiring far fewer mouse clicks; (2) the tool offered a degree of randomization, preventing radiologists from cherry-picking which cases to peer review; and (3) it offered blinding, omitting the interpreting clinician’s name and minimizing the risk of bias.
Yacoub believes the analysis lends further credence to concerns about underreporting of errors in RADPEER and to the need to avoid punishing radiologists based on the system’s findings. Another study published last year in the Journal of the American College of Radiology reached similar conclusions.