While peer-review systems lack efficiency, QA databases can have big impact

The peer-review systems radiologists use today take up way too much time, according to a recent editorial published in Academic Radiology. Implementing an active quality assurance (QA) database, on the other hand, can significantly benefit any radiology department.

Author Ronald L. Eisenberg, MD, JD, of the department of radiology at Beth Israel Deaconess Medical Center in Boston, began his editorial by asking the reader some questions.

“Does any radiologist actually like peer review?” he asked. “Do radiologists look forward to retrospectively checking a specific number of the cases to determine whether they agree with the previous interpretation of their colleague or, if not, attempt to assess whether an apparent error was something that was understandable or should not have been made?”

Eisenberg said retrospective peer-review systems such as RADPEER are common in radiology, but he sees “little if any evidence” they actually improve the performance of radiologists. To support his point, he looked back at data from his own facility, which has used both a RADPEER-like peer-review system and an active QA database, filled out by both radiologists and referring physicians, over the past six years.

The radiology department peer-reviewed more than 9,400 cases during that time. More than 92 percent of the reviewed cases were category 1, and more than 4 percent were category 2. Meanwhile, more than 2 percent of the cases were category 3, and 0.5 percent were category 4. This, Eisenberg calculated, means an average of 39 different cases had to be reviewed before a category 3 or category 4 error could be uncovered.

And what about the QA database?

“During the same 6-year period, there were 361 cases entered into the QA system with discrepancies attributed to the radiologist, 50 percent more than via the PR system,” Eisenberg wrote. “Although not random, this approach collected a far higher percentage of clinically relevant cases and provides valuable feedback from referring physicians without requiring large amounts of radiologist time.”

Eisenberg also reflected on prior research that had touched on issues with systems such as RADPEER before summarizing his own thoughts after working with both a peer-review system and an active QA database.

“The current PR system is a time-consuming exercise that is unpopular among radiologists,” he wrote. “In our experience, one third of discrepancies were related to the presence and degree of pulmonary vascular congestion or pleural effusion, or enlargement of the cardiac silhouette, determinations that have a substantial amount of subjective variability in interpretation. Conversely, what most radiologists would probably consider more objective and potentially serious misses (lung mass/nodule; mediastinal mass; rib or other skeletal lesion, most often metastases; pneumothorax; and abdominal lesions missed on chest computed tomography) were identified in only 16 percent of the category 3 and 4 discrepancies. During the same period, these potentially serious misses represented nearly 60 percent of errors identified by the QA system via submissions by referring clinicians or by radiologists interpreting subsequent examinations or preparing cases for interdepartmental conferences.”

Eisenberg concluded his editorial by saying radiology departments should all consider using “a robust online QA system” in addition to their current peer-review practices. Using the two together, he said, helps the entire department learn from past mistakes and prepare for cases it may encounter in the future.

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
