Why non-radiologists should lead radiology QA efforts

Quality assurance of radiological interpretations might be better left to non-radiologists, according to a group of researchers in Ann Arbor, Michigan, who recently found that nearly one-third of radiology reports presented at a multidisciplinary tumor board were discordant with other specialists’ opinions.

Radiologist QA is typically peer-based and often doesn’t venture outside the radiology department, first author William R. Masch, MD, and colleagues at Michigan Medicine wrote in the Journal of the American College of Radiology. The method has consistently produced interobserver concordance rates of 97 to 99 percent, but those numbers have raised some red flags.

“Lack of blinding, hindsight bias, desire to avoid interpersonal conflict and unawareness of what constitutes a clinically significant difference are likely to suppress discordance rates when QA is performed by radiologists for their colleagues,” Masch et al. wrote. “True radiologist discordance rates are likely higher than what is reported in much of the QA literature.”

Part of the draw of intradepartmental quality assurance is its convenience, the authors said: outside experts don’t have to sacrifice their time, or money, to weigh in on a specialty that isn’t theirs. Outside reviewers, though, also lack the inherent bias that comes with departmental politics, and Masch and his team said that potential for bias outweighs any convenience.

“Traditionally, radiologist QA has not involved members outside the radiology department,” they wrote. “However, given the central role non-radiologist providers play in therapeutic decision-making, and their potentially lesser interest in internal radiology politics, such providers may be better positioned to assess radiologist performance.”

The research team devised a QA strategy that could be anonymously integrated into a routine hepatobiliary tumor board, which they said allowed non-radiology professionals to contribute to the QA effort with little added burden. The team created a prospective monitoring and grading system to track the rate and cause of discordant radiologist interpretations as determined by providers in hepatology, medical oncology and radiation oncology.

Of 251 MRIs and 50 CTs presented at the tumor board, the authors found that 30 percent of reports were assigned a discordance, the majority of them minor. Major discordances, defined as interpretations that could affect clinical management, accounted for 11 percent of reports. Those numbers are vastly different from the 1 to 3 percent discordance implied by the concordance rates recorded in peer-to-peer review.

“QA conducted in line with existing tumor boards allows assessment of radiologist performance while minimizing biases intrinsic to unblinded peer-to-peer evaluations,” Masch and co-authors wrote. “The multidisciplinary nature of tumor boards probably improves the determination of what discordances are ‘clinically significant’ and bypasses issues intrinsic to intradepartmental politics.” 

The most common discordances related to mass size, tumor stage and extent, or assigned LI-RADS v2014 score, the authors reported. One radiologist had nearly 12-fold greater odds of discordance, while radiologists presenting their own studies had 4.5-fold lower odds.
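A note for readers unfamiliar with the statistic: an odds ratio scales odds, not probabilities, so 12-fold greater odds does not mean a radiologist is 12 times as likely to be discordant. A quick sketch with a hypothetical 10 percent baseline discordance probability (an assumed figure for illustration, not one from the study) shows the difference:

$$
\text{odds} = \frac{p}{1-p} = \frac{0.10}{0.90} \approx 0.111,
\qquad
12 \times 0.111 \approx 1.33,
\qquad
p' = \frac{1.33}{1 + 1.33} \approx 0.57
$$

Under that assumption, 12-fold greater odds would push the discordance probability from 10 percent to about 57 percent, well short of the impossible 120 percent that a naive “twelve times as likely” reading would imply.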

“Because of the relatively frequent occurrence of discordant interpretations assigned by non-radiologist providers in the tumor board setting, this approach can create a robust sample for monitoring individual radiologist outcomes,” Masch et al. said. “Future work should consider testing this approach in other tumor board settings and using the results to target directed quality improvement activities.”

""

