Automated feedback helps radiologists learn from pathology results

Would an automated radiology-pathology feedback tool provide value for radiologists? Researchers developed one and studied its effectiveness, sharing their findings in the Journal of the American College of Radiology.

The feedback tool was designed to automatically send radiologists relevant pathology results through a secure email and an integrated PACS module. It uses an algorithm to match radiology reports with various pathology, cytology and autopsy results, delivering them directly to the radiologist. Users then mark whether the radiology-pathology correlation was concordant, discordant or not applicable.
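The study does not publish the matching algorithm itself, but the workflow it describes can be sketched in broad strokes. The snippet below is a minimal, illustrative Python sketch of that idea; the field names, the 90-day matching window and the matching criteria are assumptions for illustration, not details from the paper.

```python
"""Hypothetical sketch of a radiology-pathology matching step.

The published study does not describe its algorithm; the record
fields and the 90-day window below are illustrative assumptions.
"""

from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class RadiologyReport:
    patient_id: str
    exam_date: date
    modality: str          # e.g., "CT", "MRI", "US"


@dataclass
class PathologyResult:
    patient_id: str
    result_date: date
    source: str            # "pathology", "cytology" or "autopsy"


def match_results(reports, path_results, window_days=90):
    """Pair each radiology report with pathology results finalized for
    the same patient within `window_days` after the exam (assumed window)."""
    matches = []
    for report in reports:
        for result in path_results:
            same_patient = result.patient_id == report.patient_id
            in_window = (report.exam_date <= result.result_date
                         <= report.exam_date + timedelta(days=window_days))
            if same_patient and in_window:
                matches.append((report, result))
    return matches


# After delivery (via secure email or a PACS module, per the study),
# the radiologist records one of the three categories described above.
CONCORDANCE_CHOICES = ("concordant", "discordant", "not applicable")
```

In practice a production tool would match on stronger identifiers (accession numbers, anatomic site) rather than patient and date alone, but the pairing-then-review loop above captures the feedback cycle the authors describe.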

The study’s authors saw this as a possible way to reduce costly medical errors and boost radiologist performance.

“Continuous, high-quality feedback allows radiologists to determine their own diagnostic accuracy, to evaluate reasons for any discordant results, and to provide an opportunity to learn from mistakes and improve their skill,” wrote author Ankur M. Doshi, MD, department of radiology at NYU Langone Medical Center in New York City, and colleagues. “Because discordances could also be related to pathology error or biopsy sampling error, this tool provides an opportunity to promptly recognize and address these situations to improve patient care.”

Doshi et al. selected five fellowship-trained abdominal radiologists with various levels of experience to participate in the study. The specialists retrospectively reviewed pathology results matched to their own abdominopelvic ultrasound, CT and MRI examinations from August 2017 to June 2018, determining whether the correlation was concordant, discordant or not applicable in each instance.

“Because imaging is known to have limitations for the detection and characterization of certain pathologies, we previously distributed a document to guide abdominal radiologists on how to mark concordance for common situations in which this may occur,” the authors wrote. “For example, radiologists were instructed to mark a pathology specimen reporting a colon polyp as 'not applicable' after interpretation of a routine abdominal CT on an unprepped colon that was not performed as a CT colonography.”

Overall, the radiologists noted 234 total discordant correlations, a mean of 46.8 per participant. In more than 70% of those cases, the radiologists reported, the pathology result would not have been followed up without the automated alert. More than 4% of the discordant results required additional action from the radiologist, such as a discussion with the pathologist or the patient’s referring provider.

In addition, more than 93% of the discordant results prompted the radiologist to review the imaging again. And 88% of the time, the discordant result represented a “learning opportunity” for the radiologist. Participants were also asked whether each discordant result would have an impact on their future interpretations. The answer was “definitely” 8.6% of the time, “probably” 30% of the time, “unsure” 30.9% of the time, “probably not” 21.9% of the time and “definitely not” 8.6% of the time.

“Our results show that an automated radiology-pathology feedback tool offers a valuable educational opportunity for radiologists,” the authors wrote. “The majority of discordances in this study would not have been manually followed up by the radiologist and likely would never have been discovered without the automated radiology-pathology feedback module."

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
