AI can help radiologists learn from past, provide better care in future

Machine learning models can be trained to learn from how radiologists make decisions when interpreting screening mammograms, according to a new study published in the Journal of Digital Imaging. Such research may have a significant impact if used to train specialists.

“In this study, we focused on evaluating whether a machine can be taught about radiologists’ attentional levels and their interpretation of mammograms,” wrote Suneeta Mall, of the University of Sydney in Lidcombe, Australia, and colleagues. “The reasoning behind our pursuit is to explore the intricacies of search, perception and cognition errors, and, using this acquired knowledge, build customized training programs that can provide versatile information leading to enhanced learning experience.”

The study included data from 120 mammograms captured using full-field digital mammography. A team of eight radiologists read each exam in random order while wearing a head-mounted device that tracked their eye movements using an infrared beam. A separate radiologist, drawing on pathology reports and additional imaging examinations, established a ground truth against which the researchers could make comparisons.

Mall et al. then developed several convolutional neural network (ConvNet) architectures for the study, testing each one's performance with and without transfer learning. Overall, the team found that its systems were a success.

“Our ensembled deep-learning network architecture can be trained to learn about radiologists’ attentional level and decisions,” the authors wrote. “High accuracy and high agreement between true and predicted values in such modelling can be achieved.”

Transfer learning, the researchers added, improved the performance of the team’s machine learning models.
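The study itself does not detail its training code, but the core idea of transfer learning can be sketched in miniature: keep a pretrained feature extractor fixed and train only a new task-specific head. In this illustrative example, a fixed random projection stands in for frozen, pretrained ConvNet features, and a logistic-regression head is trained on top of them; all data and dimensions are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_backbone(x, W_frozen):
    """Stand-in for a pretrained feature extractor: weights are never updated."""
    return np.maximum(0.0, x @ W_frozen)  # ReLU features

def train_head(features, labels, epochs=200, lr=0.5):
    """Train only the new classification head (logistic regression)."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid
        grad = p - labels                              # dLoss/dlogit
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy data: a linearly separable binary task.
x = rng.normal(size=(200, 8))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

W_frozen = rng.normal(size=(8, 16))  # "pretrained" weights, kept fixed
feats = frozen_backbone(x, W_frozen)
w, b = train_head(feats, y)

preds = (feats @ w + b > 0).astype(float)
accuracy = (preds == y).mean()
print(f"training accuracy with frozen backbone: {accuracy:.2f}")
```

The same pattern applies at full scale: the frozen backbone would be a ConvNet pretrained on a large image corpus, and only the final layers would be re-trained on the mammography task.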

Where the team’s models fell short, however, was in determining the types of cancers that radiologists missed. One possible reason for this shortfall is that “dwell time,” the amount of time a radiologist spends looking at a missed cancer, does not provide researchers with enough useful information on its own.
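To make the “dwell time” notion concrete, here is a minimal sketch of how it might be computed from eye-tracking output. The gaze-log format and lesion bounding box below are hypothetical, invented for illustration; the study's actual tracker output is not described in this article.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned bounding box around a (missed) lesion."""
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def dwell_time(samples, region):
    """Sum the time between consecutive gaze samples whose starting
    point falls inside the lesion region: one simple definition of
    dwell time. Each sample is (timestamp_s, x, y)."""
    total = 0.0
    for (t0, gx, gy), (t1, _, _) in zip(samples, samples[1:]):
        if region.contains(gx, gy):
            total += t1 - t0
    return total

# Hypothetical gaze log sampled every 50 ms.
gaze = [(0.00, 10, 10), (0.05, 52, 48), (0.10, 55, 50),
        (0.15, 54, 51), (0.20, 90, 90), (0.25, 91, 92)]
lesion = Region(45, 40, 60, 55)
print(f"dwell time on lesion: {dwell_time(gaze, lesion):.2f} s")  # → 0.15 s
```

A single scalar like this discards where the gaze wandered before and after the lesion, and how fixations were distributed within it, which may be part of why dwell time alone carries too little signal to predict the type of missed cancer.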

So how can these AI models be used in training? Mall et al. provided some key context.

“We hypothesize that the focus of training should go beyond understanding characteristics of malignancies and learning to differentiate cancer from other abnormalities (or normality),” the authors wrote. “We believe that these training and teaching programs should bring more transparency in how radiologists interact with mammograms—enabling radiologists to be more conscious about their attentional deployment and assisting them to develop an efficient visual search strategy from early on.”