AI uses mammography data, health records to predict breast cancer

Researchers have developed an algorithm that predicts breast malignancy using a patient's imaging results and detailed health records, sharing their findings in Radiology. Both machine learning (ML) and deep learning (DL) were used to build the algorithm.

“We hypothesized that a model combining ML and DL can be applied to assess breast cancer at a level comparable to radiologists and therefore be accepted in clinical practice as a second reader,” wrote Ayelet Akselrod-Ballin, PhD, from the department of healthcare informatics at IBM Research and the University of Haifa in Israel, and colleagues. “The purpose of our study was to evaluate the performance of an ML-DL model for early breast cancer prediction when applied to a large linked data set of detailed electronic health records and digital mammography.”
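The article does not describe how the imaging and health-record inputs are combined, but a joint ML-DL model of this kind is commonly built by fusing an image branch with a branch over tabular clinical features. The sketch below is a minimal, hypothetical PyTorch example of that idea; the class name, layer sizes and input shapes are illustrative and do not represent the authors' architecture.

```python
# Minimal sketch (not the study's model): fuse a small CNN over a mammogram
# with a dense branch over tabular health-record features for one prediction.
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, n_clinical_features: int):
        super().__init__()
        # Image branch: tiny CNN over a single-channel mammogram crop.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch, 32)
        )
        # Clinical branch: MLP over the tabular health-record features.
        self.clinical_branch = nn.Sequential(
            nn.Linear(n_clinical_features, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both feature vectors, output a risk logit.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 16), nn.ReLU(),
            nn.Linear(16, 1),                               # logit for malignancy
        )

    def forward(self, image, clinical):
        fused = torch.cat(
            [self.image_branch(image), self.clinical_branch(clinical)], dim=1
        )
        return self.head(fused)

# Toy usage with random tensors standing in for a mammogram and EHR features.
model = FusionModel(n_clinical_features=20)
logit = model(torch.randn(4, 1, 128, 128), torch.randn(4, 20))
prob = torch.sigmoid(logit)  # predicted probability of biopsy malignancy
```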

The authors collected more than 52,000 images from more than 13,000 women who underwent digital mammography from 2013 to 2017 at one of five imaging facilities. At least one year of health records predating the mammogram was required for a patient to be included in this cohort.

The team’s algorithm was then trained on more than 9,000 matching sets of mammograms and health records, using data from both to predict biopsy malignancy and to differentiate normal from abnormal examinations. Next, it was validated with data from more than 1,000 women and tested on more than 2,500 women. Overall, the algorithm identified 48% of false-negative findings on mammograms. For predicting biopsy malignancy, it achieved an area under the ROC curve (AUC) of 0.91, a specificity of 77.3% and a sensitivity of 87%.
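For readers unfamiliar with these metrics, the snippet below shows how AUC, sensitivity and specificity are computed for a binary malignancy classifier. It uses scikit-learn with randomly generated labels and scores as stand-ins for the study's test set; the 0.5 threshold is an arbitrary operating point chosen for illustration, not the one used in the paper.

```python
# Hedged sketch of the reported metrics, using synthetic data in place of
# the study's held-out test set of roughly 2,500 women.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2500)             # 1 = biopsy-proven malignancy
y_score = 0.35 * y_true + rng.random(2500) * 0.65  # stand-in model risk scores

auc = roc_auc_score(y_true, y_score)               # area under the ROC curve

threshold = 0.5                                    # illustrative operating point
y_pred = (y_score >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                       # fraction of cancers flagged
specificity = tn / (tn + fp)                       # fraction of normals cleared

print(f"AUC={auc:.2f}, sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
```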

The algorithm was also compared to the Gail model, an established breast cancer risk assessment tool. When trained on clinical data alone, the new algorithm achieved a considerably higher AUC than the Gail model (0.78 vs. 0.54).

Though the team’s model did not necessarily outperform radiologists, its performance did fall within “the acceptable range of radiologists for breast cancer screening.” This, the authors explained, points to real potential as a tool to support healthcare providers.

“The model did not perform better than radiologists, it performed differently,” the authors wrote. “In a scenario where double reading at screening mammography is not available…we believe that the use of this model as a second reader could be beneficial.”

Akselrod-Ballin et al. noted that there are still ways their algorithm could be improved. Training it to compare findings with prior mammograms and incorporating ultrasound images, for instance, could “further improve the ML-DL model’s performance.”

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
