Machine learning techniques perform well when tasked with predicting malignancy in breast lesions identified during breast cone-beam CT (CBCT) exams, according to a new study from German researchers published in the American Journal of Roentgenology. One technique, back propagation neural networks (BPN), outperformed two radiologists.
The authors used five different machine learning techniques (BPN, random forests, extreme learning machines, support vector machines and K-nearest neighbors) to predict malignancy in 81 suspicious lesions found in 35 patients during contrast-enhanced breast CBCT exams. Two experienced radiologists analyzed the same data.
Overall, BPN produced an area under the curve (AUC) of 0.91, sensitivity of 0.85 and specificity of 0.82. One radiologist had an AUC of 0.84, sensitivity of 0.89 and specificity of 0.72, while the other radiologist had an AUC of 0.72, sensitivity of 0.71 and specificity of 0.67.
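For readers unfamiliar with the three metrics reported throughout the study, here is a minimal sketch of how AUC, sensitivity and specificity are computed for a binary malignancy classifier. The labels and scores below are illustrative placeholders, not data from the study.

```python
# Illustrative labels and classifier scores (1 = malignant, 0 = benign).
y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_score = [0.90, 0.20, 0.80, 0.35, 0.40, 0.10, 0.70, 0.30, 0.95, 0.55]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]  # threshold at 0.5

def auc(y_true, y_score):
    """AUC: probability a random malignant case scores above a random benign one."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(y_true, y_pred):
    """Sensitivity = true-positive rate; specificity = true-negative rate."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

a = auc(y_true, y_score)        # → 0.92
se, sp = sens_spec(y_true, y_pred)  # → (0.8, 0.8)
```

A higher AUC means the classifier ranks malignant lesions above benign ones more consistently, independent of any single decision threshold, which is why the study reports it alongside the threshold-dependent sensitivity and specificity.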
“Our study shows that machine learning techniques provide high and robust diagnostic performance in malignancy prediction of breast lesions identified at CBCT,” wrote lead author Johannes Uhlig, MD, MPH, of University Medical Center Goettingen in Goettingen, Germany, and colleagues.
Uhlig and colleagues broke down one possible reason for BPN’s strong performance.
“One reason for the superior performance of BPN in our study might be the hidden layer architecture allowing the machine learning technique to amplify important input variables and, at the same time, suppress irrelevant information,” the authors wrote. “Potentially, with more hidden layers, the BPN diagnostic performance would have been even higher, while at the same time requiring additional computational power.”
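The mechanism the authors describe can be sketched with a toy one-hidden-layer network trained by backpropagation. Everything below is an illustrative assumption, not the study's model: the data are synthetic (only the first feature carries signal, the second is pure noise), and the architecture and hyperparameters are arbitrary.

```python
import numpy as np

# Synthetic data: the label depends only on feature 0; feature 1 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 4                                    # hidden units
W1 = rng.normal(scale=0.5, size=(2, H))  # input -> hidden weights
b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H)       # hidden -> output weights
b2 = 0.0

lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)             # hidden layer activations
    p = sigmoid(h @ W2 + b2)             # predicted malignancy probability
    # Backpropagate the cross-entropy gradient through both layers.
    d2 = (p - y) / len(y)
    W2 -= lr * (h.T @ d2)
    b2 -= lr * d2.sum()
    d1 = np.outer(d2, W2) * h * (1 - h)
    W1 -= lr * (X.T @ d1)
    b1 -= lr * d1.sum(axis=0)

# In typical runs the hidden weights attached to the informative input grow
# large while those on the noise input stay small -- the "amplify/suppress"
# behavior the authors point to.
informative = np.abs(W1[0]).sum()
noise = np.abs(W1[1]).sum()
acc = ((p >= 0.5) == y).mean()
```

The gradient update is what makes the selective weighting happen: inputs that reduce the training error keep accumulating weight through the hidden layer, while inputs uncorrelated with the label receive near-zero gradient and are effectively suppressed.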
The authors also noted that the radiologists in their study were highly experienced, making BPN’s strong performance all the more noteworthy.
Looking at the performances of the other machine learning techniques, support vector machines had an AUC of 0.90, a sensitivity of 0.90 and a specificity of 0.79. Random forests had an AUC of 0.87, a sensitivity of 0.87 and a specificity of 0.72. K-nearest neighbors had an AUC of 0.82, a sensitivity of 0.82 and a specificity of 0.63. Extreme learning machines had an AUC of 0.77, a sensitivity of 0.81 and a specificity of 0.63.
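A head-to-head comparison of this kind can be sketched with cross-validated AUC in scikit-learn. This is a hedged illustration only: the data are synthetic, the model settings are defaults rather than the study's configuration, and extreme learning machines are omitted because scikit-learn ships no implementation of them.

```python
# Benchmark several classifier families on the same data by cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a lesion dataset (81 cases, 10 features).
X, y = make_classification(n_samples=81, n_features=10, random_state=0)

models = {
    "support vector machine": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "K-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}
aucs = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
        for name, m in models.items()}
```

Scoring every model with the same cross-validation folds and the same AUC metric is what makes numbers like those above comparable across techniques.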