Researchers use machine learning to detect fractures in plain radiographs

Machine learning with deep convolutional neural networks (CNNs) can detect fractures in plain radiographs, according to a new study published in Clinical Radiology.

A team of researchers from the U.K. trained the CNN on lateral wrist radiographs performed at a single facility from January 2015 to January 2016. Each image was classified as “fracture” or “no fracture” based on the existing radiology report, and each label was verified by a human specialist before the data were used to train the CNN.

Overall, the area under the receiver operating characteristic curve (AUC) was 0.954, a number the authors said provided a proof of concept.
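The AUC summarizes how well the model's scores rank fracture cases above non-fracture cases: it equals the probability that a randomly chosen positive image receives a higher score than a randomly chosen negative one (1.0 is perfect ranking, 0.5 is chance). A minimal sketch of this computation, with made-up scores for illustration:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = fracture, 0 = no fracture.
example = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

An AUC of 0.954 therefore means that for about 95% of fracture/no-fracture image pairs, the model scored the fracture image higher.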

“An AUC of 0.954 for this model demonstrates that transfer learning from deep CNN, pre-trained on non-medical images, can be successfully applied to the problem of fracture detection on plain radiographs,” wrote authors D.H. Kim and T. MacKinnon of the Royal Devon and Exeter Hospital in Exeter, England. “This level of accuracy surpasses previous computational methods for automated fracture analysis based on methods, such as segmentation, edge detection, and feature extraction.”

The study shows how artificial intelligence (AI) technologies can help radiologists address workflow issues within their departments, the authors added.

“This level of accuracy could be very useful in applications, such as workflow prioritization and minimization of error,” they wrote. “For example, a similar system could automatically assess the numerous unreported studies languishing in radiology silos for the presence of pathology. If positive, these studies could then be flagged up to the radiology team, thus expediting the identification of potentially serious diagnoses and improving the productivity of the radiologist.”