Accurate AI: Machine learning models identify findings in radiology reports

Machine learning models can identify key information in radiology reports with a high degree of accuracy, according to a new study published in Radiology.

The authors used more than 96,000 head CT reports for their research, manually labeling a small subset and then training bag-of-words (BOW) models on those labels to annotate the rest. BOW was used because it “discards grammar and context” and “utilizes document-level word occurrences as its features.”
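As a rough illustration of how a bag-of-words pipeline of this kind might work (the report texts, label scheme, and classifier choice below are placeholders, not the study's actual code), consider this scikit-learn sketch:

```python
# Minimal sketch of a bag-of-words report classifier, assuming scikit-learn.
# The report texts and labels are hypothetical examples, not study data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical small, manually labeled subset of head CT reports.
labeled_reports = [
    "Acute intracranial hemorrhage in the left frontal lobe.",
    "No acute intracranial abnormality.",
]
labels = [1, 0]  # 1 = critical finding present, 0 = absent

# Bag of words: discard grammar and word order, keep document-level word counts.
vectorizer = CountVectorizer(lowercase=True, stop_words="english")
X_train = vectorizer.fit_transform(labeled_reports)

# Train a simple classifier on the manually labeled subset.
classifier = LogisticRegression()
classifier.fit(X_train, labels)

# Apply the trained model to the remaining, unlabeled reports.
unlabeled_reports = ["Small subdural hematoma along the right convexity."]
X_new = vectorizer.transform(unlabeled_reports)
predicted_labels = classifier.predict(X_new)
```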

Overall, the machine learning algorithms were able to successfully identify findings in the reports. The best-performing model had a held-out area under the receiver operating characteristic curve (AUC) of 0.966 for identifying critical head CT findings and an average AUC of 0.957 across all head CT findings. Sensitivity for identifying critical findings was more than 92 percent, and specificity was more than 89 percent.
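Metrics like these are typically computed on a held-out set of manually labeled reports. A minimal sketch of that evaluation step, assuming model-predicted probabilities and true labels are already in hand (the variable values are illustrative only):

```python
# Illustrative evaluation on a held-out, manually labeled set.
# y_true and y_score are placeholders, not the study's data.
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = [1, 0, 1, 0, 1, 0]               # manual labels for held-out reports
y_score = [0.9, 0.2, 0.8, 0.4, 0.7, 0.1]  # model-predicted probabilities

auc = roc_auc_score(y_true, y_score)

# Threshold probabilities to get binary predictions, then derive
# sensitivity (true positive rate) and specificity (true negative rate).
y_pred = [1 if s >= 0.5 else 0 for s in y_score]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```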

“The method described in this article can be used to generate a large set of automatically labeled radiology reports based on a small set of manually labeled reports,” wrote Eric Karl Oermann, MD, Icahn School of Medicine at Mount Sinai in New York City, and colleagues. “Because these labels are based purely on the text contained in the reports and do not incorporate any machine-derived features learned from the corresponding imaging, they can be used as an independent set of labels to train deep learning–based image classification models.”

The authors noted that this success came, at least in part, as a result of the “highly structured language of radiology.”

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
