A machine learning-based “red dot” triage system could help differentiate between normal and abnormal chest radiographs while optimizing clinician workflow, British researchers reported this month in Clinical Radiology.
E.J. Yates, first author of the paper and a foundation doctor in the West Midlands, England, said demand for radiography services has risen in recent years, driven less by fresh advances in the field than by an established workhorse: chest radiography remains the most requested imaging service in the U.K., Yates and colleagues wrote.
“Timely radiologist reporting of every film is not always possible, leading to a backlog of unreported studies,” the authors said. “In order to deliver maximal patient benefit from this reporting backlog, a system of image abnormality prioritization would be beneficial, allowing reporting to first focus on examination of pathology over normality.”
The researchers tackled this with a “red dot” approach—one that’s been around for more than four decades and traditionally involves marking abnormal plain films with circular red stickers. The method has historically achieved 78 percent sensitivity and 91 percent specificity across pooled chest and abdominal exams, Yates et al. said, but it doesn’t relieve the reporting backlog.
“Existing machine learning approaches have typically focused on formal reporting of pathology or multi-label classification, rather than prioritization based on abnormality,” the authors wrote. “The aim of the present study was to explore the ability of a deep learning, computer vision approach in binary normality classification of plain film chest radiographs to serve as a rapid screening tool.”
Yates’ team translated the red dot system into a machine learning workflow using deep convolutional neural networks (CNNs), according to the study. They used the open-source machine learning library TensorFlow to retrain the final layer of a deep CNN to perform binary normality classification on two public image datasets, and images were digitally marked with the words “red dot” when identified as abnormal. Retraining was applied to nearly 50,000 images, with validation testing on 5,505 previously unseen radiographs.
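Retraining only the final layer of a pretrained CNN amounts to fitting a simple classifier on the network’s fixed “bottleneck” features. A minimal sketch of that idea, using NumPy with synthetic features standing in for real CNN activations (all names, dimensions, and data here are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for bottleneck features: in the study these would be the
# fixed activations of a pretrained deep CNN, one row per radiograph.
n_images, n_features = 1000, 64
features = rng.normal(size=(n_images, n_features))

# Synthetic binary labels: 1 = abnormal ("red dot"), 0 = normal.
true_w = rng.normal(size=n_features)
labels = (features @ true_w > 0).astype(float)

# "Retrain the final layer": logistic regression by gradient descent,
# leaving the (simulated) feature extractor untouched.
w = np.zeros(n_features)
b = 0.0
lr = 0.1
for _ in range(500):
    logits = features @ w + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - labels                       # dL/dlogits for cross-entropy
    w -= lr * (features.T @ grad) / n_images
    b -= lr * grad.mean()

# Flag images predicted abnormal, mirroring the digital "red dot" marking.
preds = (features @ w + b > 0).astype(float)
accuracy = (preds == labels).mean()
```

The appeal of this setup for a triage tool is that only the small final layer is trained, so retraining on tens of thousands of images is fast compared with training a full CNN from scratch.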
The final model achieved 94.6 percent accuracy, Yates and colleagues reported, with sensitivity and specificity both at 93.4 percent. Positive predictive value reached 99.8 percent, though the system did yield a false-positive rate between 3 and 13 percent.
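These figures are related through the standard confusion-matrix definitions, as the short sketch below shows. The counts are made up for illustration; they are not the study’s data, and a PPV as high as the reported 99.8 percent additionally depends on how prevalent abnormal films were in the test set:

```python
# Hypothetical confusion-matrix counts (illustrative only, not from the study).
tp, fn = 934, 66   # abnormal films correctly flagged / missed
tn, fp = 934, 66   # normal films correctly cleared / wrongly flagged

sensitivity = tp / (tp + fn)                    # share of abnormal films flagged
specificity = tn / (tn + fp)                    # share of normal films cleared
ppv = tp / (tp + fp)                            # share of flagged films truly abnormal
accuracy = (tp + tn) / (tp + tn + fp + fn)
false_positive_rate = 1 - specificity           # normal films wrongly flagged
```

With these example counts, sensitivity, specificity, and accuracy all come out to 93.4 percent, matching the symmetry in the reported results.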
The authors said the results are encouraging, but the current system is limited in that it prioritizes only abnormalities, meaning the absence of a “red dot” indicator on an image doesn’t guarantee normality.
“Although further work is required to validate the application of such models to real-world datasets, the present study adds to existing literature in proposing the application of deep machine learning to the rapid automatic detection of abnormality in radiological imaging,” Yates and co-authors said. “Machine learning-based screening on radiology datasets may, therefore, have an exciting future in optimizing clinician workload in the face of expanding demand.”