Machine learning gives radiologists a cost-effective assist in gauging lung cancer severity from CT

A team of scientists from several disciplines, including radiology, has devised a machine learning method that they say can cost-effectively bolster lung CT interpretation.

Stanford University experts call their new neural network “LungNet” and say the system has delivered consistent, fast and accurate interpretations of computed tomography scans. Their work—funded by the NIH’s National Institute of Biomedical Imaging and Bioengineering—may eventually help physicians find details easily missed by the human eye while also minimizing reader variability.  

“Quantitative image analysis has demonstrated that radiological images, such as CT scans of patients with lung cancer, contain more minable information than what is observed by radiologists,” lead researcher Olivier Gevaert, an assistant professor of medicine in biomedical informatics research at Stanford, said in an NIBIB statement issued June 24.

Gevaert and colleagues trained LungNet using imaging data from four independent cohorts comprising more than 700 patients with non-small cell lung cancer, which accounts for 85% of all such malignancies. The machine learning tool was able to predict overall survival across all four cohorts at four different hospitals, classify nodules as benign or malignant, and stratify patients by cancer progression.

“This is an outstanding example of how machine learning technology can be a cost-effective approach to advance disease detection, diagnosis and treatment,” added Qi Duan, PhD, director of the NIBIB Program in Image Processing, Visual Perception and Display.

Their work was fueled by a $444,000 grant from the National Institutes of Health and was recently highlighted in the journal Nature Machine Intelligence.