Discussions about machine learning’s impact on radiology might begin with image interpretation, but that’s only the tip of the iceberg. When it comes to realizing the technology’s full potential, it’s like Bachman-Turner Overdrive sang many years ago: You ain’t seen nothing yet.
The authors of a new analysis published in the Journal of the American College of Radiology wrote at length about the many applications of machine learning.
“Machine learning has the potential to solve many challenges that currently exist in radiology beyond image interpretation,” wrote lead author Paras Lakhani, MD, department of radiology at Thomas Jefferson University Hospital in Philadelphia, and colleagues. “One of the reasons there is great excitement in radiology today is the access to digital Big Data. Many institutions have implemented electronic health care databases over the past two decades, including for images in PACS, radiology reports and ordering information in Radiology Information Systems, and electronic health records that encompass information from other sources, including clinical notes, laboratory data and pathology records. Moreover, radiology images themselves are rich in metadata stored in the DICOM format, which may be leveraged as well. As such, there are great opportunities to uncover complex associations within the data using machine learning that would otherwise be difficult for a human to do.”
These are some of the many examples Lakhani et al. provided of how machine learning can be used in radiology beyond image interpretation:
1. Creating study protocols
Determining the right imaging protocol for each ordered study is an important part of any radiologist’s job, the authors noted, and machine learning can help speed up the process.
“This can be a time-consuming but important task,” the authors wrote. “However, recent studies demonstrate that machine learning algorithms utilizing information extracted from the provided study indications can be accurate in determining protocols of studies in both brain and body MRIs.”
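The protocoling approach the authors describe amounts to text classification: map the free-text study indication to a protocol label. The sketch below shows the idea with a minimal bag-of-words nearest-centroid classifier in pure Python; the indications, protocol names, and training pairs are invented for illustration, and real systems would use far richer NLP models and institution-specific protocol lists.

```python
from collections import Counter

# Hypothetical training data: (study indication text, assigned MRI protocol).
# Labels and phrasing are illustrative, not taken from the paper.
TRAIN = [
    ("headache rule out mass", "brain_tumor"),
    ("known glioma follow up", "brain_tumor"),
    ("seizure new onset", "seizure"),
    ("epilepsy workup", "seizure"),
    ("acute stroke symptoms", "stroke"),
    ("left sided weakness rule out stroke", "stroke"),
]

def tokenize(text):
    return text.lower().split()

# Build one word-count "centroid" per protocol from its training indications.
centroids = {}
for text, label in TRAIN:
    centroids.setdefault(label, Counter()).update(tokenize(text))

def predict_protocol(indication):
    """Pick the protocol whose centroid shares the most words with the indication."""
    words = set(tokenize(indication))
    scores = {label: sum(c[w] for w in words) for label, c in centroids.items()}
    return max(scores, key=scores.get)

print(predict_protocol("new onset seizure activity"))   # -> seizure
print(predict_protocol("follow up of known glioma"))    # -> brain_tumor
```

Even this trivial baseline captures the workflow benefit: the classifier can suggest a protocol automatically, leaving the radiologist to confirm or override rather than protocol every study from scratch.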
2. Decreasing radiation dose in CT
Decreasing radiation dose is a huge topic in medical imaging. Lakhani et al. noted that deep learning has the potential to help specialists lower dose without the usual tradeoff of producing “poorer-quality images.”
“The idea is to train a classifier to map ‘noisy’ images generated from ultralow-dose CT protocols to high-quality images from regular protocols, using deep learning techniques,” the authors wrote. “This is akin to creating ‘super-resolution’ photorealistic images from down-sampled versions, which has already shown exciting results in everyday color images.”
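The core of this idea is supervised regression from paired noisy and clean examples. The toy numpy sketch below learns a simple linear least-squares map from noisy 1-D signals to their clean versions, purely to show the noisy-to-clean training setup; the deep-learning systems the authors describe use convolutional networks on actual CT images, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 500, 16
clean = np.cumsum(rng.normal(size=(n, d)), axis=1)    # smooth-ish "images"
noisy = clean + rng.normal(scale=1.0, size=(n, d))    # "ultralow-dose" versions

# Fit W so that noisy @ W approximates clean (the supervised noisy->clean map).
W, *_ = np.linalg.lstsq(noisy, clean, rcond=None)

# Apply the learned map to unseen paired data.
clean_test = np.cumsum(rng.normal(size=(100, d)), axis=1)
noisy_test = clean_test + rng.normal(scale=1.0, size=(100, d))
denoised = noisy_test @ W

err_before = np.mean((noisy_test - clean_test) ** 2)
err_after = np.mean((denoised - clean_test) ** 2)
print(err_before, err_after)  # the learned map reduces the error
```

The learned transform exploits structure in the clean signals to suppress noise; a CNN does the same thing with vastly more expressive, nonlinear image priors.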
3. Decreasing scan time in MRI
This is similar to what machine learning can do with CT images and just as important. MRI studies take a long time to acquire, the authors noted, which keeps patients in the scanner longer. With deep learning, however, anatomic MR images can be reconstructed from undersampled raw scanner data, cutting scan time by 50 percent or more.
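MRI scan time scales with the number of k-space (frequency-domain) lines acquired, so halving the acquired lines roughly halves scan time. The numpy sketch below shows the undersampling itself on a random stand-in image, with naive zero-filled reconstruction; a deep-learning reconstruction would learn to fill in the missing lines instead of leaving them zero.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((64, 64))           # stand-in for an anatomic slice

kspace = np.fft.fft2(image)            # full acquisition
mask = np.zeros(64, dtype=bool)
mask[::2] = True                       # acquire every other line: 50% of the data

undersampled = np.where(mask[:, None], kspace, 0)
zero_filled = np.fft.ifft2(undersampled).real  # naive reconstruction

print(mask.sum() / mask.size)          # 0.5 -> half the acquisition
```

The zero-filled result suffers aliasing artifacts; the promise of learned reconstruction is recovering diagnostic image quality from exactly this kind of abbreviated acquisition.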
4. Scheduling patients
Scheduling patients is never anyone’s idea of a good time, especially for larger practices. “Many factors are involved, including time of day, day of the week, coverage location (emergency department, inpatient, outpatient), exam complexity, volume of studies, variety of modalities (MR, CT, ultrasound, x-ray), and referring clinician practice patterns,” the authors wrote. “Sometimes radiologists feel overwhelmed with the volume and complexity of studies on a given shift, and other times they are overstaffed.”
Machine learning is already being used to forecast demand and balance staffing in other industries, so why not radiology as well?
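The simplest forecasting baseline for this kind of staffing problem is a historical average per weekday. The sketch below uses invented shift data; a real model would also fold in the factors the authors list, such as time of day, coverage location, modality mix, and referring-clinician patterns.

```python
from collections import defaultdict

# Hypothetical historical shift data: (day of week, study volume).
HISTORY = [
    ("mon", 130), ("mon", 142), ("tue", 118), ("tue", 110),
    ("fri", 95), ("fri", 101), ("sat", 60), ("sat", 64),
]

totals = defaultdict(list)
for day, volume in HISTORY:
    totals[day].append(volume)

def expected_volume(day):
    """Baseline forecast: mean historical study volume for that weekday."""
    vols = totals[day]
    return sum(vols) / len(vols)

print(expected_volume("mon"))  # -> 136.0
print(expected_volume("sat"))  # -> 62.0
```

Matching radiologist coverage to forecasts like these is what would address the feast-or-famine shifts the authors describe.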
5. Billing and collections
Natural language processing and machine learning can help practices interpret reports more accurately and produce cleaner claims, the authors explained. Insurance denials are said to cost providers 3 to 5 percent of their revenue, so using these evolving technologies to recover some of that money is a plan executives are likely to get behind.
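One concrete form this takes is checking a report for required documentation before the claim is submitted. The sketch below flags missing elements with simple regular-expression rules; the rules, element names, and report text are illustrative only, not real billing criteria, and production systems would use trained NLP models rather than hand-written patterns.

```python
import re

# Hypothetical pre-submission claim check: required elements -> regex patterns.
REQUIRED = {
    "comparison": r"\bcompar(ed|ison)\b",
    "contrast documented": r"\bcontrast\b",
}

def missing_elements(report_text):
    """Return the names of required elements not found in the report."""
    text = report_text.lower()
    return [name for name, pattern in REQUIRED.items()
            if not re.search(pattern, text)]

report = "CT abdomen with contrast. No acute findings."
print(missing_elements(report))  # -> ['comparison']
```

Catching a gap like this before submission is cheaper than appealing a denial after the fact, which is where the 3-to-5-percent figure comes from.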
6. Image-based search engines
Deep learning has the potential to make text searches a thing of the past.
“One could input an image of the lungs containing a ground-glass opacity and see other CT scans containing similar findings, matched with corresponding radiology reports (or even pathology if known),” the authors wrote. “This technology could augment electronic teaching files and result in diagnostic assistance technologies.”
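Under the hood, image-based search typically means representing each archived scan as a feature vector and retrieving nearest neighbors. The sketch below does the retrieval step with cosine similarity in numpy; the random vectors stand in for the learned deep-network embeddings a real system would compute from the images themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1,000 archived scans, each summarized as a 128-dim feature vector
# (random stand-ins for learned image embeddings), normalized to unit length.
library = rng.normal(size=(1000, 128))
library /= np.linalg.norm(library, axis=1, keepdims=True)

def most_similar(query_vec, k=5):
    """Return indices of the k library scans most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = library @ q                      # cosine similarity to every scan
    return np.argsort(scores)[::-1][:k]

# A query that is a slightly perturbed copy of archived scan 42.
query = library[42] + 0.05 * rng.normal(size=128)
print(most_similar(query))  # scan 42 should rank first
```

Attaching each retrieved index to its radiology report (or pathology, if known) is what would turn this lookup into the teaching-file and diagnostic-assistance tool the authors envision.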