Researchers have developed an algorithm that identifies whether follow-up imaging recommendations are adhered to, sharing their findings in the Journal of Digital Imaging.
The study’s authors—representatives from Philips, Harvard Medical School in Boston and the University of Washington in Seattle—noted that ignored follow-up imaging recommendations can lead to “delayed treatment, poor patient outcomes, complications, unnecessary testing, lost revenue, and legal liability.” But keeping track of whether their recommendations are followed can be challenging for radiologists.
“With the exception of specific screening programs (e.g., mammography), radiology departments often do not have a means to automatically track which patients have been recommended for follow-up imaging and, more significantly, may not know if patients have scheduled and completed the appropriate follow-up imaging study in a timely manner,” wrote lead author Sandeep Dalal, Philips Research North America, and colleagues. “Even when the patient has completed the follow-up imaging study, this information is not typically explicitly recorded in any of the hospital systems, such as the electronic health record or radiology information system.”
Dalal et al. worked to develop a machine learning- and natural language processing (NLP)-based algorithm that determines a study’s most likely follow-up examination based on “recommendation attributes, study meta-data and text similarity of the radiology reports.”
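The article does not publish the team’s implementation, but the core idea of ranking subsequent reports by text similarity can be illustrated with a minimal sketch. The function names and the bag-of-words cosine similarity below are assumptions for illustration; the actual algorithm also weights recommendation attributes and study metadata, which are omitted here.

```python
import math
import re
from collections import Counter


def _vectorize(text):
    # Simple bag-of-words term-frequency vector (illustrative only;
    # a production system would likely use TF-IDF or embeddings).
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine_similarity(a, b):
    # Cosine of the angle between two term-frequency vectors.
    va, vb = _vectorize(a), _vectorize(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0


def most_likely_followup(source_report, candidate_reports):
    # Rank candidate subsequent reports by text similarity to the source
    # report and return the best match. The study's algorithm additionally
    # considers recommendation attributes (modality, body part, time frame)
    # and study metadata, not modeled in this sketch.
    return max(candidate_reports, key=lambda r: cosine_similarity(source_report, r))
```

In this toy form, a follow-up chest CT report would score higher against its source report than an unrelated brain MRI report, simply because the two share more terms.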
The team had previously developed a detection algorithm that could distinguish between nine different types of recommendations found in radiology reports. For this study, that algorithm was applied to a total of 564 imaging examinations, with mammograms excluded because “their follow-up is well prescribed.” A team of three radiologists then reviewed the recommendations extracted by the algorithm, finding that five of them were not valid. A total of 8,691 subsequent reports were then identified that correlated to the final 559 examinations, and the same radiologists established a ground-truth dataset.
Next, the algorithm was developed using both machine learning and NLP. It determined that 387 of the 559 source examinations had a subsequent follow-up examination; the other 172 were never properly followed up. Overall, the algorithm achieved an F-score of 0.807. The inter-radiologist F-score ranged from 0.853 to 0.868, so the algorithm’s performance was deemed “acceptable.”
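For readers unfamiliar with the metric, the F-score reported above is the harmonic mean of precision and recall. A minimal sketch of the computation follows; the counts in the usage comment are purely illustrative and do not come from the study.

```python
def f1_score(true_positives, false_positives, false_negatives):
    # Precision: of the matches the algorithm proposed, how many were correct.
    denom_p = true_positives + false_positives
    precision = true_positives / denom_p if denom_p else 0.0

    # Recall: of the true follow-up matches, how many the algorithm found.
    denom_r = true_positives + false_negatives
    recall = true_positives / denom_r if denom_r else 0.0

    # F1 is the harmonic mean of precision and recall.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Illustrative counts only: 8 correct matches, 2 spurious, 2 missed
# gives precision = recall = 0.8, so F1 = 0.8.
print(f1_score(8, 2, 2))
```

An F-score of 0.807, measured against radiologists who agree with each other at 0.853 to 0.868, places the algorithm close to the practical ceiling set by human inter-reader variability.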
“The follow-up matching algorithm provides a means for radiologists and radiology department administrators to determine if patients have completed the clinically appropriate follow-up imaging study,” the authors wrote. “A system to proactively send reminders to referrers, primary care providers, and patients can ensure that patients receive the timely care they need.”