Why radiologists won’t be replaced by deep learning

As researchers continue to make significant advances with AI and deep learning, has the time come for radiologists to be concerned about their jobs? According to a new commentary published in the Journal of the American College of Radiology, the technology has too many shortcomings right now to put the specialty’s future at risk.

“The recent success of deep learning has challenged previously held assumptions about the limitations of automation to the extent that many of the so-called cognitive fields (eg, medicine) that were once thought to be safely insulated now find their futures uncertain,” wrote Alex Bratt, MD, department of radiology at Stanford University School of Medicine. “Among all of these fields, diagnostic radiology has been singled out as being particularly susceptible to the effects of automation, having reached a new level of infamy as the poster child for the next wave of technological unemployment.”

But deep learning won’t be replacing radiologists anytime soon, Bratt explained, and one key reason is that deep neural networks (DNNs) are naturally limited by “the size and shape of the inputs they can accept.” A DNN can help with straightforward tasks that rely on a few images, such as bone age assessment, but it becomes less useful as the task grows more complex. This limitation, Bratt explained, is related to the concept of long-term dependencies.

“To understand the terminology, imagine that you are reading a novel, and there is a character who is mentioned a few times in chapter one but not again until chapter 10,” he wrote. “You recognize the character because she is part of a mental image containing the setting and all of the prior events that led to this part in the novel. The fact that you can integrate information from the beginning and end of such a large block of text into one cohesive mental model means that you are good at long-term dependencies.”

In radiology, long-term dependencies are a problem for DNNs because the networks cannot take in all of the relevant context at once. A human radiologist, by contrast, faces no such restriction and can weigh prior imaging results, large datasets and detailed clinical notes while forming a single assessment.
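To make that limitation concrete, consider the minimal sketch below, which assumes PyTorch and arbitrary toy dimensions rather than anything from Bratt’s commentary: a network whose final layer is sized for one image resolution will reject a study of any other size.

    import torch
    import torch.nn as nn

    # Toy classifier whose final layer is sized for a single 256 x 256 image.
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 256 * 256, 2),  # weight matrix is fixed by the 256 x 256 assumption
    )

    with torch.no_grad():
        print(model(torch.randn(1, 1, 256, 256)).shape)  # works: torch.Size([1, 2])

    try:
        with torch.no_grad():
            model(torch.randn(1, 1, 512, 512))  # a larger study does not fit the fixed input size
    except RuntimeError as err:
        print("rejected:", err)

The first call succeeds because the input matches the 256 x 256 size the model was built around; the second fails with a shape mismatch. That is the sense in which a DNN is limited by the size and shape of the inputs it can accept, no matter how much relevant context sits outside that window.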

Another issue with DNNs is how easily they can fall apart when exposed to small changes in their inputs. A DNN may perform flawlessly after being trained on one institution’s dataset, for instance, but its performance can suffer when it encounters data from a different institution.

“This again reflects the fact that ostensibly trivial, even imperceptible, changes in input can cause catastrophic failure of DNNs, which limits the viability of these models in real-world mission-critical settings such as clinical medicine,” Bratt wrote.
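For a concrete, if entirely synthetic, illustration of that failure mode, the sketch below uses made-up intensity features and a simple scikit-learn classifier (none of it drawn from the commentary) to show a model fit at one site degrading when the same kind of data arrives with a slightly different calibration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_site(n, intensity_offset):
        # Simulated pixel-intensity features; each site's scanner adds a constant offset,
        # and class 1 cases are slightly brighter on average than class 0 cases.
        labels = rng.integers(0, 2, size=n)
        features = rng.normal(loc=labels * 0.5 + intensity_offset, scale=0.5, size=(10, n)).T
        return features, labels

    X_a, y_a = make_site(2000, intensity_offset=0.0)    # training institution
    X_a2, y_a2 = make_site(1000, intensity_offset=0.0)  # held-out data from the same site
    X_b, y_b = make_site(1000, intensity_offset=0.5)    # new institution, slightly brighter scans

    clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
    print("accuracy at the training institution:", round(clf.score(X_a2, y_a2), 2))
    print("accuracy at the new institution:     ", round(clf.score(X_b, y_b), 2))

Nothing about the second site’s scans would look unusual to a human reader; only the calibration differs, yet the toy model’s accuracy falls from above 90 percent at its home site to roughly chance at the new one.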

Bratt concluded by noting that he welcomes advancements in AI “with open arms,” but said it is simply too early for the specialty to worry about being replaced by these technologies.

“As radiologists, it behooves us to educate ourselves so that we can cut through the hype and harness the very real power of deep learning as it exists today, even with its substantial limitations,” he wrote. “To channel Mark Twain, the reports of radiology’s demise are greatly exaggerated.”

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
