As radiology embraces AI, will the industry learn from CAD’s mistakes?

After the U.S. Food and Drug Administration approved computer-aided detection (CAD) in mammography in 1998 and CMS approved reimbursement for it in 2002, there was considerable optimism about the technology's future. So what went wrong?

According to a new analysis published in the Journal of the American College of Radiology, it's important for today's specialists to understand why CAD failed in mammography; those lessons can help artificial intelligence (AI) succeed in the years ahead.

Why CAD failed

Though excitement about CAD’s potential was significant early on, the authors noted that it didn’t take long for radiologists to question the technology’s usefulness. One of the biggest reasons for this was simple: CAD just wasn’t getting the job done.

“Not only did CAD increase the recalls without improving cancer detection, but, in some cases, even decreased sensitivity by missing some cancers, particularly noncalcified lesions,” wrote authors Ajay Kohli, MD, of the Drexel University College of Medicine in Philadelphia, and Saurabh Jha, MD, with the Hospital of the University of Pennsylvania in Philadelphia. “CAD could lull the novice reader into a false sense of security. Thus, CAD had both lower sensitivity and lower specificity, a nonredeeming quality for an imaging test.”

Kohli and Jha added that radiologists tended to treat CAD like a “second reader” or “spell-checker” instead of a legitimate game-changer.

The authors explained that CAD's "inferior performance" also had to do with the processing power available at the time. Radiologists could compare mammograms with earlier studies, but the equipment running CAD often could not keep up.

Perhaps the biggest factor in CAD's failure in mammography was that its developers relied on supervised learning, an approach that ultimately reduced specificity.

“In supervised learning, the computer is trained on samples with known pathology (truth) and then tested for its ability to predict the likelihood of malignancies in a test sample (truth and lies),” Kohli and Jha wrote. “Despite the allure of supervision, the pedagogy is not neutral. Because the computer sees more cancers during its training than its test, there is verification bias, and the specificity drifts.”
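
That verification-bias point can be made concrete with a minimal sketch on synthetic data (the single imaging "feature," the prevalence figures, and the scikit-learn model below are illustrative assumptions, not anything from the JACR analysis): a classifier whose operating point is learned on a cancer-enriched training sample marks a large share of normal exams when it is applied at screening prevalence, where nearly every exam is normal.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_exams(n, prevalence):
    # One synthetic imaging feature; cancers score higher on average.
    y = (rng.random(n) < prevalence).astype(int)
    x = rng.normal(loc=np.where(y == 1, 1.0, 0.0), scale=1.0).reshape(-1, 1)
    return x, y

# Training sample enriched with known cancers, as in supervised CAD training.
X_train, y_train = make_exams(5_000, prevalence=0.5)
# Screening population: on the order of 5 cancers per 1,000 exams.
X_screen, y_screen = make_exams(100_000, prevalence=0.005)

model = LogisticRegression().fit(X_train, y_train)
flagged = model.predict(X_screen)

normals, cancers = y_screen == 0, y_screen == 1
specificity = np.mean(flagged[normals] == 0)
false_marks = np.sum(flagged[normals] == 1)
true_marks = max(np.sum(flagged[cancers] == 1), 1)
print(f"Specificity at screening prevalence: {specificity:.2f}")
print(f"False marks per detected cancer: {false_marks / true_marks:.0f}")

The numbers are made up, but the mechanism matches the authors' description: an operating point that looks reasonable on the enriched training mix produces far too many recalls once almost every exam the system sees is normal.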

Why future CAD platforms can still succeed

Kohli and Jha noted that radiologists were using CAD to do what they, in theory, could already do on their own—spot breast cancers that the human eye could see. “The 16 percent of cancers that are missed by radiologists likely reflect limitations in image perception by the human eye,” they wrote. “AI can help if we focus on the unique features of the missed cancers.”

Shifting from supervised learning to unsupervised learning could help a "CAD 2.0" succeed here. Instead of looking in exactly the same places as a radiologist, the AI would be free to use "its own pattern-learning abilities" to find what a radiologist is likely to miss.
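
As a rough illustration of that idea (a toy sketch with made-up feature vectors, not the authors' proposal), an unsupervised detector can be fit on unlabeled screening exams and asked to flag the ones that do not match the patterns it has learned, with no pathology labels required:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Pretend each row is a vector of texture/density features from one exam.
typical_exams = rng.normal(loc=0.0, scale=1.0, size=(10_000, 8))
atypical_exams = rng.normal(loc=3.0, scale=1.0, size=(20, 8))

# Learn what "typical" exams look like from unlabeled data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(typical_exams)

# Exams scored as outliers would be surfaced to the radiologist for review.
flags = detector.predict(atypical_exams)  # -1 = anomalous, 1 = typical
print(f"Flagged {np.sum(flags == -1)} of {len(atypical_exams)} atypical exams")

Whether a real CAD 2.0 would use this particular algorithm is an open question; the point is simply that the model's notion of "unusual" does not have to be anchored to what a radiologist already marks.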

Something else the AI solutions of the future have going for them is improved processing power. The late 1990s and early 2000s weren't that long ago, but computing power has improved dramatically since then.

“As computer power doubles in short periods of time, CAD 2.0 could be applied to not just one view of the same image, but also into different views, prior images, and even nonimaging data, such as pathology images,” Kohli and Jha wrote.
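
At the feature level, that kind of fusion could look something like the sketch below (every array, label and feature count is synthetic, and simple concatenation into a linear model is an assumption for illustration, not the authors' architecture):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000

cc_view    = rng.normal(size=(n, 16))  # features from the craniocaudal view
mlo_view   = rng.normal(size=(n, 16))  # features from the mediolateral oblique view
prior_exam = rng.normal(size=(n, 16))  # features from the prior study
clinical   = rng.normal(size=(n, 4))   # nonimaging data, e.g. age or risk score

# One combined feature vector per exam, instead of one view at a time.
combined = np.hstack([cc_view, mlo_view, prior_exam, clinical])

# Synthetic outcome that depends on more than one input, so a single-view
# model misses part of the signal.
labels = ((cc_view[:, 0] + mlo_view[:, 0] + clinical[:, 0]) > 1.5).astype(int)

fused = LogisticRegression(max_iter=1000).fit(combined, labels)
single = LogisticRegression(max_iter=1000).fit(cc_view, labels)
print(f"Fused-model accuracy:       {fused.score(combined, labels):.3f}")
print(f"Single-view model accuracy: {single.score(cc_view, labels):.3f}")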

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
