Thought leaders within radiology largely agree that the specialty is in a unique position to help lead the implementation of artificial intelligence (AI) into clinical practice. But how, exactly, does that happen? Two representatives from the American College of Radiology (ACR) Data Science Institute (DSI) executive team wrote about that very topic in a new commentary published by the Journal of the American College of Radiology.
“For the most part, individual AI software developers are currently working with individual radiologists at single institutions to create AI algorithms that are focused on targeted interpretive needs,” wrote authors Bibb Allen, MD, with the department of radiology at Grandview Medical Center in Birmingham, Alabama, and Keith Dreyer, DO, PhD, with the department of radiology at Massachusetts General Hospital in Boston. “These developers are using a single institution’s prior imaging data for training and testing the algorithms, and the algorithm output is specifically tailored to that site’s perspective of the clinical workflow. It remains to be seen whether the effectiveness of these algorithms will be generalizable to widespread clinical practices and how they will be integrated into clinical workflows across a variety of practice settings.”
Allen and Dreyer went on to detail some of the many challenges facing today’s specialists as they work to advance AI into clinical practice. Defining and prioritizing use cases, for instance, is perhaps the biggest challenge of all. Ensuring that algorithms are effective across a wide variety of patients, practice types and workflows, and monitoring that effectiveness over time, is another significant mountain radiology must climb to truly lead healthcare into this brave new world.
The ACR DSI, Allen and Dreyer explained, was formed with these issues in mind and “has developed a foundational framework for AI to improve clinical care and ensure these algorithms can be safely deployed and monitored in clinical practices on a large scale.”
Developing AI use cases is the key to moving forward
“This AI ecosystem for medical imaging and the other radiological professions outlines a standardized pathway for developers to create and mature algorithms from idea to widespread clinical use,” the authors wrote. “We believe developing AI use cases that follow this path will be the best way radiological professionals can influence the development of AI models that will have the greatest benefit to our professions and our patients.”
One of the ACR DSI’s primary goals is assisting with the creation and application of use cases that apply “AI to radiology and the radiological sciences.” These use cases typically begin in an ACR DSI Data Science Subspecialty Panel, where real-world clinical problems are identified. Then “data engineers, statisticians and the ACR DSI staff” go to work converting those problems into “machine readable frameworks” so that the algorithms themselves can be developed and, eventually, validated.
Allen and Dreyer emphasized the importance of workflow in this process. An algorithm that doesn’t integrate into a practice’s reporting software or PACS defeats the purpose of developing it in the first place.
“Ensuring that algorithms can be integrated into radiology professionals’ clinical workflow is of paramount importance to the DSI Data Science Subspecialty Panels because if the AI tool is not readily available to the end users in our workflow, adoption in clinical practice will be less likely to occur,” the authors wrote.
Another key step is verifying that the algorithms work as intended. Use cases developed by the ACR DSI address this from the very beginning, Allen and Dreyer explained, by building a feedback mechanism into the algorithm itself. Developers, radiology practices and even regulatory agencies can then see the algorithm’s real-world effectiveness for themselves.
“Using this type of real-world data has broad appeal among many stakeholders including the FDA, which has selected ACR DSI Lung Cancer Screening Use Case as a demonstration project for a National Evaluation System for Health Technology for the validation of algorithms and monitoring algorithm performance in clinical practice,” the authors wrote. “With this feedback in hand, developers will be able to continuously improve the effectiveness of the AI algorithms.”