AI in radiology: 3 key ethical issues the specialty must address

As the influence of artificial intelligence (AI) continues to grow in radiology, the specialty must come together to re-examine its ethics and code of behavior, according to a new commentary published in the Journal of the American College of Radiology.

“This code must be continually reassessed as these systems become more complex and autonomous,” wrote authors Marc Kohli, MD, of the University of California, San Francisco, and Raym Geis, MD, of National Jewish Health in Fort Collins, Colorado. “Radiologists and developers of autonomous and intelligent systems have a duty to follow rules and principles that result in the best outcomes for patients.”

In their analysis, Kohli and Geis separated the ethical issues surrounding AI into three categories:

1. Data ethics

The ethics of data, the authors explained, includes everything from informed consent to the ownership of patient data. For instance, radiology increasingly needs to track patient data over time, but how can that happen when the patient’s identity has been scrubbed from the data?

“Should there be new ways to balance patient privacy and the opportunity to develop better health care for all?” the authors asked. “We need to consider the shifting balance between maintaining personal information privacy and advancing the frontier of intelligent machines. One possibility is for each patient to sign a data use agreement with any third-party entity that contributes to his or her digital health record. This could document data quality, security, and use for both data contributors and data users.”

The specialty must also work to address the ownership of intellectual property generated by analyzing patient data over time. If researchers from one academic medical center use a data set built by a different group of researchers to create a new algorithm, who owns that algorithm? The team that collected the data? The team that developed the algorithm?
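On the de-identification question raised above, one commonly used approach (our illustration, not something proposed in the commentary) is salted-hash pseudonymization: the data custodian derives a stable but non-reversible code for each patient, so longitudinal studies can still be linked after the original identifier is removed. A minimal sketch, assuming a hypothetical site-held secret:

```python
# Illustrative sketch: keyed-hash pseudonymization so a patient's studies
# can be linked over time without storing the original identifier.
import hashlib
import hmac

# Hypothetical secret held by the data custodian, never shared with
# downstream researchers or third parties.
SITE_SECRET = b"replace-with-a-securely-stored-secret"

def pseudonym(patient_id: str) -> str:
    """Derive a stable, non-reversible pseudonym for a patient identifier."""
    return hmac.new(SITE_SECRET, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The same patient yields the same pseudonym across studies, so
# longitudinal records can still be joined after de-identification.
study_2021 = {"patient": pseudonym("MRN-0042"), "exam": "chest CT", "year": 2021}
study_2024 = {"patient": pseudonym("MRN-0042"), "exam": "chest CT", "year": 2024}
assert study_2021["patient"] == study_2024["patient"]
```

Even so, such linkage keys raise exactly the governance questions the authors describe: who holds the secret, and under what data use agreement.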

2. Algorithm ethics

AI algorithms present radiologists with a whole new set of ethical concerns to sort through. One of the biggest considerations, Kohli and Geis noted, is the overall safety of an algorithm. Will an algorithm remain safe and secure for its entire lifetime? Who can verify that safety?

Two other important aspects of algorithm ethics are value alignment—will the algorithm truly work to improve patient outcomes?—and transparency.

“In many machine learning models, although it is easy to see what the algorithm is doing, it may not be easy to understand why it comes to a conclusion,” the authors wrote. “Just as root-cause analysis is a critical tool for patient safety, if an artificial intelligence system causes harm, we need to understand why. New procedures for analysis will need to be developed, with the black-box nature of algorithms taken into account.”
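To give a sense of what probing a black-box model can look like in practice, the sketch below uses occlusion sensitivity, a simple technique for asking which image regions drive a classifier’s output; it is an illustration of the general idea, not a method from the commentary, and the "model" here is a stand-in rather than a trained network:

```python
# Illustrative sketch: occlusion sensitivity for probing a black-box scorer.
import numpy as np

def model_score(image: np.ndarray) -> float:
    # Stand-in scorer: responds to the bright central region of the image.
    return float(image[8:16, 8:16].mean())

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Score drop when each patch-sized region is blanked out."""
    base = model_score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - model_score(occluded)
    return heat  # larger values mark regions the prediction depends on

image = np.zeros((24, 24))
image[8:16, 8:16] = 1.0  # a bright "finding"
print(occlusion_map(image).round(2))
```

Techniques like this offer partial visibility into why a model reached a conclusion, but they fall well short of the root-cause analysis the authors argue will be needed when an AI system causes harm.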

3. Practice ethics

As much as AI evolves in the coming years, it still takes people and organizations to research, design, implement and maintain these advanced algorithms. Kohli and Geis explained that policies must be in place at the practice level that both promote progress and protect the rights of every individual affected by AI. They also noted that imaging leaders should learn from mistakes made by the social media platform Facebook.

“It will be challenging to anticipate how these rapidly evolving systems may go wrong or could be abused,” the authors wrote. “As the recent Facebook saga demonstrates, inadequate policies and unintended sharing of secrets are real-world concerns.”

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
