Q&A: Tanveer Syeda-Mahmood on IBM's Avicenna software

Avicenna, a software system currently in development at IBM, will be able to “read” imaging scans, identify certain problems, and suggest diagnoses using both the patient’s personal data and data from similar cases.

The software is still approximately five years away from being ready for daily use by radiologists, but it provides the industry with yet another example of the future of modern healthcare technology.

Tanveer Syeda-Mahmood, PhD, is a researcher at the IBM Almaden Research Center and has been hard at work on Avicenna for several years now. She spoke to RadiologyBusiness.com on the phone about Avicenna, its capabilities, and what radiologists should expect once the technology is available to the public.

RadiologyBusiness.com: Can you tell us a little bit about how this project first began?

Tanveer Syeda-Mahmood, PhD: Back in 1997, my father was misdiagnosed with an ischemic stroke when he actually had a hemorrhagic stroke, and he was given a blood thinner, which perhaps led to his coma. That led me to thinking, “Is there anything machines could do to alleviate that situation by providing better tools for decision making at the point of care?”

That led us to propose a project back in 2005, when electronic health records were just emerging and becoming popular. The idea was that electronic health records contained a great deal of patient information that had already been diagnosed by other clinicians. If we could match the current patient’s data against those patients, we could leverage the opinions of the physicians who saw similar patients and learn what happened to them from the diagnosis, treatment, and outcome perspectives.
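At its core, the similar-patient matching she describes is case-based retrieval: represent each prior patient as a feature vector, rank prior cases by similarity to the current patient, and surface their documented diagnoses. A minimal sketch in Python follows; every feature, record, and number here is purely hypothetical and illustrative, and nothing in it reflects IBM's actual data or algorithms.

```python
from math import sqrt

# Hypothetical, simplified EHR cases: each is a feature vector
# (e.g., age, systolic BP, a lab value) plus the documented diagnosis.
# These values are invented for illustration only.
ehr_cases = [
    ((67, 150, 1.2), "hemorrhagic stroke"),
    ((70, 145, 1.1), "hemorrhagic stroke"),
    ((55, 120, 0.8), "ischemic stroke"),
    ((58, 118, 0.9), "ischemic stroke"),
]

def euclidean(a, b):
    """Distance between two patient feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_patient_diagnoses(current, cases, k=3):
    """Return the diagnoses of the k prior patients most similar to `current`."""
    ranked = sorted(cases, key=lambda case: euclidean(current, case[0]))
    return [diagnosis for _, diagnosis in ranked[:k]]

# A new patient whose features resemble the first two prior cases:
print(similar_patient_diagnoses((68, 148, 1.15), ehr_cases, k=2))
# → ['hemorrhagic stroke', 'hemorrhagic stroke']
```

A production system would of course use far richer multimodal features (images, reports, labs, medications) and a learned similarity measure rather than a hand-picked Euclidean distance, but the retrieval principle is the same.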

RB: How is Avicenna different from other similar products that may be hitting the market in the near future?

You might see companies that do work in machine learning, and you might see a lot of startups that are trying to use deep learning frameworks to separate normal from abnormal images. Our understanding is that many of these networks are trained on general image collections, and using those data sets for medical imaging is not a good idea. To do this right, you should at least use medical images to train such networks.

We do use deep learning and many other machine-learning techniques, but we have adapted them to work with the kind of data sets we work with. That’s on the technology side.

Three perspectives make us different: (a) the multimodal analytics we bring to the table, which are much more advanced and perform unattended DICOM processing from raw image to disease characterization in a limited set of specialties; (b) the large body of clinical knowledge we have assembled through semi-automatic textual analytics; and (c) a reasoning engine guided by that knowledge and by patient statistics.

Having a system that does all of these things together is something more than conventional computer-aided diagnosis (CAD).

RB: IBM acquired Merge Healthcare for $1 billion last year, adding billions of medical images to its already-extensive image library. What kind of impact will all of that additional data have on Avicenna?

The approach we have taken is that imaging alone is not sufficient. You need to incorporate evidence from textual reports, structured data, lab results, medications, you name it. Basically, we want to be able to look at a patient holistically.

Imaging is going to be a big part of that, and obviously, for algorithms that are machine-learning based, the larger the training data set, the better they will perform.

RB: What is IBM’s end goal for how this technology will be used by radiologists? 

There are many cases one can think of where assisting alone is sufficient. To give you an example, in an ER situation, if the machine can do the analysis and send it to the relevant radiologist, you save a lot of time and possibly prevent a misdiagnosis.

You can use this technology at any level of comfort. As its demonstrated capabilities reach higher and higher proficiency levels, you can incorporate it in your workflow in different forms.

You can use it like a consultant ... or this could help you in cases where you need a second opinion if it is outside your domain of expertise where it would be beneficial to know what other clinicians would do. So there are many ways of using this technology to assist you, to speed up your workflow, to take less time to diagnose, or to improve accuracy. 

RB: Is there any concern about what would happen if advanced technology such as Avicenna was blamed as part of a malpractice claim? 

I think it’s important to distinguish that this is not a CAD tool. What we are trying to bring in is a new generation of multimodal, informatics-guided systems that goes beyond EMR and PACS and offers a new layer that ties together the data from those systems.

How you use these insights will be up to the clinicians. It is intended to be used as an assistant that augments your thinking, but the decision making is ultimately up to the physician.

RB: Is there anything else you would like to share about this software?

The other thing I would leave you with is that we want to approach this technology with the right attitude and with cooperation from the radiology and cardiology communities. We are there to assist them and give them tools, just like any other tools they are using, so they can incorporate them into their workflows.

The intention is not to replace [physicians], but to augment them, making their workflow easier and reducing the time to diagnosis to positively impact patient care.

This text was edited for clarity and space. 

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
