Ricardo Otazo, PhD, and Daniel Sodickson, MD, PhD, of the NYU Langone Medical Center Department of Radiology, were recently awarded a grant of approximately $3 million by the National Institutes of Health (NIH) to work toward reducing the radiation dose of CT scans by as much as 90 percent.
In their research, Otazo, assistant professor of radiology, and Sodickson, director of the school’s Center for Biomedical Imaging, plan to use advanced mathematical algorithms together with a collimator that blocks portions of the X-ray beam before it reaches the patient, reducing the radiation the patient receives.
Otazo and Sodickson both spoke with Radiology Business over the phone to provide some insight into their research.
Congratulations on receiving approximately $3 million from the NIH. Can you please touch on the overall importance of reducing radiation dose?
Ricardo Otazo: Radiation dose is probably the dominant issue in CT today. The dose from an individual scan may not be that high, but the accumulated dose over consecutive scans might elevate the risk of developing cancer. This is creating a negative perception among CT patients, and it is especially concerning when you think about children.
Daniel Sodickson: I see this as a combined question of best care and of perception. The public has been scared of radiation exposure, and there has been a lot of press coverage about these kinds of concerns. It is not uncommon for a patient to wonder, if they get a CT, whether there is a risk associated with it. A number of studies show that the risk becomes appreciable largely for people getting very large numbers of scans over time, as Ricardo was referring to. For example, there is work by my brother, Aaron Sodickson, MD, PhD—who co-developed the idea for this work with us and heads our Brigham and Women's Hospital subcontract—indicating that it is these extensive long-term exposures that matter most. Nevertheless, the public perception is important to deal with. And, at the same time, making every CT scan have the lowest possible dose is a goal that has been shared by physicians and companies for a number of years now, and it really has become one of the central directions of CT for the last five years or so.
Can you tell me a little bit about this specific research? How does use of the collimator result in such a potentially significant drop in dose?
RO: We would like to share this idea, and then hopefully everyone will be working on this in a few years. In this project, we are proposing an alternative approach to reducing radiation dose. What we are trying to do is exploit the fact that CT images are compressible, and use that to reduce the number of data points we acquire. Conventional wisdom is that we need at least one data point for each pixel, but we know that images in general are compressible. That raises a question: If we can reduce the size of the images, why acquire a large data set when we are going to throw away 90 percent of the data at the end?
DS: It’s like JPEG compression on your cell phone. When you send something on your phone, it asks if you want to send it small, medium or large. The fact that you can send it small means there is lots of room to throw something out and still have the relevant information.
RO: That’s right. The data acquisition process is redundant. This idea is known as compressed sensing, and the literature has shown really good results with other modalities such as MRI. Now we want to translate all of these ideas from MRI to CT. The question with CT is: How do we reduce the number of samples in practice?
DS: What we’re doing with this research is coupling two things: a recent revolution in mathematics, compressed sensing, and the idea of this collimator. The bottom line is: We’re blocking most of the X-rays before they get to the patient. We’re taking a substantial fraction and having them just plunge into a tungsten block, so we’re not even letting them get there. In normal CT, that would eliminate a lot of your information, but because we blocked it in this sort of quasi-random way, we can use this compressed sensing technique to regenerate all of that information. It turns out, using this new mathematics, we can get back the full image just as if we were, for example, looking at a JPEG image. We’ve just pre-compressed it.
It’s this marriage of the math with what is essentially an X-ray blocker that moves around in such a way that we know we can get all of the information back.
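The undersample-then-reconstruct idea described above can be sketched numerically. The toy script below is an illustration of generic compressed sensing, not the researchers' actual CT pipeline; the signal size, sampling ratio, and solver settings are all assumptions. It measures a sparse signal with far fewer random measurements than samples (the analogue of quasi-randomly blocking X-rays) and recovers it with iterative soft-thresholding (ISTA), a standard compressed-sensing solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: a length-200 signal with only 5 nonzero entries,
# observed through just 60 random measurements (30 percent "dose").
n, m, k = 200, 60, 5
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.normal(0.0, 1.0, k)

A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))  # random sensing matrix
y = A @ x_true                                  # undersampled measurements

# ISTA: minimize ||A x - y||^2 + lam * ||x||_1 by gradient step + shrinkage.
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(2000):
    z = x - (A.T @ (A @ x - y)) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.4f}")
```

Despite discarding 70 percent of the measurements a pixel-per-sample rule would demand, the sparsity prior lets the solver recover the signal almost exactly, which is the same leverage the researchers aim to get from compressible CT images.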
Where do you see this technology in another five or 10 years? Is there a timeline on when most facilities could be using this technology?
DS: Success, in our mind, would be that this works well enough that it goes into many new CT scanners that go out into the field. We’ve already started thinking about how we apply such techniques to emerging CT hardware; we think it would be extremely valuable for that.
Also, if you want to go even further than the five- to 10-year horizon, we see this as one of a whole menu of techniques aimed at getting more information out of less. If you think about it, what we’re saying is, we have been taking more data than we need for our average exams. And if that’s the case, then couldn’t we take the same amount of data and get more information out of it?
We’re doing a lot of that in the MRI space using techniques like compressed sensing, but in CT, too, we’re looking to figure out how we actually encode lots of information all at once. From relatively short data acquisitions—which means low dose in CT and short scan times in MRI—how do we get everything a physician or patient may need using techniques of this sort?
This text was edited for clarity and space.