Speed and Efficiency Drive CCTA Technique
When it comes to the interpretation of coronary CT angiography (CCTA) studies, the bywords are speed and efficiency. There are, however, multiple routes to those ends, as the differing approaches of three veteran CCTA practitioners demonstrate. All three radiologists interviewed by Radiology Business Journal use, to a greater or lesser extent, advanced visualization tools to interrogate the images, and all report that radiology plays a key role in the acquisition and interpretation of the studies at their respective institutions.

Old-school Approach

The department of radiology at the Medical University of South Carolina, Charleston, acquires CCTA studies using two dual-source CT scanners, one in the emergency department and one in the newly opened Heart and Vascular Center. The university takes a collaborative approach to the interpretation of the images: radiology partners with cardiologists on collaborative readings, but radiology generates the reports.

Joseph Schoepf, MD, has been reading CCTA for 10 years, starting with an EBCT scanner and the less-than-ideal studies generated by a four-slice scanner and rudimentary software tools. “I have gone through a very long process of interpreting these and, for the longest time, we didn’t have any fancy tools available,” he says. “I learned it the hard way, reading from my transverse sections—which, in the initial years, was all that we had—and I am still doing that. I stick with my transverse sections for as long as I can, and only if I notice that I am getting myself in trouble do I turn to the more sophisticated tools.”

Schoepf’s advice to those who want to begin interpreting CCTA is to develop a system for approaching these cases in the most time-efficient way before starting to explore the advanced visualization tools. “I hear terrible numbers out there,” he reports. “Some people take an hour for the interpretation of a CCTA study; like that, you can never be effective.”

Schoepf uses a three-step approach to CCTA, depending on the complexity of the case, but he always begins with the transverse sections for a sense of the overall quality of the dataset. He says, “I become familiar with potential artifacts that may pose diagnostic pitfalls; I can detect incidental pathology in the lung fields or in the mediastinum, which would be almost impossible to diagnose with more advanced visualization software; and I tune my eye in to those areas with atherosclerotic plaque. If I am dealing with a case that has no (or almost no) atherosclerotic plaque, I stick with my transverse sections.”

With cases that are more complex in terms of atherosclerotic disease burden, Schoepf uses the multiplanar reformats and the coronal and sagittal reformatted images created by default by technologists for every patient, which allow a better appreciation of stenosis severity than the transverse sections alone. The presence of coronary stents, or of heavy calcifications with extensive blooming artifacts, sends Schoepf to more advanced visualization tools. “The most important ones, and the ones that are my first choice in my clinical practice, are automatically generated curved planar reformats,” Schoepf notes.

“That is my three-step approach, depending on the complexity of the case and interpretation, and I use that because I want to be time efficient,” he explains. “I don’t want to waste the time it takes to get a case loaded and processed on a workstation if I have a simple case that can be solved based on the transverse sections. Time efficiency is the major goal of that approach.”

Viewing the transverse sections and the curved multiplanar reformats is fairly straightforward, Schoepf says, but he adds a word of caution about maximum-intensity projections (MIPs). “The use of MIPs is still recommended by some; however, I think they have the inherent danger that you will miss pathology if you choose inappropriate visualization settings, such as the thickness of your maximum-intensity sliding thin slabs, for instance,” Schoepf warns. “I would discourage the use of MIPs and stick to transverse sections, multiplanar reformats, and automatically generated curved multiplanar reformats.”
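Schoepf’s slab-thickness concern is easy to see in code. The following is a minimal sketch, not any vendor’s implementation, of a sliding thin-slab MIP over a CT volume held in a NumPy array; because each output pixel is the voxelwise maximum along the slab, a thicker slab gives bright structures more opportunity to mask subtler pathology on the same ray.

```python
import numpy as np

def sliding_thin_slab_mip(volume: np.ndarray, slab_thickness: int) -> np.ndarray:
    """Sliding thin-slab maximum-intensity projection along the slice axis.

    volume: 3D CT array (slices, rows, columns). Each output slice is the
    voxelwise maximum over a slab of `slab_thickness` slices centered on it.
    """
    half = slab_thickness // 2
    out = np.empty_like(volume)
    for i in range(volume.shape[0]):
        lo = max(0, i - half)
        hi = min(volume.shape[0], i + half + 1)
        out[i] = volume[lo:hi].max(axis=0)
    return out

# The thicker the slab, the more likely a bright structure (calcium, a stent,
# an adjacent contrast-filled vessel) overwrites subtler findings along the
# same ray -- the pitfall Schoepf warns about.
```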
Preprocessing Proponent

The CCTA program at Jewish Hospital in Louisville is currently radiology driven, from the administration of beta-blockers to the interpretation and monitoring of the studies. Half of the referrals come from cardiology and the rest come from primary care. For Falk and the radiologists at Jewish Hospital, the goal is to be as efficient and as rapid as possible, so that they are not routinely spending 20 or 30 minutes reading cases. Preprocessing by technologists is a key strategy. “Obviously, some cases take that long because they are very complicated,” he notes. “I am a firm believer that technologists should preprocess the images prior to the radiologist sitting down to interpret them, and that the radiologists should be given a protocol-based set of images.”

By transferring the bulk of the preprocessing tasks to the technologist, the time to interpretation of a straightforward study is reduced to an average of three to six minutes for the radiologist, while the technologist spends 15 to 40 minutes, depending on the study’s complexity. “What really takes time for the technologist is if the heartbeat is irregular or rapid, and we are having to take different phases of the cardiac cycle to patch together a cardiac study. That can turn a 15-minute preprocessing time into 45 minutes,” Falk says.

Likewise, such a case can take 20 to 30 minutes for a radiologist to read. “If I were having to trace out those vessels, it could take me 45 minutes to an hour to read a difficult case, because you are picking all of those things out, especially a case with multiple bypasses and pathology above and beyond,” Falk explains. “It’s just nicer if all of those arteries are laid out for you and labeled. You really have to create a permanent medical record of what you’ve done, and that includes labeling arteries—all of those relatively mundane tasks that you’d really rather not be doing.”

The set of images that Falk prefers to receive from the technologist includes a series of thicker-slab sliding MIPs through each of the coronary arteries, curved planar reformats spun every 10° to 15° through 360°, cross-sectional views through any obvious plaques on the coronary arteries, and 3D surface views of the heart, as well as the raw data. “I like to have all of those things on the various coronary arteries, because you never know what view is going to be the view that helps you,” Falk says. “In some cases, it’s the thick MIP; in other cases, it’s the curved planar reformat, depending on what the pathology is. If a case is normal, you are really going to get that off of the axial slices and off of the curved planar reformats. You may not even need to look at the lumens, for example. Those, to me, are question-answering views.”
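The spun curved planar reformats Falk describes are the same vessel reformatted repeatedly at successive rotation angles about its centerline. As a rough illustration only, assuming the centerline has already been extracted, a straightened variant can be sampled as below; the function name and the simplified geometry are this sketch’s, not any product’s.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straightened_cpr(volume, centerline, angle_deg, half_width=20):
    """Sample a straightened curved planar reformat along a vessel centerline.

    volume: 3D array indexed (z, y, x); centerline: (N, 3) voxel coordinates
    along the vessel; angle_deg: rotation of the sampling line about the
    local tangent. Returns an (N, 2*half_width + 1) image.
    """
    centerline = np.asarray(centerline, dtype=float)
    tangents = np.gradient(centerline, axis=0)          # local direction of travel
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    theta = np.deg2rad(angle_deg)
    up = np.array([0.0, 0.0, 1.0])
    offsets = np.arange(-half_width, half_width + 1)
    rows = []
    for point, tangent in zip(centerline, tangents):
        # Two perpendiculars to the tangent span the cross-sectional plane.
        n1 = np.cross(tangent, up)
        if np.linalg.norm(n1) < 1e-6:                   # tangent parallel to 'up'
            n1 = np.cross(tangent, np.array([0.0, 1.0, 0.0]))
        n1 /= np.linalg.norm(n1)
        n2 = np.cross(tangent, n1)
        direction = np.cos(theta) * n1 + np.sin(theta) * n2
        samples = point[None, :] + offsets[:, None] * direction[None, :]
        rows.append(map_coordinates(volume, samples.T, order=1))
    return np.stack(rows)

# One view per 15 degrees, as in the protocol Falk describes:
# views = [straightened_cpr(vol, centerline, a) for a in range(0, 360, 15)]
```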
Falk says that the most important tools, among all of those available for postprocessing cardiac images, are server technology and thin-client technology. “Particularly in cardiac, there is a need for actually getting your hands on the data yourself, and being able to jump in and answer additional questions based on what you are seeing,” he says. “More important, to me, than the features of the interpretation software is having a robust thin-client environment where I can easily go from reading the case to manipulating and processing the case, so I don’t have to get up and walk across the room to a workstation. If there is a segment of the left anterior descending artery that I need to see a little bit differently from what the technologist did, I can do that myself with a minimal interruption of the workflow, get the answer that I need, and move on down the road.”

The software itself needs to give radiologists the ability to trace vessels quickly and accurately, Falk says, preferably with the ability to trace off of any of the different projections. “For example, if I can deposit one point on a coronal and another point on an axial, that helps me, because sometimes it’s easier to deposit your point right in the center of the vessel, and you can get a more accurate reproduction if you can go from one projection to another,” he explains.

The Wish List

Wake Forest University operates a joint program between radiology and cardiology, but radiology currently performs and interprets approximately 80% of the cardiac CT studies and overreads the noncardiac structures in the remaining 20%. Radiologists also perform all postprocessing, without assistance from technologists.

For patients with heart rates under 65 beats per minute, Entrikin and his Wake Forest colleagues have moved to axial step-and-shoot imaging, which acquires image data through a single phase of the cardiac cycle. “For those examinations, the most important thing is having a very quick, accurate, and simple way to extract the coronary arteries to perform a stenosis review,” Entrikin says. He believes that the software applications for this functionality are very similar across vendors, with differences arising in their speed, reliability, and availability. To keep radiologists from being tied to one workstation or one area of the hospital, a thin-client application is very useful, Entrikin says.

As for other functions of the software, Entrikin sees big differences among vendors when it comes to evaluating wall motion and calculating ejection fractions. “Some vendors have this down to a science in terms of ease of use, while other vendors make the process unnecessarily cumbersome,” he notes.
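The arithmetic behind an ejection fraction is simple; what separates vendors is how reliably they segment the left ventricle at end-diastole and end-systole to produce the two volumes. A minimal sketch of the final calculation:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left-ventricular ejection fraction (%) from end-diastolic volume (EDV)
    and end-systolic volume (ESV), both in milliliters."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# For example, an EDV of 120 mL and an ESV of 50 mL:
print(f"{ejection_fraction(120, 50):.1f}%")  # 58.3%
```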
Overall, Entrikin believes, the software applications of many vendors do not deal well with difficult cases in which, perhaps, the gating was not optimal due to heart-rate variability or a more rapid heart rate. “For those of us who don’t have dual-source scanners, those issues come up on a somewhat regular basis, and whatever software we are using should allow us an easy method for review of challenging cases, and a lot of them don’t,” Entrikin states. “A lot of them are optimized for the perfect case situations, and if it isn’t a perfect case, the software is somewhat user unfriendly.”

Entrikin says that he uses the automated functions provided by advanced visualization less and less. “If it’s an imperfect case, the automation will oftentimes spit out garbage data, garbage information, and garbage postprocessing,” he explains. “When that occurs, in order to correct that garbage, it takes far longer than just doing it myself in a simple, old-fashioned, nonautomated fashion from the beginning.”

At the top of Entrikin’s wish list for advanced visualization software is a reliable method of extracting the coronary arteries, with the option to verify that the software did so correctly in a very quick, point-and-click manner. Add to that a feature that calculates the ejection fraction reliably and, again, allows quick verification by the radiologist.

“The other thing that I think would be useful is an automated tool that will produce moving images for us to put onto PACS,” Entrikin adds. “When a cardiologist orders an echocardiogram, many moving images of the heart are produced, including three long-axis views and some short-axis views of the left ventricle. There is simply no reason why a postprocessing application should not be able to generate the standard long- and short-axis cine image sequences automatically, allow for user verification that they are correct, and then send all of these series to PACS automatically upon verification.”

While he notes that he can manually generate selected image sets and transfer them individually to PACS, Entrikin would like to be able to import several user-approved image sets at a time into PACS directly from the advanced visualization application. “If I make an assessment about a wall-motion abnormality, it would be nice to have images on PACS for everyone to see, to allow them to make their own conclusions,” he explains. “Most PACS are currently not very friendly for moving images; however, even if they can display cine loops easily and effectively, the PACS is never going to get the images if the postprocessing software doesn’t produce them. I can do this manually, but it eats up a lot of my time,” he says.
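In outline, the automation Entrikin describes is a batch loop over standard views and cardiac phases followed by a single verified export. The sketch below is only an illustration of that shape; the two helpers passed in (reformat_view and export_to_pacs) are hypothetical placeholders standing in for whatever an advanced visualization application actually provides.

```python
# Hypothetical batch cine generation, per Entrikin's wish list.
STANDARD_VIEWS = ["2-chamber", "3-chamber", "4-chamber", "short-axis"]
PHASES = range(0, 100, 10)  # % of the R-R interval reconstructed per frame

def build_and_send_cine_loops(study, reformat_view, export_to_pacs):
    """Collect one frame per cardiac phase for each standard view, then ship
    every approved loop to PACS in one step rather than one series at a time."""
    loops = {
        view: [reformat_view(study, view, phase) for phase in PHASES]
        for view in STANDARD_VIEWS
    }
    # In Entrikin's wish list, the radiologist verifies the loops here,
    # then everything below runs automatically upon approval.
    for view, frames in loops.items():
        export_to_pacs(frames, series_description=f"Cine {view}")
```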