DICOM or Nothing: The Case for Informatics Standards in Quantitative Imaging

As radiology enters the era of quantitative imaging, it is well advised to carry with it an old friend, the DICOM standard, according to David A. Clunie, PhD, CTO of CoreLab Partners. He lays out his case in “(Informatics) Standards for Quantitative Imaging,” which he presented on November 28, 2012, at the annual meeting of the RSNA in Chicago, Illinois.

Clunie is hardly a disinterested observer. His informatics credentials include serving as editor of the DICOM standard, as cochair of the Integrating the Healthcare Enterprise Radiology Technical Committee, and as a member of the RSNA’s Quantitative Imaging Biomarkers Alliance. Although he works primarily in research, his message has serious implications for clinical practice and commercial product development: Standardization is no less important in quantitative imaging than in image acquisition, and it will play a pivotal role in the transition from traditional narrative reporting to quantitative reporting.

The DICOM standard initially focused on standardizing the images that come from imaging modalities, but it has migrated into clinical applications that yield results for cerebral blood flow, regions of interest (to measure size or Hounsfield units), PET standardized-uptake values, and measurements—usually distance, but increasingly, area and volume. “In other words, quantitative imaging is nothing new; it’s just that it is growing in importance and expanding in scope in terms of its range of clinical uses,” Clunie says. “In a way, quantitative imaging (as we think of it now) has a different emphasis than traditional narrative reporting. What is different is a focus on greater rigor in deployment, with respect to quantitative imaging.”

In narrative reporting, you see an image on the screen and compare it with a previous time point visually, after which you dictate your opinion. Quantitative reporting goes beyond mere description; it proceeds to the analysis, measurement, and coding of information that can be reused in downstream systems. “We can use the same standards; we just have a greater need for numbers and codes within the standards and a greater amount of structure in the output,” Clunie says.

Traditionally, he notes, the radiologist looked at an MRI exam of the brain, saw something, and said that there was a hairy mass present. There might have been some context to indicate that it was a glioblastoma, but today, there is more to it: The radiologist uses an automated or semiautomated tool to attempt to find the lesion boundaries, and then the tool reports the volume, which is critical for looking at progression of the lesion over time.

Quantitative imaging depends upon the precision of both the numerical and the categorical information, Clunie emphasizes. In addition to the lesion measurements, the following coded information is desirable: characteristics of the finding, the anatomical location of the finding, a coded indication of why the study was performed, the time of the exam, and a code representing the finding itself. The goal, Clunie explains, is not just a piece of plain text, but a searchable, repeatable, and consistent account of the exam results, using a standard lexicon or lexica that can be reviewed retrospectively by the physician (or prospectively, in the case of a randomized clinical trial).

Using the example of plotting the change in a lesion’s volume over time, either for an individual patient or a group of patients, Clunie emphasizes the importance of being able to extract key data from a patient’s record.
“In order to be able to do this downstream analysis, your report must not be a dead end,” he explains. “The information we are producing should be more than just cutting and pasting or transcribing.” Acknowledging that dictation continues to be the dominant form of reporting, Clunie adds, “There is no reason that dictation cannot be assisted by the quantitative information that has been extracted upstream—perhaps by the modality; perhaps by an analytic tool that has been run before the images got to you; or perhaps by a tool that you use interactively as you dictate, using prepopulated merge fields.”

Beware the Pretty Pictures

Screenshots and DICOM-encapsulated PDFs have gained popularity as more modalities have gone quantitative, Clunie notes, and these can, indeed, save attractive tables and graphics, either from the modality or from the reporting application. The problem with these pretty pictures is that they are the proverbial dead end to which Clunie refers: understandable by a human, but not by a program.

“It is much better to be able to produce a structured output from your work that includes the numbers and codes—because you can use that for comparison next time, it is searchable and minable, and it is also the basis for quality-improvement measurements. As the government becomes more interested in evaluating your performance, the ability to provide quality measurements in an automated manner, rather than having to run around and build up your submission manually, becomes important,” he says.

The answer, Clunie says, is the adoption of informatics standards that support the infrastructure to achieve this objective. He says, “When you make a decision about something or produce a report, it needs to be stored, reused, and accessible for medicolegal reasons—but more important, for patient care.” Given the example of change over time in the individual patient, it is not only possible but practical to interpret an image, encode the findings, and reuse the quantitative inputs and outputs from imaging modalities and reporting systems via DICOM—and only DICOM—Clunie says.

The standard, though, has its weaknesses. It all begins with the image, and many radiologists would agree that the problem of reading the output from a given modality has largely been solved in clinical practice through the adoption of DICOM. In some cases, however, the modality or operator does not adequately populate fields for critical attributes that are needed for quantitative imaging: anatomy, protocol, technique, the physical units to which the pixel values correspond, the amount of contrast used, and the timing of contrast or motion. In a perfect DICOM world, all of this information is automatically entered into the DICOM header; in reality, this sometimes occurs haphazardly.

“This is largely a workflow issue,” Clunie notes. “The modalities have the opportunity to copy much of this from the modality worklist if they want to, but that presupposes that the modality worklist from the RIS also contains the information in the first place.” In other situations, information needs to be entered manually because it is not known prior to image acquisition. There also must be a place on the screen to enter the information (and store it in the database), as well as a modality implementation that can copy it to the DICOM header. Sometimes, the standard lags behind innovation: “There is no question that as novel imaging techniques emerge, it takes a while before new attributes are added to the standard,” Clunie says.
“Things such as helical-scan pitch took a while to get added to the standard, despite the many scanners on the market. Diffusion b value was also a late addition to the standard, yet diffusion imaging is now an important part of clinical practice.”

The issue of encoding measurements generated from images, however, is fraught with controversy—and chaos—Clunie says. To build his case for a more disciplined use of the DICOM standard in reporting quantitation, he zeroes in on the DICOM objects developed specifically for quantitative imaging: regions of interest; per-voxel values (or parametric maps); and the intermediate work products that pass between analytic tools (spatial registration, fiducials, and real-world values).

Encoding Regions of Interest

In Clunie’s appraisal, there are far too many ways to encode regions of interest, even within the DICOM standard, but he reserves his greatest ire for proprietary formats. “These are evil and must be stopped,” he says. “The same applies to encoding information only in the database.” Until recently, one vendor’s technology encoded all annotations only in its internal database, and they would not appear on a CD or when migrating data from one PACS to another, Clunie recounts. He also dismisses the DICOM mechanisms for curves and overlays as weak and in need of replacement.

Figure 1. The DICOM structured report (as visualized); image provided by David A. Clunie, PhD.

He grudgingly approves of presentation states: Though not well suited to quantitation, they are sometimes the only choice available in PACS. “Many modern PACS have moved to encoding a sort of screenshot of an annotation in a presentation state,” he explains.

The best choice for quantitation is the DICOM structured report, specifically designed for this purpose, Clunie says—but because it is more work to create and display, it is not always the solution selected by vendors and software developers. DICOM structured reports (Figure 1) have what it takes for effective quantitation of a region of interest: They are organized in a hierarchical structure, and they include codes, numbers, coordinates, and image references, which is why they are widely used in well-established quantitative modalities (such as echocardiography and obstetric ultrasound).

“As such, they are very flexible,” Clunie says, adding that this is a problem. “You need to constrain the flexibility using templates. These templates are designed for a specific use; mammography computer-aided detection, for example, is a very popular template that is widely deployed,” he reports. By constraining the flexibility, Clunie explains, interoperability is enhanced between the applications designed to author or understand that template. Structured reports also are easily transcoded to XML and then mined for data using an XML tool of choice.

In Praise of Structured Reports

Essentially, a DICOM structured report can be viewed as a tree of nodes, with each node being a question and an answer (a name–value pair). “The answer may be a textual answer, it may be a coded answer, or it may be numeric,” he explains. “For example,” he adds, “you can say that necrosis is present or absent, or you can use a different style and say there is a feature that is necrosis. This flexibility presents some challenges downstream, but the commonality in codes used in different implementations allows one to mine this information.”
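The tree Clunie describes can be pictured with a small code sketch. The snippet below, written with the open-source pydicom library, builds two illustrative content items of the kind a structured report contains (a coded numeric measurement and a coded finding site) under a container node. The specific codes and values are examples chosen for illustration; a real, conformant report would follow a DICOM template and carry study references and other required attributes.

```python
# A minimal sketch (not a complete, conformant SR) of the question-and-answer
# tree Clunie describes: each content item pairs a coded concept name with a
# value. Built with the open-source pydicom library; the codes and values
# shown are illustrative examples, not a validated DICOM template.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence


def coded_concept(value, scheme, meaning):
    """Return a single code-sequence item (code value, scheme designator, meaning)."""
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = scheme
    item.CodeMeaning = meaning
    return item


# Numeric content item: the question "Volume" answered with 12.7, in UCUM units of cm3.
volume = Dataset()
volume.RelationshipType = "CONTAINS"
volume.ValueType = "NUM"
volume.ConceptNameCodeSequence = Sequence([coded_concept("118565006", "SCT", "Volume")])
measured = Dataset()
measured.NumericValue = "12.7"
measured.MeasurementUnitsCodeSequence = Sequence(
    [coded_concept("cm3", "UCUM", "cubic centimeter")]
)
volume.MeasuredValueSequence = Sequence([measured])

# Coded content item: the question "Finding site" answered with "Brain".
site = Dataset()
site.RelationshipType = "CONTAINS"
site.ValueType = "CODE"
site.ConceptNameCodeSequence = Sequence([coded_concept("363698007", "SCT", "Finding site")])
site.ConceptCodeSequence = Sequence([coded_concept("12738006", "SCT", "Brain")])

# Container node holding both answers: one branch of the tree of nodes.
container = Dataset()
container.ValueType = "CONTAINER"
container.ContinuityOfContent = "SEPARATE"
container.ConceptNameCodeSequence = Sequence(
    [coded_concept("126000", "DCM", "Imaging Measurement Report")]
)
container.ContentSequence = Sequence([volume, site])
```

Because every answer carries a code and, where applicable, units, downstream software can find and compare the same measurement across reports without parsing free text.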

Figure 2. In quantitative imaging, the analysis workstation uses DICOM standardized objects, stored and distributed via PACS, to permit future comparison; image provided by David A. Clunie, PhD.

The template is a table that defines the questions and their value sets (the range of possible answers for any situation). The user might be called on to choose from a list of values manually, or the image processing might automatically populate those fields. “You may have business rules that allow you to define codes to put into the structured report—for example, that a lesion is too small to measure because it is less than some threshold or the limiting resolution of the technique you are using,” Clunie adds.

While it is not necessary to know the details of what’s inside a DICOM structured report, Clunie allows, you can expect there to be references to the image containing a lesion, together with either the coordinates of the lesion on the image or a segmentation that represents the lesion. Because many tools now produce segmentations rather than coordinates, it’s also important to support DICOM segmentation objects, which encode regions of interest as arrays of voxels (like images) rather than as contours.

Also important are radiotherapy structure sets, which are widely used outside radiotherapy, Clunie says. DICOM radiotherapy structure set objects use 3D coordinates of regions to treat and/or to spare; these coordinates are patient (rather than image) relative. Segmentations and radiotherapy structure sets are encoded using the standard DICOM binary representation, just as images would be, and they can easily be transcoded to other DICOM objects, such as structured reports or presentation states. The patient-relative 3D coordinates can be mapped back to image-relative 2D coordinates via the source images or a structured report image library.

Parametric Maps

In the PET and functional-MRI worlds, radiologists produce numerical or statistical information (a standardized-uptake value or a z score, for example) in which each voxel represents a quantitative piece of information—not necessarily directly related to the signal that is acquired, but derived from some kind of postprocessing. In a similar way, Clunie says, label maps are used to indicate that a voxel corresponds to a particular piece of tissue or a type of tissue (white matter, gray matter, or a probability of being one or the other), and there might be some overlap. DICOM encodes parametric maps as images and label maps as segmentations, depending on the use case. Such segmentations and images are essentially rasterized voxels in a 3D space.

With per-voxel encoding of numeric or label values, it is extremely important to extract the quantitative information and store it as numeric values—again, not as a pretty picture, Clunie emphasizes. “For human consumption, one typically displays a pretty picture where you’ve used color to represent a continuous range of numeric values and superimposed it over an anatomical structure; these are often saved only as a screenshot,” Clunie says. If you preserve the quantitative value of the result, you then have the ability to vary the displayed values for greater or lesser intensity or transparency. More important, you can quantify a region of interest, such as the standardized-uptake value or z score for an individual voxel or a newly drawn region of interest.
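To make that point concrete, the short sketch below quantifies a region of interest on a parametric map whose voxels hold standardized-uptake values. The array shapes and numbers are invented placeholders for what would be decoded from DICOM parametric-map and segmentation objects; the point is that, because the map stores real numeric values rather than a screenshot, region statistics reduce to a few array operations.

```python
# A minimal sketch of quantifying a region of interest on a parametric map.
# The voxel values (invented standardized-uptake values) and the binary ROI
# mask stand in for data decoded from DICOM parametric-map and segmentation
# objects; nothing here is tied to a particular vendor or application.
import numpy as np

# Hypothetical 3D parametric map of standardized-uptake values (SUV).
suv_map = np.random.default_rng(0).uniform(0.5, 8.0, size=(64, 64, 32))

# Hypothetical binary segmentation of a lesion (True inside the region).
roi_mask = np.zeros_like(suv_map, dtype=bool)
roi_mask[20:30, 25:35, 10:15] = True

# Because the map preserves quantitative values, region statistics are trivial.
roi_values = suv_map[roi_mask]
print(f"SUVmax  = {roi_values.max():.2f}")
print(f"SUVmean = {roi_values.mean():.2f}")
print(f"ROI size (voxels) = {roi_values.size}")
```

None of this is possible if the only thing saved to the PACS is a color screenshot of the fused display.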
Clunie acknowledges that DICOM currently is limited to encoding integer values for each pixel or voxel. “Often, the quantitative values are floating-point values; for example, the dynamic range is from 0 to 20, as opposed to from 0 to 65,535,” Clunie notes. “The simple way to deal with that is to scale floating-point values to integers. Just as for CT (for which we often don’t encode Hounsfield units literally, but have, instead, a rescale slope and intercept), in PET, we encode floating-point standardized-uptake values as integer pixel values and we rescale them to the floating-point standardized-uptake values.”

Clunie describes how DICOM working groups are being encouraged to extend the standard by adding floating-point pixels, even if only for research purposes. “This is something researchers have long wanted—and modality vendors have resisted—but this will change,” he predicts.

The DICOM approach to parametric and label maps can leave fusion (or superimposition) to the application. “There’s no harm in sending secondary-capture images that a normal human can look at to the PACS as well, in case the application is not capable of fusion, but the pretty pictures are no substitute for genuine, quantitative, useful information,” Clunie says.

The Workhorses

Clunie details the value of DICOM’s intermediate work products for encoding information that passes between analytic tools: spatial registration, fiducials, and real-world values (or physical units). DICOM has registration objects for rigid and deformable registration. Rigid registration uses a matrix to map the relationship between two sets of images that are referenced explicitly or two frames of reference that are labeled by unique identifiers. The DICOM deformable registration object includes matrices, as well as deformation fields.

“There is also a specific DICOM fiducial object to record individual locations used in registration,” Clunie adds. “These three objects can be used to save manual or automated results of registration for further work because many quantitative-imaging applications involve registration or segmentation of multiple spatially related datasets.” For instance, radiologists who need synchronized scrolling, synchronized cursors, or fusion across 3D datasets could reuse registration objects that were saved as DICOM objects, Clunie suggests.

Real-world value maps are a concept introduced into DICOM because, sometimes, a voxel means more than one thing, Clunie explains. “You can interpret a voxel as activity concentration, or you can interpret it as standardized-uptake value body weight, standardized-uptake value lean body mass, or some other parameter,” he says. “The same voxel needs to feed to several different pipelines to compute a different value: We call these real-world value maps.” Real-world value maps might be encoded as linear functions, as lookup tables for things that are not linear, or as some other equation entirely. “It’s a point operation,” Clunie says. “Basically, every voxel goes to a pipeline that takes the range of stored values and maps them to real-world values. This is independent of the grayscale pipeline, which is only intended to get to what we see on the screen.”

Clunie’s DICOM case is made: “There are many different objects that are standardized in DICOM, specifically for the purpose of quantitative imaging. The analysis workstation can produce those, and they can be stored and distributed through the PACS, then fed back for comparison next time around,” he says (Figure 2).
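The point operation Clunie describes is simple enough to show directly. The sketch below maps stored integer voxel values to floating-point real-world values (here, standardized-uptake values) through a linear slope and intercept; the particular numbers are made up for illustration, and in practice the slope and intercept would come from the image’s rescale attributes or from a real-world value mapping carried in the DICOM header.

```python
# A minimal sketch of the linear stored-value-to-real-world mapping Clunie
# describes: stored integers become floating-point values (for example, SUV)
# via a slope and intercept. The numbers here are invented; in practice they
# would be read from RescaleSlope/RescaleIntercept or a real-world value
# mapping in the DICOM header, not hard-coded.
import numpy as np

stored = np.array([[0, 1310, 2620], [6553, 13107, 65535]], dtype=np.uint16)

slope = 20.0 / 65535.0   # map the 0..65,535 integer range onto SUV 0..20
intercept = 0.0

suv = stored.astype(np.float64) * slope + intercept
print(np.round(suv, 2))

# The same stored voxels could feed a different pipeline (a different
# real-world value map) to yield, say, SUV lean body mass instead.
```

The grayscale display pipeline is untouched by this calculation, which is exactly the separation Clunie emphasizes.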
Codes are also critical for quantitation, to define the specific aspects of the study, including entities (lesions, tumors, and tissue types); location (anatomic sites); characteristics (edges and enhancements); measurements (volume, sum of areas, and mean); and units (Hounsfield units and millimeters). Many code sources exist already, including the Systematized Nomenclature of Medicine Clinical Terms, or SNOMED; Logical Observation Identifiers Names and Codes, or LOINC; RadLex; the DICOM Coding Scheme, or DCM; the National Cancer Institute Thesaurus, or NCI; and the Unified Code for Units of Measure, or UCUM—as well as ACR BI-RADS®, ICD-9, and ICD-10. More are being defined every day, Clunie says.

In practice, these standardized codes are widely implemented in radiology only where use is critical and reimbursable (for example, structured reports in echocardiography and obstetric ultrasound, as well as radiotherapy structure sets in radiotherapy planning and quality control). The meaningful-use program and its strong emphasis on codes are likely to benefit radiology in this regard, but Clunie also says that there is a role for individual institutions to play by standardizing internal procedure codes.

Adoption Hurdles

“We need more widespread tools that implement DICOM to make it easier for researchers and vendors to use,” Clunie says. “We have support for the basic DICOM objects in many libraries and tool kits, but we need more convenient application programming interfaces for high-level abstractions—which make it easier to encode quantitative information without having to get into the down-and-dirty details of DICOM, which are tedious. There is also a steep learning curve.”

Clunie adds, “Unfortunately, a major part of the problem is that many product managers do not see the value in sharing or saving the results in the first place. They are happy to consume standard image input, but believe that customers are satisfied with screenshots as output, and that such pretty pictures are sufficient to convey the clinical information necessary. They believe it is sufficient to save and restore the state only locally and internally.” He continues, “If you have a sophisticated workstation or server quantitative application, however, when you want to migrate to a new vendor or a new software version, you are screwed. If you want to be able to save and restore the state in a long-term way, you need to do it in a standard way.”

Clunie also calls for greater support from RIS, PACS, and modality vendors, as well as from third-party viewers, many of which still use proprietary annotation formats (including the popular open-source OsiriX). Clunie says that you might not like DICOM (and many researchers hate it), but there is a long history of vendor and modality support for it.

In promoting the standard, Clunie launches his own offensive against what he calls antistandards. “There is a mistaken perception, in the medical-imaging community, that DICOM is only for images,” he says. “DICOM is not only for images: DICOM is for images and image-related information,” and myriad standard objects are available for use. Many vendors store their systems’ output in completely proprietary formats specific to the vendor, model, and version. Private data elements in standard DICOM objects and private DICOM service-object pair classes are better than completely proprietary formats, but they are really just proprietary formats layered on top of DICOM, Clunie says.
The homegrown file formats of academic departments are just as bad, he adds. “For some reason or another, they decide that DICOM is inadequate,” Clunie explains, “and research-funding leadership, unfortunately, has been snowed here.” On his list of antistandards, Clunie includes, as a case in point, the Annotation and Image Markup, or AIM, project of the In Vivo Imaging Workspace of the National Cancer Institute’s now-defunct Cancer Biomedical Informatics Grid, or caBIG.

In Clunie’s opinion, most gaps exist in implementation and deployment, not as holes in DICOM, and he believes that there is no place for proprietary file formats or generic consumer file formats. He says, “They possess no patient and workflow metadata, no support in the PACS, and little or no support in viewers and workstations (other than the ones you just built yourself); anybody can claim something is a standard, but that doesn’t make it so.”

Acknowledging a chicken-and-egg problem, Clunie says that new DICOM objects might be introduced in one modality or workstation but won’t be displayed on another. “The PACS will save it for you—and next week, or next year, or next decade, someone will use it—but in the meantime, you have a patient-oriented place to store it,” he says.

In conclusion, Clunie calls for order: “There is no place for nonstandard or inappropriate formats, not just for input, but for output as well,” he says. The best standard for the job of quantitative imaging, he believes, is clear: DICOM.

Cheryl Proval is editor of Radiology Business Journal.

Cheryl Proval, Vice President, Executive Editor, Radiology Business

Cheryl began her career in journalism when Wite-Out was a relatively new technology. During the past 16 years, she has covered radiology and followed developments in healthcare policy. She holds a BA in History from the University of Delaware and likes nothing better than a good story, well told.
