The Top Five Medical Imaging IT Projects of 2010

When Radiology Business Journal was founded four years ago, it was with the understanding that IT represented not just the platform for image interpretation, exchange, and archiving, but also a broad foundation for practice operations, communications, and financial analysis. Earlier this year, with that in mind, we approached the Society for Imaging Informatics in Medicine (SIIM) to collaborate on a competition recognizing the remarkable innovation we were witnessing in medical imaging, across practice settings and domains.

We believed that the combined resources of our two organizations would generate more interest and lend greater legitimacy to a competition to identify the Top Five Medical Imaging IT Projects of 2010. Our sincere thanks go to Anna Marie Mason, SIIM’s executive director; to the SIIM board, for approving the idea and agreeing to judge and publicize the contest to SIIM members; and (last, but not least) to our panel of six judges.

We received 28 entries in the categories of clinical, interoperability, communications, business-intelligence, and security projects. All were interesting, many were excellent, and five received the highest marks from the judges. The criteria were innovation/ingenuity; meeting a critical, urgent, or unmet need; improving quality; validating/evaluating a tool; and having the potential to be generalized to other institutions.

Here are the winning entries (edited for length and style), along with some insight into the work of the innovators who submitted them. We thank all of you who took the time to enter, and we invite all imaging informaticists to look for our second annual contest early in 2012.


Peer-Review System With Brains
Yun (Rob) Sheu, MD, is a radiology resident at the University of Pittsburgh Medical Center in Pennsylvania.

Many radiologists regard peer review as inefficient and lacking in objective results, despite its educational value and its contribution to patient care, Sheu notes in his winning entry. He was convinced, however, that there was a better way. “Applying a mathematical cost model to guide the selection of radiology exams for peer review is feasible,” he says. Barton Branstetter, MD, was the principal investigator, and three mathematicians collaborated with the radiologists: Elie Feder, PhD; Igor Balsim, PhD; and Victor Levin, PhD.

While peer review is an essential component of radiologists’ practice, the increasing constraints on a radiologist’s time require this process to be as efficient and effective as possible. “Our eventual goal is to streamline the process more and build the model into one of the new electronic peer-review systems, ACR® RADPEER™ being an example,” Sheu continues. “To our knowledge, this has not been done. Currently, the informatics department at our institution is working on an in-house peer-review system that is incorporated in the PACS, and we hope, eventually, to use our model in the system.”

Winning Entry 1

Problem: Although advances have been made in incorporating peer review into the daily workflow, cases to be reviewed are still selected at random, without consideration of prior errors or the consequences of those errors.

Solution: Starting in 2009, and using data collected over a period of several years, we created a computer model that calculates a cost for each of 12 categories, which can then be used to target areas of weakness; cost is defined as the liability addressed per unit of peer-review time. Given a unit of peer-review time, the cost function represents the expected cost (both financial and medical) to the hospital and the patient if the error is not corrected.

Four attributes of past errors were used to calculate cost: morbidity, financial expenditure, probability of occurrence (based on past data), and the time needed for peer review of the study in question. Our model determined, for each radiologist, the modality and body part carrying the greatest potential for future liability, based on past errors. This information would then allow a peer-review committee to select review cases for a given radiologist deliberately, making the review more efficient and maximizing the statistical likelihood of discovering a true area of weakness.
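The entry does not give the model’s actual functional form, so the following is only an illustrative sketch, in Python, of how a cost score built from those four attributes might be computed and used to rank categories: probability of error multiplied by the combined medical and financial impact, divided by review time. The names (ErrorCategory, pick_targets) and the additive combination of morbidity and financial exposure are assumptions made for illustration, not the authors’ published model.

```python
# Illustrative sketch only; the real model's form and weights are not
# published here. All names and the combination rule are hypothetical.
from dataclasses import dataclass

@dataclass
class ErrorCategory:
    """One modality/body-part error category for a given radiologist."""
    modality: str          # e.g., "CT"
    body_part: str         # e.g., "chest"
    morbidity: float       # expected medical harm if an error goes uncorrected
    financial: float       # expected financial exposure from such an error
    probability: float     # likelihood of an error, estimated from past data
    review_minutes: float  # time needed to peer-review one such study

def cost(cat: ErrorCategory) -> float:
    """Liability addressed per unit of peer-review time."""
    expected_liability = cat.probability * (cat.morbidity + cat.financial)
    return expected_liability / cat.review_minutes

def pick_targets(categories: list, n: int = 3) -> list:
    """Return the n categories whose review addresses the most liability per minute."""
    return sorted(categories, key=cost, reverse=True)[:n]
```

A peer-review committee could then pass a radiologist’s historical error categories to a function like pick_targets and draw that radiologist’s review cases from the categories it returns, rather than sampling at random.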

A large sample of more than 64,000 significant discordances—based on overnight preliminary reports—over a five-year period was compiled. Discordances were adjudicated by specialty-trained radiologists. The preliminary and final diagnoses