Peer Review as Your PQI Project: Bettina Siewert, MD


On the short list of reasons to use peer review as your practice quality improvement (PQI) project: You probably have to do three projects in the next decade. You will save time (particularly if peer review is a group project). You are likely to be surprised at the rate of interobserver discrepancy—and the rate at which you disagree with yourself, six months later.

As part of their maintenance of certification (MOC), radiologists whose certificates expire in 2018 or thereafter will be required to complete a PQI project every three years, according to Bettina Siewert, MD, who presented “Peer Review As Your PQI Project” during a special session on peer review on December 2, 2013, at the annual RSNA meeting in Chicago, Illinois. If your practice is already doing peer review, she says, your project is 40% complete.

Siewert, section chief of abdominal imaging and vice chair of quality, safety, and performance improvement at Beth Israel Deaconess Medical Center (BIDMC), Boston, Massachusetts, shares her own performance history—from a peer-review database—to illustrate how a PQI project works. PQI follows a familiar quality-improvement cycle: plan, do, study, and act. Start with planning: collect data, analyze them, and develop an improvement plan. Implement practice change, and monitor the outcome; if you’re unhappy with that outcome, commence another cycle.

Siewert urges radiologists, before starting a project, to attest to their participation using the American Board of Radiology (ABR) website (www.theabr.org). “It was recently redesigned, is easy to use, and will guide you through your project,” she says. “The plan–do–study–act cycle is on the website, and it will guide you with questions and prompt you to do the next step.”

In choosing a project, the most important criteria are that it integrate easily into your practice, that it address a perceived gap in practice (potential for improvement), and that it yield measurable results. If all practice members participate, there is a greater opportunity to have an impact on overall performance and to concentrate administrative duties into fewer hands. “It’s even better if it is suitable for submission to a national registry,” Siewert notes.

Baseline performance review: In the BIDMC peer-review database, Siewert saw the cases that she submitted, how she ranked her colleagues’ performance, her reviewed cases, and her performance (individually and in comparison with that of others in the department). In addition to cases with a score of 3 or 4 (indicating the two highest disagreement levels), she paid attention to cases with a score of 2 (mild disagreement). “I am interested in what kinds of subtle findings I missed,” she explains.

Gap analysis: She looked for a common theme in the database (by organ, modality, pathology, or type of error) and then performed a root-cause analysis to classify each error as perceptual, interpretive, communication-related, or procedural. “The most important thing is to identify learning opportunities—and how you can avoid these issues in the future,” she emphasizes. Siewert’s misses were primarily found on liver CT and ultrasound images (reflecting her practice), involved cystic lesions, and were perceptual in nature.
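For readers who want to run this kind of gap analysis on their own peer-review exports, the theme-finding step can be sketched in a few lines of Python. The records, field names, and scores below are illustrative assumptions, not the BIDMC database schema:

```python
from collections import Counter

# Hypothetical discrepancy records: score (2 = mild disagreement,
# 3-4 = the two highest disagreement levels), plus modality, organ,
# and error category for each reviewed case.
cases = [
    {"score": 3, "modality": "CT", "organ": "liver", "error": "perceptual"},
    {"score": 2, "modality": "US", "organ": "liver", "error": "perceptual"},
    {"score": 2, "modality": "CT", "organ": "liver", "error": "communication"},
    {"score": 3, "modality": "US", "organ": "liver", "error": "perceptual"},
]

def gap_analysis(cases, min_score=2):
    """Tally cases at or above min_score along each dimension."""
    flagged = [c for c in cases if c["score"] >= min_score]
    return {
        dim: Counter(c[dim] for c in flagged)
        for dim in ("modality", "organ", "error")
    }

summary = gap_analysis(cases)
# The most frequent value in each dimension suggests the practice gap.
for dim, counts in summary.items():
    theme, n = counts.most_common(1)[0]
    print(f"{dim}: {theme} ({n} of {len(cases)} cases)")
```

On this toy data the dominant themes are liver cases and perceptual errors, mirroring the pattern Siewert describes; the point of the sketch is simply that counting flagged cases along each dimension surfaces the common theme to target.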

Root causes: In her liver cases, Siewert saw an opportunity to sharpen her interpretive skills. “The perceptual misses were not liver masses, but diffuse liver disease, where I had some difficulty identifying (on ultrasound) signs of cirrhosis and fatty liver,” she says.

In the communications category, Siewert’s colleagues disagreed with her recommendation for short-term follow-up (rather than an MRI exam) for a lesion seen on CT. In two ultrasound cases (involving very fatty and very heterogeneous livers), her colleagues thought that she should have recommended another modality because focal liver lesions could not be excluded.

Develop a practice-improvement plan: Options include a literature review, a self-assessment module, a CME course, or a proctored case-interpretation session. Many medical societies offer these services.

Measure again: After the improvement plan has been implemented, measure performance again, and (if necessary) complete another cycle. Siewert’s peer-review database showed that another cycle was unnecessary. “This project was done in early 2012; fortunately, there’s nothing thereafter, in regard to the liver,” she says.

Measuring improvement can be tricky, since colleagues could be looking at different case