Peer Review | Beyond the Blame Game

A veteran of the peer-review wars shares a novel approach to instituting a successful program

David B. Larson, MD, MBA, used one measure of hubris and one of humiliation to make radiologists understand his position on peer review during his segment of a three-part special session, “Getting Radiologist Peer Review Right,” presented on December 2, 2013, at the annual meeting of the RSNA in Chicago, Illinois. After sharing several diagnostic misses turned up by the peer-review program in place at a network of hospitals covered by radiologists at Cincinnati Children’s Hospital Medical Center (CCHMC) in Ohio, where Larson was the Janet L. Strife chair for quality and safety in radiology, he engaged the audience in several perception exercises, at which almost everyone failed. The cause was a cognitive limitation that we all possess: inattentional blindness.

Preview
  • Peer review is a much-used (and much-maligned) tool in radiology’s quality-assurance armamentarium.
  • The problem with peer review, as an end unto itself, is that it can engender an atmosphere of blame and retribution.
  • Peer review can be an important tool in building a learning environment for practicing professionals, employing strategies that radiologists experienced in training.
  • One practice is on its way to achieving that goal, but only after ditching scores and performance tracking at the end of its first two years.

“This is a nice parlor trick, for a lot of people, but for radiologists, this is the real deal,” Larson says. “This is what causes us to miss findings. We see the finding, but we don’t attend to it, and so … we miss it.” 

The key questions are these, Larson says: What should be done about these misses? How can we make sure that they don’t happen again? 

Peer review, around for more than a decade, is a much-used (but heavily disparaged) component of clinical quality assurance. Larson, currently the associate chair for performance improvement in the department of radiology at Stanford University Medical Center in California, describes how one common system works: a prior case is reviewed, and the reviewing radiologist either agrees with the report, giving it a score of 1, or disagrees with it, giving it a higher score commensurate with the degree of disagreement.

“Then you take all of the scores, add them up, and find out which radiologists need remediation—or possibly, removal,” Larson explains. “Somehow, you use those scores and make improvements.”
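To make the mechanics of that conventional model concrete, here is a minimal sketch (in Python) of the score-and-tally approach Larson describes; the score range, the remediation threshold, and the sample data are illustrative assumptions, not details of any particular product or of CCHMC’s program.

# Illustrative sketch of the conventional scoring model described above:
# a reviewer scores each case from 1 (agree) to a higher number reflecting
# the degree of disagreement, and the scores are tallied per radiologist.
# The threshold and data below are hypothetical, chosen only for illustration.
from collections import defaultdict

reviews = [
    ("radiologist_a", 1), ("radiologist_a", 1), ("radiologist_a", 3),
    ("radiologist_b", 1), ("radiologist_b", 2), ("radiologist_b", 4),
]

disagreement_totals = defaultdict(int)
for radiologist, score in reviews:
    disagreement_totals[radiologist] += score - 1  # a score of 1 means full agreement

REMEDIATION_THRESHOLD = 3  # arbitrary cutoff, purely for illustration
flagged = [name for name, total in disagreement_totals.items()
           if total >= REMEDIATION_THRESHOLD]
print(flagged)  # the radiologists such a system would single out

As the rest of the article makes clear, this tally is exactly the part of the process that Larson’s group eventually abandoned.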

A Vicious Cycle

Ironically, in Larson’s opinion, another perceptual issue, hindsight bias, inhibits the effectiveness of peer review. Knowing the outcome from the start colors the reviewer’s judgment of the appropriateness of what the interpreting radiologist did, he says. “It instills a sense of righteous indignation,” he explains. “It creates an adversarial relationship, with the goal of proving guilt or innocence.”

Hindsight bias and other perceptual issues contribute to what Larson refers to as the vicious cycle of peer review. The leader asks radiologists to do peer review; the radiologists don’t cooperate because they consider it both punitive and a waste of time; the leader makes it nonpunitive; and then the leader adds adjudication, thereby canceling the nonpunitive aspect of the program.

The radiologists still don’t cooperate because they think it’s a waste of time (and they don’t trust it). The leader adds incentives—both positive and negative—so the radiologists halfheartedly participate. Disappointed, the leader makes participation mandatory, and the radiologists do the bare minimum, hide their mistakes, game the system, and protect their friends. The leader gets frustrated with the radiologists, and the radiologists resent the leader.

“At the end of the day, you have a loss of collaboration and the degeneration of feedback, learning, and trust—and you’ve undermined the whole point of the exercise, from the beginning,” Larson says. “If you gain any minimal improvement, which is usually very marginal, it comes at a great cost to your organization. If we are going to design something that is going to prevent errors from happening in the future, this is not the model we want to have.”

Larson urges radiologists to consider the emotional reactions of colleagues at peer-review conferences when forms are filled out and received: those who receive the forms express disbelief, defensiveness, guilt, and shame, while those who fill them out feel incredulity and assign blame (or become protective, if the transgressor is a friend).

A Positive Model

Following two years of experimentation, CCHMC took a different approach, Larson reports. “Our mantra is that peer review is for learning and improvement only,” he says. “That’s the reason it exists. This is how it works.”

First, reviewing radiologists are encouraged to speak with the interpreting radiologist when they come across a discrepancy. “Have that conversation,” Larson says. 

Second, a strict policy of confidentiality is maintained. Cases are submitted through a dictation system or by email to the peer-review leader, who then reviews all cases, removing any information that would identify the interpreting radiologist. If the interpreting radiologist has not already been notified, the peer-review leader sends a confidential email to the radiologist, looping back to make the radiologist aware of the discrepancy. 
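Purely to illustrate the confidentiality workflow just described, and not any actual software used at CCHMC, the sketch below (Python) models the two steps: the peer-review leader loops back to the interpreting radiologist privately if that has not already happened, then strips identifying information before the case goes any further. All names, fields, and the notification mechanism are hypothetical.

# Hypothetical sketch of the confidential routing described above; this is
# not CCHMC's system. Field names and the notify step are assumptions.
from dataclasses import dataclass, replace
from typing import Callable, Optional

@dataclass(frozen=True)
class SubmittedCase:
    case_id: str
    interpreting_radiologist: Optional[str]
    discrepancy_note: str
    radiologist_already_notified: bool = False

def process_submission(case: SubmittedCase,
                       notify: Callable[[str, str], None]) -> SubmittedCase:
    """Loop back to the interpreting radiologist, then deidentify the case."""
    if case.interpreting_radiologist and not case.radiologist_already_notified:
        notify(case.interpreting_radiologist,
               f"Confidential: a discrepancy was submitted on case {case.case_id}.")
    # Remove anything that would identify the interpreting radiologist before
    # the case is reviewed by the committee or shown in conference.
    return replace(case,
                   interpreting_radiologist=None,
                   radiologist_already_notified=True)

# Example: a lambda stands in for a confidential email to the radiologist.
deidentified = process_submission(
    SubmittedCase("case-001", "radiologist_a", "possible missed fracture"),
    notify=lambda recipient, message: print(recipient, message),
)
print(deidentified)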

“The last place you should find out about your case is at a conference,” Larson says. “We would do these conferences initially (when the cases were not deidentified), and we’d see half the people writing down the accession number. Of those, half were writing it down to see who the idiot was who missed the case, and the other half were writing it down to see if they were the idiot who missed the case.” 

Third, the department works to fix system-related problems behind the scenes and individual performance issues through education. Deidentified cases of interest are examined by a peer-review committee, and selected cases are reviewed bimonthly, in conference. 

“We assume that every incident has components of a system problem and a performance problem,” Larson explains. “We try to fix the system, and we try to give feedback, coaching, and education to improve performance.”

Conference Mindset: Gain Versus Blame

Learning and improvement are the sole objectives of the peer-review program at work in the radiology department of Cincinnati Children’s Hospital Medical Center in Ohio, according to David Larson, MD, MBA, who was the Janet L. Strife chair for quality and safety in radiology there. Too often, he says, peer-review conferences are poisoned by these common thoughts:

  • how could someone possibly miss this;
  • this is a tough case;
  • this doesn’t deserve to be in peer review;
  • this made no clinical difference, so why is it in peer review; and
  • why are they wasting my time with a simple case?

Instead, the mindset to foster—and the one that best induces learning and improvement—is reflected by these questions:

  • what might cause me to miss this, and how can I prevent that;
  • what is the take-home message;
  • to guard against this error, what techniques have I learned that I can share; and
  • what system changes can we make to prevent this error?

Case Criteria

Faculty members and incoming fellows are reminded annually that peer review is only for learning and improvement, not for reprobation or determination of competency. Cases are chosen for review on the basis of a single criterion: whether the case has educational value.

Larson offers this example: A radiologist made a not-readily-perceived finding of a supracondylar fracture; in actuality, it was a lateral condylar fracture, which has different implications for treatment. “If one person missed this, then chances are there are others for whom this needs to be cleared up,” he says, “so this is one we showed in conference.” Other categories of cases selected for peer review include watch-out cases, known recurring issues, search-pattern reinforcement, and admittedly minor issues.

Watch-out cases: In Larson’s practice, peer review focuses on the fractures most commonly missed in pediatrics. “These constitute the bulk of our review cases, and we try to get through about 20 to 30 cases in an hour,” Larson says. “We just show them; there is not a lot to discuss. We talk about the findings and the strategies people use to avoid missing those findings; then, we move on.”

Known recurring issues: An example, in Larson’s practice, is a templated statement that the bones are normal being carried into the report even when the bones are not normal. This is shown in conference as a reminder that radiologists can forget to attend to check marks on templates.

Search-pattern reinforcement: Larson shares a postsurgical case in which the interpreting radiologist cited subcutaneous gas, but left the pneumothorax unnoted. “You could say it didn’t have any implications; it’s not that big a deal—and frankly, it didn’t make an impact,” Larson says, “but if a person is missing it during a postoperative exam (and that is the reason we are doing the exam), then there may be a weakness in this individual’s search pattern for pneumothorax, which could reflect a weakness in all of our search patterns.”

Admittedly minor issues: This is the hardest category of all, Larson says, but it is important, nonetheless. His example is a scoliosis measurement that extended from superior to superior end plates. “By convention, when we measure scoliosis, we are supposed to measure from a superior end plate to the inferior end plate,” he says. “I don’t think we quite reached this level. People roll their eyes and say that it’s not important. It takes a mature organization to be able even to welcome feedback on the minor issues.” 

By emphasizing learning and improvement, and by minimizing negativity (see box), radiology departments and practices can lift the discussion at peer-review conferences above blame and keep it focused on what matters: learning and improvement.

Getting Rid of Grades

Another step that CCHMC took to engage the radiology department in the program was the elimination of a component of peer review that some might consider essential: assigning scores to reviewed cases. Instead, reviewers limit their assessment of a case to agreement or disagreement. “After two years of scoring cases, we found that scores had no value,” Larson reports. He and his colleagues discovered that most peer-review conferences had become focused on what score a particular case deserved.

The department stopped tracking performance, too, after discovering that outliers generally missed no more than two cases per year. “You get two misses, and you require remediation? It did not represent overall performance,” Larson explains. “We do not record the identity of an interpreting radiologist, we do not track performance, and we don’t compare radiologist performance. We saw that those had no value.”

Upon hearing this, some radiologists have expressed a desire to institute such a program, but have run into objections from their hospitals or departments. “We don’t report individual performance to our hospital’s provider-evaluation committee,” Larson explains. “The privileging requirement is a participation requirement, not a reporting requirement: Radiologists must attend five of the six annual peer-review conferences.”

When administrators want to know how the department identifies its so-called bad radiologists, the department explains that it uses other means of evaluation, such as 360° feedback (multisource assessment). “If you really want to know how good a radiologist is,” Larson says, “ask your trainees; they are probably the most accurate source of information.”

He also suggests asking support staff; quantifying good citizenship (including participating in programs and conferences, as well as making other contributions to the department) and educational activities; and paying close attention to behavioral problems. “The literature shows an overlap between behavioral problems and interpretive problems,” he explains. “If you address the behavioral problems, which are easier to address, you have addressed a sizable proportion of your performance problems.”

Larson acknowledges that this approach to assessment is not perfect, but neither is peer review. “If anything, this approach is probably more accurate,” he notes, “and this is the type of thing that enhances—rather than detracts from—the learning environment, which is the whole goal of our system.”

Going for the Goals

During training, radiologists become well versed in the fundamentals of skill development. Every skill, Larson says, is developed and maintained through deliberate practice. “This requires mental engagement; goal-oriented practice; feedback on mistakes; a focus on techniques; and repetition, repetition, and repetition,” he emphasizes. “It’s how our skills are gained.”

As practicing physicians, however, radiologists (and others) tend to abandon that model. “We know that skills deteriorate over time, so we need to have strategies to overcome that,” he says. To maintain skills, it is imperative to create a learning environment that features repetitive practice, frequent feedback, coaching, and mentoring, as well as one in which it is safe to make mistakes.

In conclusion, Larson believes, peer review at its best is just the beginning of performance improvement. “It’s really all about lifelong learning and lifelong improvement,” he says. Larson urges radiology departments and practices to shun approaches that undermine the learning environment and to embrace those that foster it.

“What is the purpose of peer review?” he asks. “From our perspective, the purpose of peer review is to improve care. Let’s stop trying to identify the bad radiologists; instead, let’s create an environment that is conducive to lifelong learning.” 

Cheryl Proval,

Vice President, Executive Editor, Radiology Business

Cheryl began her career in journalism when Wite-Out was a relatively new technology. During the past 16 years, she has covered radiology and followed developments in healthcare policy. She holds a BA in History from the University of Delaware and likes nothing better than a good story, well told.
