A veteran of the peer-review wars shares a novel approach to instituting a successful program
David B. Larson, MD, MBA, used equal measures of hubris and humiliation to make radiologists understand his position on peer review during his segment of a three-part special session, “Getting Radiologist Peer Review Right,” presented on December 2, 2013, at the annual meeting of the RSNA in Chicago, Illinois. After sharing several diagnostic misses turned up by the peer-review program in place at a network of hospitals covered by radiologists at Cincinnati Children’s Hospital Medical Center (CCHMC) in Ohio, where Larson was the Janet L. Strife chair for quality and safety in radiology, he engaged the audience in several perception exercises, at which almost everyone failed. The cause was a cognitive limitation that we all possess: inattentional blindness.
“This is a nice parlor trick, for a lot of people, but for radiologists, this is the real deal,” Larson says. “This is what causes us to miss findings. We see the finding, but we don’t attend to it, and so … we miss it.”
The key questions are these, Larson says: What should be done about these misses? How can we make sure that they don’t happen again?
Peer review, a fixture for more than a decade, is a much-used but heavily disparaged component of clinical quality assurance. Larson, currently the associate chair for performance improvement in the department of radiology at Stanford University Medical Center in California, describes how one common system works: A prior case is reviewed, and the radiologist either agrees with the report, giving it a score of 1, or disagrees with it, giving it a higher score (commensurate with the degree of disagreement).
“Then you take all of the scores, add them up, and find out which radiologists need remediation—or possibly, removal,” Larson explains. “Somehow, you use those scores and make improvements.”
A Vicious Cycle
Ironically, in Larson’s opinion, another perceptual issue—hindsight bias—inhibits the effectiveness of peer review. Knowing the outcome at the beginning affects the radiologist’s judgment of the appropriateness of what happened, he says. “It instills a sense of righteous indignation,” he explains. “It creates an adversarial relationship, with the goal of proving guilt or innocence.”
Hindsight bias and other perceptual issues contribute to what Larson refers to as the vicious cycle of peer review. The leader asks radiologists to do peer review; the radiologists don’t cooperate because they think it’s both punitive and a waste of time; the leader makes it nonpunitive; and then the leader adds adjudication, thereby canceling the nonpunitive aspect of the program.
The radiologists still don’t cooperate because they think it’s a waste of time (and they don’t trust it). The leader adds incentives—both positive and negative—so the radiologists halfheartedly participate. Disappointed, the leader makes participation mandatory, and the radiologists do the bare minimum, hide their mistakes, game the system, and protect their friends. The leader gets frustrated with the radiologists, and the radiologists resent the leader.
“At the end of the day, you have a loss of collaboration and the degeneration of feedback, learning, and trust—and you’ve undermined the whole point of the exercise, from the beginning,” Larson says. “If you gain any minimal improvement, which is usually very marginal, it comes at a great cost to your organization. If we are going to design something that is going to prevent errors from happening in the future, this is not the model we want to have.”
Larson urges radiologists to observe the emotional responses of colleagues at peer-review conferences, when forms are filled out and received: Those who receive the forms express disbelief, defensiveness, guilt, and shame. Those filling the forms out feel incredulity and assign blame (or become protective, if the transgressor is a friend).