4 guidelines for transitioning from peer review to peer learning

More and more radiology departments and imaging facilities are moving from peer review to peer learning to educate radiologists as part of a collaborative, continuous learning model. Peer learning is seen as a more constructive, supportive way of examining errors than peer review, which critics suggest can leave colleagues feeling ashamed and unsupported.

A new article published in the American Journal of Roentgenology highlights best practices for how organizations can successfully make the transition from peer review to peer learning.

  1. Discuss the transition with colleagues.

Practice leaders must take the initiative to discuss the transition with their colleagues, the authors explained. If communicated well, initial skepticism of the transition can turn into support.

  2. Proactively identify learning opportunities.

Randomly selected cases rarely illustrate impactful, meaningful errors. Leadership should be vigilant in identifying cases with serious errors, because these provide the most significant learning experiences.

“The time commitment to randomly auditing cases is much greater than actively pushing identified learning opportunities, and the yield of identifying meaningful cases is lower,” wrote lead author Lane Donnelly, MD, Texas Children’s Hospital in Houston, and colleagues. “Some of our institutions have already abandoned or are considering abandoning random auditing of imaging cases. Others believe that specific types of deficiencies, such as inappropriate follow-up recommendations and incorrect reporting structure, can be identified by means of random audit, and this provides a mechanism to better identify these types of learning opportunities.”

  3. Qualify the error rather than just giving it a number.

The British Radiological Society has eliminated the practice of scoring errors, believing it has no value. Scoring an error, rather than qualifying it or discussing it in a proactive manner, can hinder productivity and learning, according to Donnelly and colleagues. Oftentimes at peer review sessions, more discussion is devoted to what score a case should receive than to actually learning from the error.

“Many have found peer scoring to be a nonproductive aspect of traditional peer review because it tends to foster defensiveness, be extremely subjective and unreliable while giving a false impression of accuracy, and distract from the true objectives of individual and organizational improvement,” the authors wrote.

Instead, they suggested, cases could be classified as either a “great call” or a “learning opportunity” to facilitate learning. A great call is a case in which the radiologist made the correct finding and interpretation, but one that another radiologist might plausibly have missed. A learning opportunity is the classification used for errors.

  4. Organize a peer learning program and conduct effective conferences.

In a small practice, a single individual can be designated the “peer learning leader”; in larger practices and departments, a group of individuals, perhaps organized by specialty, can manage the peer learning efforts. Their primary objective is to identify suitable cases to review.

For smaller practices, a single peer learning conference will be adequate. Larger departments and organizations, though, will need separate conferences by specialty. Radiologists can also be encouraged to attend conferences in subspecialties outside their own as part of their learning.

The frequency of conferences depends on each practice, though the authors recommend holding a peer learning conference monthly or bimonthly.