Peer learning continues to improve radiologists’ educational opportunities, engagement

Transitioning from a score-based peer review program to one based on peer learning boosted engagement and satisfaction among radiologists at Kaiser Permanente in Denver, according to research published in the American Journal of Roentgenology.

“Although RADPEER score-based peer review has been used as the prototype program, it focuses on calculating provider error rates rather than fostering continuous quality improvement,” wrote first author Richard E. Sharpe, Jr., MD, MBA, of Kaiser Permanente, and colleagues. “Furthermore, no evidence exists that score-based peer review programs, such as RADPEER, result in meaningful practice management.”

Kaiser Permanente’s score-based peer review program transitioned to an “open, inclusive, education and improvement-oriented peer learning program.”

Specifically, the researchers implemented the Institute of Medicine (IOM) guiding principles for reducing diagnostic errors. These principles promote collaboration, teamwork, and culture and system improvements that allow radiologists to learn from mistakes and reduce diagnostic errors and near misses in clinical practice.

The researchers sought to compare the efficacy of their peer learning program with that of the score-based peer review system.

The 79 percent of radiologists who completed the preintervention survey felt that score-based peer review was ineffective and did not improve group or individual practices, but they expressed a strong willingness to participate in a peer learning program. After switching to peer learning, the researchers found:

  • The number of participating radiologists jumped from 5 to 35.
  • Monthly submissions increased from three discrepancies to 36 learning opportunities.
  • The average number of learning opportunities distributed to radiologists increased from 18 to 352.
  • Improvement projects performed during the study periods increased from five to 61.
  • The average monthly continuing medical education credits earned by radiologists increased from approximately eight to 51.

The authors noted that the 89 percent of radiologists who completed the postintervention survey agreed that peer learning was education-focused, improved patient care, effectively distributed learning points to the team, identified learning points from the practice, engaged radiologists in quality improvement, felt inclusive, facilitated discussion about improvement opportunities and felt non-punitive.

Overall, the radiologists rated peer learning as superior to score-based peer review because participants felt they had personally benefited, improved and learned from misreads or near misses. Additionally, they felt it allowed for constructive feedback without damaging personal relationships.

“Codifying the basic requirements of a peer learning program could be useful in allowing these programs to be considered by regulatory bodies, such as the American College of Radiology (ACR) and the Joint Commission, to be superior or at least equivalent to score-based peer review for accreditation purposes,” the researchers concluded.

""

As a senior news writer for TriMed, Subrata covers cardiology, clinical innovation and healthcare business. She has a master’s degree in communication management and 12 years of experience in journalism and public relations.
