RadPeer has allowed radiologists to review one another’s work since 2003. But nearly two decades on, some clinicians believe that errors are still underreported, fueled by legal and emotional fears.
The American College of Radiology committee overseeing this clinical tool recently polled more than 300 practices on its use. It found that, despite RadPeer facilitating 41 million reviews since its inception, some radiologists worry about being punished for their mistakes, according to a summary published in the Journal of the ACR.
“Although the RadPeer program was explicitly designed to improve quality and facilitate peer learning, respondents frequently cited concerns that errors were underreported because of fear of offending others and to avoid punitive measures,” Humaira Chaudhry, MD, a radiologist with Rutgers New Jersey Medical School, and colleagues wrote Jan. 25.
All told, 1,036 radiology practices, representing 18,000 clinicians, used RadPeer to collect peer review data as of January 2019. These ranged from hospital-based private practices to outpatient providers, academic departments and multispecialty groups. Most commonly, radiologists used RadPeer to satisfy accreditation requirements established by the ACR and other agencies (92%), and to bolster patient safety and care quality (76%), the survey found.
“Ease of use” was the most commonly cited positive feedback about the product, Chaudhry and colleagues noted, while lack of integration into PACS and reduced usability was the most common gripe. “Too time consuming” was the most frequent reason practices did not routinely use RadPeer (53%), with 18% saying it “did not produce any significant quality improvement.”
Typical radiologist concerns about RadPeer fell into two categories, the survey found: medicolegal and emotional. About 56% worried about the discoverability of peer review data in malpractice lawsuits, and 38% cited concerns about an adverse impact on physician credentialing. On the emotional side, 52% cited fears about being graded by their peers; 35% felt awkward evaluating senior colleagues; and 20% feared public humiliation.
Some 31% of survey respondents reported underreporting of significant discrepancies at their practices, versus 44% who did not. Reasons for this underreporting fell into the same emotional and medicolegal buckets, along with biases related to nonrandom selection, insufficient anonymity, potential retribution, and an unwillingness to take time to document discrepancies.
A December study in the American Journal of Roentgenology similarly cited concerns about RadPeer’s lack of anonymity and its influence on underreporting of errors. About 38% of respondents to this recent survey said they believed anonymity would improve reporting of disagreements, compared to 30% who felt the opposite, Chaudhry et al. noted.
Based on the results, the team believes RadPeer could benefit from offering more of a “peer learning” environment, where “attention is drawn away from measuring individual performance” and turned toward group advancement.
“The radiology community has clearly integrated peer review into daily practice; however, concerns regarding workflow interruption, medicolegal implications, and emotional stress persist,” the team concluded. “There is acknowledgment of quality improvement benefits of peer review yet a lack of confidence in its ability to improve practice patterns. Creating a nonpunitive peer learning environment may help ease fears while effectuating true quality improvement.”