Tracking Outcomes, Protecting Patients: Structured Reporting With Automated Notification

Radiologists armed with robust reporting, notification, and tracking capabilities can prevent many patients from falling through the cracks in (and between) health systems. At ACR 2016—The Crossroads of Radiology®, held in Washington, DC, a three-topic session, “Actionable Findings and Communication: Challenges and Opportunities,” was presented on May 17, 2016. The presenters covered advances in radiology informatics at Brigham and Women’s Hospital, Boston, and Penn Medicine, Philadelphia, that could help ensure that patients with unexpected imaging findings get necessary follow-up testing.

Katherine P. Andriole, PhD, presented “An Integrated Electronic Tool for Communication of Critical Test Results and Tracking of Follow-up Recommendations.” Andriole is associate professor of radiology, director of imaging informatics, and director of the fellowship programs at the Center for Evidence-based Imaging, Brigham and Women’s Hospital and Harvard Medical School, Boston.

She said, “The Joint Commission has made improving the communication of critical results of diagnostic-imaging procedures among caregivers a national patient-safety goal. Brigham and Women’s Hospital established an enterprise-wide policy for communication of critical test results.” It uses three color-coded levels to indicate the urgency of test results: red for immediately life-threatening problems, orange for results likely to involve mortality or significant morbidity if not treated urgently, and yellow for findings that are not immediately life threatening.

“We wanted to communicate not only critical findings, but also new or unexpected findings, such as the lung nodule found on abdominal CT,” she explained. Red and orange alerts require interruptive communication (conducted by telephone, by pager, or face to face) between physicians or other licensed caregivers—not administrative staff. Yellow alerts, however, can be delivered using asynchronous methods (such as email).
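As a rough illustration of the policy Andriole described, the three alert levels and their permitted communication channels might be represented as follows. This is a minimal sketch; the class, value, and channel names are assumptions, not Brigham and Women’s actual implementation.

```python
from enum import Enum

# Hypothetical representation of the three-level policy; names and values are
# illustrative, not the hospital's actual schema.
class AlertLevel(Enum):
    RED = "immediately life-threatening"
    ORANGE = "significant morbidity or mortality if not treated urgently"
    YELLOW = "not immediately life-threatening (includes new or unexpected findings)"

# Red and orange alerts require interruptive, person-to-person communication
# between licensed caregivers; yellow alerts may be delivered asynchronously.
REQUIRES_INTERRUPTIVE_COMMUNICATION = {
    AlertLevel.RED: True,
    AlertLevel.ORANGE: True,
    AlertLevel.YELLOW: False,
}

def allowed_channels(level: AlertLevel) -> list[str]:
    """Return communication channels consistent with the policy described above."""
    if REQUIRES_INTERRUPTIVE_COMMUNICATION[level]:
        return ["telephone", "pager", "face to face"]
    return ["email"]

for level in AlertLevel:
    print(level.name, "->", allowed_channels(level))
```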

“We felt that an automated electronic communication system could facilitate communication of test results,” Andriole said, “so we developed Alert Notification of Critical Results (ANCR).” This web-based application conveys critical or unexpected exam results, whether urgent or semiurgent, to ordering providers.(1-3) ANCR is integrated into the electronic health record.

This is a closed-loop system in that the ordering provider must acknowledge receiving each alert. ANCR alerts can be created for follow-up imaging recommendations when appropriate (and can be combined with red, orange, and yellow alerts as well). Because ordering providers might have more information about the patient than the radiologist does, they are allowed to indicate disagreement with the radiology recommendation, but must click to do so, making that decision part of the patient’s medicolegal record and closing the communication loop.
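A closed-loop alert of the kind described above could be modeled roughly as below. This is an illustrative sketch only; the class, fields, and method names are assumptions rather than ANCR’s actual data model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative model of a closed-loop alert; the class, field, and method names
# are assumptions, not ANCR's actual data model.
@dataclass
class Alert:
    accession_number: str
    level: str                                   # "red", "orange", or "yellow"
    finding: str
    follow_up_recommendation: Optional[str] = None
    acknowledged_at: Optional[datetime] = None
    provider_disagrees: bool = False
    disagreement_reason: Optional[str] = None

    def acknowledge(self, disagrees: bool = False,
                    reason: Optional[str] = None) -> None:
        """The ordering provider must acknowledge the alert to close the loop;
        disagreement with the recommendation is an explicit, recorded choice."""
        self.acknowledged_at = datetime.now()
        self.provider_disagrees = disagrees
        self.disagreement_reason = reason

    @property
    def loop_closed(self) -> bool:
        return self.acknowledged_at is not None

# Example: a yellow alert whose follow-up recommendation the provider declines.
alert = Alert("ACC12345", "yellow", "6 mm lung nodule on abdominal CT",
              follow_up_recommendation="chest CT in 6-12 months")
alert.acknowledge(disagrees=True, reason="nodule stable on prior outside imaging")
print(alert.loop_closed, alert.provider_disagrees)
```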

Now in progress, Andriole added, is a project to use ANCR to provide feedback and education to residents on call who provide radiology consultations at night. It requires just one or two extra clicks from the attending physician.

In summary, Andriole said, “ANCR automates the communication of critical test results. Surveys indicate that providers are satisfied with ANCR, but improved training in its use might be required.” More than 1,000 radiologists and ordering providers surveyed agreed that the use of ANCR reduced medical errors and improved quality of care.

Reporting Incidental Findings: Code Abdomen

Hanna M. Zafar, MD, MHS, is assistant professor at the Hospital of the University of Pennsylvania and Perelman School of Medicine, Philadelphia. She presented “Code Abdomen: Can BI-RADS Move South of the Diaphragm?”

“In making the radiology report a mineable data source, we’ve tried to combine standardized report language and codes with natural-language processing,” Zafar said. The project, spearheaded with Mitchell Schnall, MD, PhD, department chair, and Tessa Cook, MD, PhD, assistant professor, began in March 2012 with discussions among section heads on how to report non-emergent actionable abdominal imaging findings. It was refined using feedback from radiologists, generalists, specialists, and the hospital’s risk-management department.

“This large endeavor required the efforts of many people,” Zafar added, with full implementation beginning in August 2013 (and ongoing changes and expansions). The system uses codes to characterize unexpected masses in the liver, kidney, pancreas, and adrenal glands. When dictating all abdominal CT, MRI, and relevant ultrasound (US) studies, radiologists are required to use the Code Abdomen system to indicate how concerned they are that a mass could be cancer, for inpatients, outpatients, and emergency-department patients alike.

All masses for which the exam is technically adequate to permit characterization must be classified using nine subcategories, which map to benign, indeterminate, or suspicious. For any indeterminate masses, the radiologist must recommend a modality and time interval for follow-up (for example, CT in three to six months). A separate system, Code Cancer, is used for oncology imaging because there is a need to assess more than the four target organs of Code Abdomen (and because there is less chance that a cancer patient will be lost to follow-up).
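The categorization rule Zafar described might look something like the sketch below. The real Code Abdomen system defines nine subcategories, but their codes are not listed in this summary, so the placeholder codes and the validation helper shown here are hypothetical.

```python
from typing import Optional

# Hypothetical sketch of the Code Abdomen categorization rule. The real system
# defines nine subcategories; their codes are not given in this summary, so
# "C1"..."C9" are placeholders, not the actual codes.
CATEGORY_MAP = {
    "C1": "benign", "C2": "benign", "C3": "benign",
    "C4": "indeterminate", "C5": "indeterminate", "C6": "indeterminate",
    "C7": "suspicious", "C8": "suspicious", "C9": "suspicious",
}

def validate_coded_mass(code: str,
                        follow_up_modality: Optional[str],
                        follow_up_interval: Optional[str]) -> str:
    """Map a subcategory code to its parent category and enforce the rule that
    indeterminate masses must carry a follow-up modality and time interval."""
    category = CATEGORY_MAP[code]
    if category == "indeterminate" and not (follow_up_modality and follow_up_interval):
        raise ValueError("Indeterminate masses require a follow-up recommendation, "
                         "for example CT in 3-6 months.")
    return category

print(validate_coded_mass("C5", "CT", "3-6 months"))  # indeterminate
```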

“Standardized codes facilitate the identification of detailed clinical data (such as imaging findings),” Zafar said. “They make it possible to measure variability between organs, modalities, and radiologists.”

During the first two years of using this system, July 2013 through June 2015, the vast majority of patients who received imaging were coded as having either no masses or benign masses. For patients with indeterminate or suspicious findings, Zafar said, “Most masses were in the liver or kidney, and that’s what many of us who read abdominal imaging would expect. We can also look at codes in a single organ and track how those have changed over time.”

In the liver, for example, a spike occurred in use of the code for examinations technically inadequate to evaluate for focal masses after March 2014, when radiologists were first required to indicate a modality and timing for follow-up of indeterminate masses. Prior to this requirement, many radiologists coded lesions that were seen but incompletely visualized on unenhanced CT and portable US exams as indeterminate, regardless of whether the lesions demonstrated questionable or benign imaging features.

After March 2014, radiologists became more careful to reserve the indeterminate category for lesions that were incompletely characterized but showed questionable imaging features where seen, rather than merely incompletely visualized. Otherwise, they appropriately began to assign these studies as technically inadequate to evaluate for masses.

“We’ve made progress in moving toward measuring outcomes,” Zafar said. “As radiologists, we are interested in many types of outcomes, including imaging and surgery. We can also look at differences in healthcare utilization and costs among patients who have the same codes in the same organ to better understand resource utilization among these patients.”

One of the main outcomes studied by the group so far concerns patients at risk of falling through the cracks: those with imaging findings of possible cancer (e.g., masses indeterminate or suspicious for cancer) in whom follow-up is clinically appropriate but not completed. Incomplete follow-up in these patients can result in delayed or missed diagnoses.

In more than one-third of cases, Zafar reported, the literature shows that ordering physicians fail to acknowledge receipt of abnormal outpatient imaging results in the electronic medical record. In addition, about a third of physicians do not routinely notify patients of abnormal test results. Without a coding system to reliably identify all patients with imaging findings of possible cancer, “we simply don’t know the proportion of patients for whom follow-up might be clinically appropriate, but is not completed,” Zafar added.

She gave this example of workflow under Code Abdomen: A patient with a hyperechoic mass has a report generated that contains standardized radiology language. “The whole report is sent to the RIS,” she said, “but now, there’s a database that sits on top of the RIS. It can pull information out of the RIS in order to see which patients have codes of possible cancer and whether these patients have relevant completed imaging, relevant scheduled imaging or nothing at all. In this way we can identify how many patients did not receive follow-up.”
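In spirit, the query layer Zafar described could work along the lines sketched below. The record layout, field names, category labels, and sample data are illustrative assumptions, not the actual database schema.

```python
from datetime import date

# Illustrative sketch of the follow-up query described above; the record layout,
# field names, and sample data are assumptions, not the actual database schema.
POSSIBLE_CANCER_CATEGORIES = {"indeterminate", "suspicious"}

reports = [
    {"mrn": "001", "organ": "liver",  "category": "indeterminate", "report_date": date(2015, 1, 5)},
    {"mrn": "002", "organ": "kidney", "category": "benign",        "report_date": date(2015, 2, 1)},
    {"mrn": "003", "organ": "liver",  "category": "suspicious",    "report_date": date(2015, 3, 9)},
]
completed_imaging = {"001"}  # MRNs with relevant completed follow-up imaging
scheduled_imaging = set()    # MRNs with relevant follow-up imaging already scheduled

def patients_missing_follow_up(reports, completed, scheduled):
    """Return patients coded as possible cancer who have neither completed nor
    scheduled relevant imaging -- the group at risk of falling through the cracks."""
    flagged = {r["mrn"] for r in reports if r["category"] in POSSIBLE_CANCER_CATEGORIES}
    return flagged - completed - scheduled

print(patients_missing_follow_up(reports, completed_imaging, scheduled_imaging))  # {'003'}
```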

To explore the scope of the follow-up problem, the team performed a manual chart review of all patients with masses representing possible cancer during the first 15 months Code Abdomen was used. Medical students reviewed all patient charts, following a strict algorithm, to assign each patient to one of five groups: pathology follow-up; completed relevant imaging; other completed follow-up (e.g., specialist referral); those for whom follow-up was not clinically indicated; and those for whom there was no completed follow-up and no chart documentation as to why follow-up was not indicated.

Charts were reviewed for 675 unique patients. Of those, 164 had died; of the remaining 511, 37% had follow-up imaging, 8% had follow-up pathology, 3% had follow-up using other methods (such as laboratory testing), and follow-up was not clinically indicated for 6%, including those under hospice care. Zafar said, “For 23% of patients, we found no completed follow-up and no documented reason for lack of follow-up, a much larger number than we had expected.”

Further investigation, however, indicated that the 23% figure had been inflated. “Many radiologists were not using codes appropriately because use of the system had just begun,” Zafar explained. For example, initially, masses too small to characterize were being coded as indeterminate when, in fact, no follow-up was actually required. Another factor contributing to the high number was incomplete chart documentation by ordering providers, who did not always indicate why follow-up was not clinically indicated. In some cases, particularly for inpatients and emergency-department patients, no provider could be identified.

Zafar added, “An estimated 12% to 24% of patients for whom follow-up is clinically indicated are at risk of loss to follow-up, mainly because of system errors that automated systems can address.” For example, some patients never schedule the testing that providers order. Some providers never order follow-up imaging because of poor handoffs between specialists and the primary-care physician; or out-of-system outpatient providers do not receive the discharge summary or documentation of inpatient imaging.

Further investigation of these 511 unique patients by the team revealed that indeterminate or suspicious masses were reported in 590 organs. “We wanted to see how many actually had malignancies (indicated by surgery/pathology findings, available for 55 organs),” Zafar said.

Most of the masses detected were indeterminate, rather than suspicious, but most pathology findings, as expected, were available for suspicious masses. Overall, 75% of indeterminate and suspicious masses (and 78% of kidney masses) were malignant on pathology follow-up. “This ability to correlate codes with malignant likelihood is an illustration of the power of this coding system and what it allows us to do,” Zafar noted. “Through identifying cohorts of patients, we can start to track outcomes.”

Structured radiology reporting is feasible, but requires persistence. Zafar added, “There’s a long timeline, and buy-in is key—from both radiology and health-system leaders.” Health-system leaders are interested in these systems because this approach can be applied to monitor patients lost to follow-up outside radiology (for example, newly detected hypertension in patients seen for nonimaging tests), she explained.

“We now have more than 100,000 unique patients in our database, and 15,000 of them have imaging findings of possible cancer—but by using the automated system, we can home in on the approximately 5,000 patients who had no completed or relevant follow-up,” Zafar said. “That’s a much more manageable number of patients for whom to begin manual data inquiry.”

Tracking Follow-up Recommendations: ARRTE

Tessa S. Cook, MD, PhD, who also moderated the session, presented “Monitoring Patients With Noncritical Actionable Findings: Can Informatics Solve All Our Problems?” Cook is assistant professor and director of the Center for Translational Informatics at the University of Pennsylvania.

She noted that whether a patient gets follow-up can depend on how the findings are described in the radiology report.(4) “Tracking systems are really important because depending on how the findings and the recommendations are communicated, patients may or may not get follow-up that they need,” she said.

She also emphasized that informatics cannot stand alone in addressing follow-up deficiencies. She asked, “Can informatics solve all our problems? Solutions can be built, but at the end of the day, there is still a human component to this. In the typical workflow, there is a point where the radiologist picks up the phone or sends a message and tells the physician colleague that the patient needs follow-up because something in the study calls for it. The part we don’t have control over, though, is whether that message actually gets to the patient.” Radiologists don’t typically have access to the information that would tell them whether that patient really needs the recommended follow-up.

Cook added, “It’s difficult to take a free-text radiology report and identify a recommendation. It may be easier to extract a particular modality or timing for follow-up, if those are discretely identified, but even that, sometimes, can be expressed in descriptive fashion, throwing another wrench into the process. We chose a hybrid approach: a standardized coding system plus structured templates plus natural-language processing.”
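A toy example of why the hybrid approach helps: when a follow-up recommendation is dictated from a standardized template, a simple pattern can extract the modality and timing, whereas free-text phrasing defeats it and would need natural-language processing or manual review. The template wording and pattern below are assumptions for illustration, not the actual Penn Medicine template.

```python
import re

# Toy example of the extraction step: when the recommendation follows a
# standardized template, a simple pattern suffices; the template wording and
# pattern are assumptions, not the actual reporting template.
TEMPLATE_PATTERN = re.compile(
    r"Follow-up recommended:\s*(?P<modality>CT|MRI|US)\s+in\s+(?P<interval>[\w\s-]+?)\.",
    re.IGNORECASE,
)

structured = ("IMPRESSION: Indeterminate 2.1 cm liver mass. "
              "Follow-up recommended: MRI in 3-6 months.")
free_text = "Probably benign, but could consider reimaging at some point if clinically warranted."

for report in (structured, free_text):
    match = TEMPLATE_PATTERN.search(report)
    if match:
        print("modality:", match.group("modality"),
              "| interval:", match.group("interval").strip())
    else:
        print("no discrete recommendation found; would need NLP or manual review")
```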

Radiologists’ adoption of the system (three years ago) initially required considerable education, Cook recalled, and as the system evolved, repeated education was needed. She said, “We were telling the radiologists how to report and asking them to commit: Do you really think there’s something in this finding that makes you want to have the patient come back, or are you calling the finding benign?”

Shortly after the system was implemented, Cook explained, the team asked physicians several questions. When they were asked whether they wanted recommendations for follow-up in radiology reports, there was a discrepancy between generalists and subspecialists. The generalists welcomed the system, saying that it was exactly what they wanted to know: what to order and when to order it.

Cook said, “The specialists felt a bit differently. We have a large oncology-patient population, and these patients are carefully plugged into the care system. In the era of patient portals, patients are reading their radiology reports.”

A patient might well ask the oncologist why the imaging follow-up study recommended in the radiology report hasn’t been ordered, creating extra work for the oncologist (who didn’t consider that recommendation appropriate for this patient, but must now spend time explaining that decision). Cancer patients are unlikely to miss follow-up, so for oncologists, the benefit of imaging recommendations is lower than it would be for generalists. “On the other hand,” Cook said, “when we asked what information (if any) ordering providers preferred to receive in a radiology report’s follow-up recommendation, overwhelmingly, the answer was ‘give me both the modality and the timing,’ so we were encouraged by that.”

In addition to the coding/reporting system, the team built a system to mine the resulting data, extracting assigned codes to identify patients who needed follow-up testing and then determining whether those patients had undergone that testing. This system is the Automated Radiology Recommendation Tracking Engine (ARRTE).

It has three modules: the data-mining, compliance and follow-up components. Cook said, “In data mining, we go out directly to the RIS and indirectly to the EMR (through a data warehouse) to get information about the radiology report, the patient, and the ordering provider. We also validate that what we are extracting is correct.”

The compliance module monitors the use of reporting templates (mandated by the department chair): radiologists and trainees receive notifications when they generate reports that are missing the template or any of its required components. The follow-up module monitors whether patients have completed recommended follow-up and notifies the ordering providers if they have not.
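A compliance check of this sort might, in sketch form, look like the following; the required components listed here are assumptions, not ARRTE’s actual rules.

```python
# Minimal sketch of the kind of check a compliance module might run; the
# required components listed here are assumptions, not ARRTE's actual rules.
REQUIRED_COMPONENTS = ["code", "follow-up modality", "follow-up interval"]

def missing_components(report_fields: dict) -> list[str]:
    """Return the required template components absent from a draft report, so
    the radiologist or trainee can be notified to amend it."""
    return [c for c in REQUIRED_COMPONENTS if not report_fields.get(c)]

draft_report = {"code": "C4", "follow-up modality": "CT", "follow-up interval": None}
gaps = missing_components(draft_report)
if gaps:
    print("Notify the reporting radiologist; missing:", ", ".join(gaps))
```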

Cook added, “We still pick up the phone if there’s something that we’re worried about, but ARRTE lets us send messages (via EMR and email) to ordering providers for patients who haven’t completed their follow-up and don’t seem to be in the queue to do so.” The notification first asks whether the recipient is the correct provider to ensure follow-up for this finding (for example, so that the notification doesn’t go to an emergency-department physician who ordered the exam, but will not see the patient again). If not, does the recipient know who the correct provider is? If this is the correct recipient, is the recommended follow-up clinically appropriate? “If it’s not appropriate, why not? That question has allowed us to learn a great deal,” Cook said.
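The chain of questions Cook described could be expressed, roughly, as a small decision routine; the wording and outcomes below are illustrative assumptions rather than ARRTE’s actual logic.

```python
from typing import Optional

# Sketch of the escalating questions described above, expressed as a simple
# decision routine; the wording and return values are illustrative.
def triage_notification(is_correct_provider: bool,
                        knows_correct_provider: bool,
                        follow_up_appropriate: bool,
                        reason_not_appropriate: Optional[str] = None) -> str:
    if not is_correct_provider:
        return ("reroute to the named responsible provider" if knows_correct_provider
                else "escalate: no responsible provider identified")
    if follow_up_appropriate:
        return "remind provider to order the recommended follow-up"
    return f"close the recommendation; documented reason: {reason_not_appropriate or 'not given'}"

# Example: an emergency-department physician who ordered the exam but will not
# see the patient again, and who names the patient's primary-care physician.
print(triage_notification(is_correct_provider=False, knows_correct_provider=True,
                          follow_up_appropriate=False))
```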

The homegrown ARRTE sends messages into the commercial EMR. The physician sees a form stating, “You are the ordering provider or the only provider within the health system for this patient. There’s some pending follow-up. Can you click and tell us a little bit about the individual?” The physician clicks a link that returns to ARRTE and can review the particular patient and the focal-mass summary to see what the recommendation was. ARRTE’s follow-up monitoring keeps a running tally of patients who have completed a follow-up recommendation, who have follow-up pending, and who have not completed follow-up.

Because this was a required component of radiology reporting, the team monitored compliance from the beginning. First-pass compliance has consistently been more than 90%, with brief exceptions for system changes. Second-pass compliance (after noncompliant reports have been returned to the reporting radiologist or trainee) is between 98% and 99%, Cook estimated.

During the first year of ARRTE use (July 2013 through June 2014), 28,611 patients were scanned and 2,072 suspicious or indeterminate findings were reported. Follow-up imaging cleared 687 of these patients, cancer was diagnosed in 205 patients, and 273 patients were scheduled for follow-up.

Cook said, “The interesting group, to us, contained the 907 patients who had no imaging follow-up. At the time, that was all we knew about them.” Finding out why they had no follow-up is an ongoing challenge, she added. In some cases, it’s because their follow-up was received outside the health system, and there is no good way to obtain that information. In others, there is no valid contact information for the ordering provider (who might be outside the health system).

“For either group, a fully automated electronic system isn’t going to get you to your end point, and you really need a team of humans to perform oversight and validation, obtaining more information by telephone about both providers and patients. The lessons that the human team learns can then be used to improve the automated system, but there’s always going to be a team behind the system,” Cook said.

She added that the team sometimes debates “whether we should just pick up the phone and call the patient. We have many ways of contacting our patients now; we have patient portals and could certainly call them directly. That’s something that I think we will get to more quickly than we anticipate at present,” she concluded.

Whether or not direct notification is eventually mandated (either by health systems or through legislative/regulatory activity), it is clearly possible to use automated systems to increase the number of patients who actually get the follow-up that their radiology reports recommend. Standardized reporting is likely to improve outcomes tracking as well. With the implementation of stronger systems for radiology reporting, results notification and follow-up tracking, the cracks in healthcare should continue to narrow until fewer patients can fall through them.

Kris Kyes is a contributing writer for Radiology Business Journal.

References

  1. Khorasani R. Optimizing communication of critical test results. J Am Coll Radiol. 2009;6(10):721-723.
  2. Lacson R, O’Connor SD, Andriole KP, Prevedello LM, Khorasani R. Am J Roentgenol. 2014;203(5):491-496.
  3. Lacson R, Prevedello LM, Andriole KP, et al. Four-year impact of an alert notification system on closed-loop communication of critical test results. Am J Roentgenol. 2014;203(5):933-938.
  4. Al-Mutairi A, Meyer AN, Chang P, Singh H. Lack of timely follow-up of abnormal imaging results and radiologists’ recommendations. J Am Coll Radiol. 2015;12(4):385-389.