Can the deployment of decision-support systems cut overutilization of advanced imaging exams without harming care quality? That, in essence, was the central question the Medicare Imaging Demonstration (MID) set out to answer when it began in October 2011 and wrapped up in September 2013.
Earlier this month the RAND Corp., commissioned by CMS's Center for Medicare and Medicaid Innovation (CMMI) to conduct the project, sent its report on the five-convener, 27-practice endeavor to Congress.
“[W]e found no evidence that the intervention led to anything beyond a small reduction—if any reduction at all—in advanced imaging volume,” the report authors wrote in their concluding discussion.
For some observers, the most telling conclusion had to do not with where the demonstration arrived but with how it got there. The report data showed that nearly two-thirds of orders—65%—in both the baseline period and the intervention period could not be gauged for appropriateness because they were "not covered by guidelines."
As National Decision Support Company CEO Mike Mardini sees it, this outcome points to a fatal flaw in the design of the demonstration.
“The doctors weren’t required to enter an indication,” Mardini told RadiologyBusiness.com. “If they were lazy, or if the patient’s specific clinical scenario wasn’t covered by the guidelines, the doctor could click ‘Other’ or just ignore the prompt for entering a reason for the exam altogether. All that was lumped into ‘not covered by guidelines.’”
Mardini’s conclusion: “This demo was doomed to begin with.”
Mardini’s disappointment with the MID largely owes to his keen interest in seeing clinical decision support (CDS) system technology continue to rise in radiology as performance-based reimbursement models squeeze radiology practices.
“How many people are going to point to this demo and say, ‘Decision support doesn’t work?’” said Mardini, adding that the “incomplete criteria and inconclusive results” are not representative of the field of CDS.
Mardini’s company markets ACR Select, a web-based system for applying the American College of Radiology’s trademarked appropriateness criteria. The Lewin Group adapted guidelines from ACR and 11 other specialty societies for the project, and three different delivery platforms were used by the five conveners.
The CMMI project looked at 11 advanced imaging procedures—SPECT myocardial perfusion imaging, MRI lumbar spine, CT lumbar spine, MRI brain, CT brain, CT sinus, CT thorax, CT abdomen, CT pelvis, MRI knee and MRI shoulder. These were selected based on high expenditures and utilization in the Medicare fee-for-service population, along with the availability of relevant guidelines for medical specialty appropriateness, according to the MID website.
Mardini pointed out that the demonstration looked at around 140,000 exams ordered by about 4,000 ordering physicians. “If you do the math, that equates to just a little over one order per month per physician,” he said. “NDSC's commercial clients do 200,000 decision-support transactions per month using the full ACR Select AC.”
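Mardini's back-of-the-envelope figure checks out if one assumes the demonstration's full 24-month span (October 2011 through September 2013); a minimal sketch of the arithmetic, using the approximate counts cited above:

```python
# Rough check of the per-physician order rate Mardini cites.
# Figures are the article's approximations; the 24-month span is an
# assumption based on the demonstration's Oct 2011-Sep 2013 window.
total_exams = 140_000        # approximate exams in the demonstration
ordering_physicians = 4_000  # approximate ordering physicians
months = 24                  # October 2011 through September 2013

rate = total_exams / ordering_physicians / months
print(f"{rate:.2f} orders per physician per month")  # about 1.46
```

That works out to roughly 1.5 orders per physician per month—consistent with Mardini's "just a little over one order per month per physician."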
Questionable as the MID methodology may have been, the data nonetheless showed some positive impact from decision support. “Compared with the baseline period,” the report authors wrote, “all but one convener showed an increase in the rate of appropriate ordering—with decreases in the rates of uncertain and inappropriate ordering—for final rated orders after physicians received DSS feedback on their orders in the intervention period.”
Conveners ranged from 75.1% to 83.9% in their rates of appropriate ordering during the intervention period, with rates of uncertain ordering between 11.1% and 16.1%, and rates of inappropriate ordering between 5.3% and 9.0%.
Meanwhile, among rated orders during the intervention period, between about 2% and 10% of initially inappropriate orders were changed or canceled across conveners—with the exception of one convener that had an 18% cancellation rate.
The conveners CMS selected to recruit physician practices for the demonstration were Brigham & Women's Hospital, Henry Ford Health System, Maine Medical Center-Physician Hospital Organization, the University of Wisconsin-Madison and National Imaging Associates.
“It’s unfortunate that there’s a lot of noise in the MID report and very little signal,” said Mardini. “It will be very easy for people to read into the report whatever they want to see, whether it’s that decision-support