Among the 65% of clinicians who ordered imaging exams during the Medicare Imaging Demonstration (MID) and gave no specific reason for placing the order, how many could have used clinical decision support (CDS) but chose not to? Was their rationale for bypassing the system reasonable? And how many were non-doctor proxies ordering on behalf of physicians?
Safwan Halabi, MD, director of imaging informatics and a senior staff radiologist at six-hospital Henry Ford Health System in Michigan, one of five MID conveners, briefly addressed those issues with RadiologyBusiness.com.
RadiologyBusiness.com: Some observers were taken aback to see that 65% of orders in the demonstration could not be evaluated for appropriateness—these were placed in the “not covered by guidelines” bucket. Did you notice any patterns driving orders into that category?
Halabi: There’s a huge gap in the guidance for a lot of follow-up studies for, say, cancer patients. A lot of these patients already had established diagnoses; they weren’t coming in with a clinical scenario such as, say, shortness of breath or abdominal pain. These were people with known metastatic disease or symptoms like ringing in the ears. So it was really those subspecialty-type things in oncology and in other specialties—those were the things we got complaints about from our physicians, saying, “I can’t find the disease or scenario that I’m looking for.”
RadiologyBusiness.com: Given that a relatively high percentage of orders could not be evaluated, was the demonstration project’s credibility somewhat compromised? Or will it still carry weight going forward?
Halabi: I think it will [carry weight], because what the sampling exposed was not only the holes in guidance but also the holes in implementation. That will help in determining what education and what change-management processes need to occur for this to be successful. A lot of the conveners were not prepared to implement this on a wide scale. It was delayed from the initial start date, but I think the biggest lesson learned is what you need to have in place to really make this successful education-wise, training-wise and also technically. The user interface has to be something that fits within the workflow but doesn’t exacerbate the inefficiencies that are already there.
RadiologyBusiness.com: In an earlier discussion, you talked about problems with non-physician clinicians doing a lot of the ordering. Did the demonstration shine a light on the pros and cons of that common practice?
Halabi: We knew a lot about proxy ordering from the start. And we had a lot of trouble deciding whether to give ordering providers that “out” from selecting guidelines. But the thing is, the guidelines and knowledge base were anemic. There were many holes there. It was almost impossible to fill in those gaps before implementation. So I think of this project as sort of the 1.0 version of decision support. It was very bare-bones.
One of the big issues is that even physicians, and even radiologists, don’t know all the guidelines or are not well versed in them. And I can’t imagine having a nurse practitioner or a physician assistant understand not only what guidelines exist but also what to search for. That’s asking a lot, especially if you haven’t had intensive education in this. We did train people in how to use the system, but we didn’t go to the step of saying, “this is how you should order” or “this is the methodology” or “this is the way that you select guidance and search for things.” The other aspect is that we didn’t have a lot of synonyms for the nomenclature that people would use on a day-to-day basis. If you look in the Yellow Pages, you might look under rubbish removers or garbage collectors to find the same service. We didn’t have that.
RadiologyBusiness.com: Will observations you made about decision support during the demonstration change anything you do at Henry Ford?
Halabi: Yes, absolutely. We are currently in that process. At the end of this demonstration, we actually went to a different electronic health record. We went from a homegrown system to the Epic system. And when we went to the Epic system, we decided not to continue decision support at that time, just because we felt it was not ready for prime time in terms of full integration with our CPOE system. Epic has really shaped up to integrate fully with decision support, so that physicians can change their order on the fly and get the guidance that they need and the scoring that we need to make sure they order the right study. A lot of these groups, including Epic and the [National Decision Support Company with its] ACR Select system, have gone ahead and learned from past experiences. And so we now have version 2.0 of decision support coming, and then the Epic 2014 iteration.
RadiologyBusiness.com: Did the demonstration project help build greater physician satisfaction with Epic?
Halabi: I think that where there is dissatisfaction, it comes from individuals who are used to their workflow and have really not been in the electronic space. We implemented pretty smoothly, because we were already up with an EHR. I think it really hampered the individuals who just didn’t want to learn a new system or a new format. But overall, I’m actually shocked that it went so smoothly. I think what made it so successful was that training piece. We took a lot of our own people, many FTEs, off-line to implement Epic. So its success had a lot to do with our having a vested interest in making sure everybody knew how to use it. We did see some backlash, but it was definitely not on the scale of places that didn’t have an electronic presence to begin with.
RadiologyBusiness.com: So the buy-in on Epic dovetailed with participation in the demonstration?
Halabi: It did. Basically, as we phased out of the demonstration, we went into the new electronic health record. We did not expose those users to decision support at that time, but we are planning on going live in March of next year with the Epic version of decision support. We’re happy that people already had exposure to it, and that was one of the main reasons why we went into the imaging demonstration. We were going that route, and this just catalyzed our opportunity to expose people to the appropriateness criteria and get people used to getting these best practice advisories as they order, whether it’s a lab or an imaging study or even going down a clinical pathway in the electronic system.
RadiologyBusiness.com: Was there anything in the final MID report to Congress that surprised you, pleasantly or otherwise?
Halabi: I was happily surprised to see in the conclusion that, with all the holes in guidance and 65% of orders deemed not scoreable, they still recommended that these systems be implemented, enhanced or installed in future iterations of electronic health records. And it’s very interesting how that dovetails with the mandate that, beginning in 2017, a decision support score accompany any order that we bill to Medicare. I was pleasantly surprised to see that, at the end of the day, they (CMS) recognized the potential of this process.
Going into the future, one big question we’re going to have to answer is: when you go to the hospital to get a test done, who do you want ordering those tests for you? We’re looking at the issue of over-ordering because maybe we’ve let things go without looking at who’s ordering and what they’re ordering for. You always hear that we get poor histories with these orders. It all ties into doing the right thing from the beginning so that you get a good product at the end.