MIPPA Accreditation Countdown: ACR, IAC, or Joint Commission?
In 2008, when Congress passed the Medicare Improvements for Patients and Providers Act (MIPPA), the January 2012 accreditation deadline for providers of advanced imaging seemed distant, but it’s near enough to call for action now. For those advanced diagnostic imaging services (ADIS) providers still unaccredited, the choices available in accrediting bodies have also become clear since the law passed: The ACR®, the Intersocietal Accreditation Commission (IAC), and the Joint Commission are the only accrediting organizations approved by CMS to handle MIPPA accreditations.

Under MIPPA, only IDTFs and physician suppliers billing for the technical component of MRI, CT, PET, or nuclear-medicine exams under the Medicare Physician Fee Schedule (MPFS) need to be accredited. Hospitals that bill CMS for advanced imaging are not involved in MIPPA-mandated accreditation, since they are accredited in other ways.

The MIPPA accreditation requirements had long been sought by the ACR. Pamela Wilcox, RN, MBA, assistant executive director for quality and safety for the ACR, says, “The ACR was heavily involved in getting the accreditation language into MIPPA. Getting it passed in 2008 took two years. We’d been talking to Congress much longer, but this initiative was a two-year effort.” One goal of ACR lobbying was to impose uniform quality standards on nonradiologist imaging providers, such as cardiologists and orthopedists. “We believe if Medicare is paying for ADIS, then it ought to be getting high-quality images,” Wilcox says.

The Mandate

Mandatory accreditation by January 1, 2012, is the heart and soul of the MIPPA provision on imaging standards—section 135(a)—regarding outpatient providers. After that date, unaccredited ADIS providers billing Medicare for the technical component of the designated procedures won’t be paid. The law also lays out what all the accrediting bodies agree are basic compliance standards:
  • the qualifications of nonphysician personnel must be specified and met;
  • the qualifications and responsibilities of medical directors and supervising physicians must be spelled out and documented;
  • procedures to ensure the safety of staff and patients must be in place;
  • procedures to ensure the reliability, clarity, and technical quality of diagnostic images must be in place and must be verified;
  • there must be methods in place to assist patients in obtaining imaging records; and
  • imaging centers must have a way to notify (and must notify) CMS of any change in imaging modalities that takes place after accreditation.
These are the six standards that accrediting bodies must establish as having been met by applicants. For the ACR, the key provision has to do with the reliability, clarity, and technical quality of images. The IAC also stresses this requirement, but it relies more heavily on a review of physician-by-physician radiology reports and complex procedural protocols to make sure that standards are met. The Joint Commission takes what it calls a more holistic approach, focusing on facilities’ systemic characteristics and quality-control programs. It lets applicants determine their own methods of achieving image quality, although it does validate them.

There are major differences among the three accrediting bodies. Potential applicants should understand these differences in order to select the most suitable path to accreditation.

The ACR

ACR accreditation might be considered the gold standard by many in the industry because of its rigor in assessing image quality. While the IAC and the Joint Commission might dispute that characterization, it rests on what the ACR plays as its trump card: the quality of the images required to achieve accreditation. The ACR’s accreditation proceeds scanner by scanner, with each machine required to produce high-quality scans of actual patients, in addition to undergoing performance analysis using third-party phantoms. “The ACR’s program evaluates actual clinical images from actual patients. The IAC does that too,” Wilcox says, “but the IAC does not use phantoms. We think phantoms are critical. You can’t see if the equipment is operating optimally from patient films.” That’s because patients vary physically, and radiologists can’t always see machine discrepancies on clinical images. With phantoms, where the objects being scanned are identical, machine discrepancies do show up, Wilcox says. Settings that might lead to overexposure, for instance, would show up with phantoms.
The ACR requires that medical physicists check each modality annually; so does the IAC. The ACR, like the other accrediting bodies, is required to verify the qualifications of physicians and of nonphysician medical personnel. According to Wilcox, the ACR uses existing certifying bodies—the American Registry of Radiologic Technologists®, for instance—to determine whether qualifications have been met. “We don’t accredit individuals,” she says. Technologists and physicians at each facility must demonstrate that they meet certification standards. In something new for the ACR, MIPPA requires the accrediting body or CMS to conduct unannounced site visits, so the ACR is now doing that, Wilcox says. “In each modality, we do a couple dozen unannounced site visits a year,” she says. “That’s the stick hanging over your head.” This is a small number, of course, compared with the figures for the Joint Commission, where unannounced site visits are the heart of the program.

According to the ACR, applicants have 45 days after they apply for accreditation to submit images to be reviewed by ACR radiologists. Applicants—using CT as an example—must submit three exams of actual patients, one each from the head/neck, chest, and abdomen. The ACR sets forth a list of appropriate studies, and applicants can choose which of these exams they wish to submit. These studies, along with phantom images from the same scanner, are reviewed by ACR peer reviewers. The ACR advises allowing four to six months for these reviews. If the images are judged to meet standards, if the applicant’s radiology reports meet ACR guidelines, and if the other standards set forth by MIPPA are met, the ACR issues an accreditation. All the accrediting bodies accredit for three years. If an ACR applicant is found to be deficient or has failed, the applicant has 15 days to appeal in writing, resubmitting the original images only. Appeal results are issued in 30 to 45 days.
Wilcox notes that earlier campaigns by private insurers to require accreditation have already resulted in most MRI providers being accredited. For CT, she says, maybe half the pool remains to be accredited. “As of July, we had 4,500 facilities that have applied, and 4,000 of those have achieved accreditation. I guess the 4,500 is about half those that need to be accredited. There are another 4,500 to go on CT,” she says. Wilcox adds that the ACR has a big lead among the three accrediting bodies in numbers accredited, since it accredits most radiology clinics and outpatient providers that have radiologists doing the imaging. The ACR has a program for accrediting nonradiology providers, but many of those applicants, Wilcox says, turn to the IAC.

The IAC

The IAC is sometimes seen as an up-and-coming organization that is moving to challenge the ACR, particularly by creating more user-friendly online accreditation applications. To some, the ACR’s inflexible 45-day deadline for image submission following application might feel like signing up for boot camp. The IAC is more focused on nonradiology imaging providers, it accredits more machines found in physicians’ offices, and it is more representative of medicine at large, rather than of radiology. It also has the reputation, in some circles, of being more progressive in customer service.

That IAC accreditation fits well with nonradiologist imaging makes sense, considering the IAC’s history. Sandra Katanick, RN, RVT, CAE, is the IAC’s CEO. Katanick has been with the IAC for nearly 20 years and is its first and only executive director/CEO, she says. The IAC does not lobby or advocate legislation, Katanick says. It is built around 30 sponsoring organizations—medical societies and technical associations—that provide representatives to its board of directors. The sponsors provide no financial support. “We are totally funded through application fees,” Katanick says.
“Accreditation is our only business.” Early on, the ACR was an IAC sponsor, Katanick notes, but when the ACR expanded its own accrediting program, it dropped out as a sponsor because it had a conflict of interest. Katanick disagrees with descriptions of the IAC as an accrediting body mainly for nonradiologist imagers. “We accredit a high percentage of radiology-based clinics,” she says. She agrees, however, that many nonradiologist imaging providers do become accredited through the IAC. “The ACR program is phantom based and image based,” she says. “Our program is very end-results oriented. We look at images, but we consider the final report to be equally important. That’s what is used to treat the patient. We have very stringent reporting requirements, and we make sure all the reports contain the same components.”

Standardization is a major theme for the IAC. “Where we started in developing accreditation programs—and why—was the need for standardization, so we try to make sure that every reader in a facility reads in the same way,” Katanick says. The IAC requires its applicants to submit reports from different physicians so that all image interpreters are eventually reviewed. The IAC also requires applicant imaging providers (laboratories) to submit cases for review that demonstrate pathology, in order to show that physicians understand the relevant pathologies, Katanick says. “Four out of the five submitted cases have to have pathology,” she adds. “Our reviewers are accredited physicians and technologists. They rate the images, the reports, and the quality control in the laboratories.”

The accreditation that the IAC grants is not only modality specific, but specific to the types of studies for which the machine will be used. The IAC has separate divisions for each modality, and each division accredits in its own diagnostic area, Katanick says. CT, for instance, is handled by the Intersocietal Commission for the Accreditation of Computed Tomography Laboratories.
MRI has a similar division, as does nuclear medicine/PET. “The majority of our CT laboratories are in ear/nose/throat practices that do sinus and temporal-bone scans,” Katanick says. “The system they use is different—more like a volume cone-beam scan in dental radiography. The FDA has classified these systems as CT, so they fall under MIPPA.”

The IAC also is more protocol driven than the ACR. Katanick says, “In the protocol section of the standards, the laboratory must complete a written description. A lot of them are the OEM default protocols. We require a physician to be involved in changing manufacturers’ default protocols.” The IAC tracks appropriate utilization for its applicants, based on standards that it has developed for each modality. Katanick says, “We don’t say 70% of studies have to be appropriate, but we do require that the laboratories measure it. We say 30 consecutive patients have to be monitored as to whether the use is appropriate, inappropriate, or uncertain.”

Katanick says that, as a patient, she would want to see a facility with IAC accreditation because the IAC pays more attention to final reports and quality control than the ACR does, but either choice is better than no accreditation, she adds. “I would not have an imaging exam done by someone not accredited by the IAC or the ACR,” she says.

The Joint Commission

The Joint Commission, best known for its accreditation of hospitals and health systems, is a latecomer to the MIPPA accreditation of imaging providers. The Joint Commission’s programs focus primarily on outpatient imaging venues that have some connection to the hospital industry: That is the Joint Commission’s home turf. Michael Kulczycki, MBA, CAE, executive director of the Joint Commission’s Ambulatory Accreditation Program, reports that one area of focus “is hospital joint-venture imaging centers that bill under the MPFS.
We also get strong interest from freestanding imaging centers, especially those with a corporate entity involved, and from multiservice ambulatory providers—the large multispecialty group practices. We accredit a number of them. Those are the three we think of as natural fits. We would accredit single physicians if they came to us, but the likelihood of that is lower on the scale,” he says.

The Joint Commission accredits for MIPPA under its ambulatory-care division. “We have more than 1,840 customers we accredit in ambulatory care, and more than 100 of those are imaging customers,” Kulczycki says, adding that 40 of those are teleradiology providers. Teleradiology companies often build their customer base around hospitals.

The Joint Commission differs from the ACR and the IAC in that it always sends out an accrediting team to the applicant’s site. For the initial site visit, the applicant is given a readiness window, Kulczycki says, but all subsequent visits are unannounced. The Joint Commission has developed an elaborate set of elements of performance with which sites must comply. “Of the 1,100 broad ambulatory elements of performance, 20% are related to CMS standards for imaging,” Kulczycki says. Only three elements of performance were added for MIPPA accreditation, he adds, all in the area of the environment of care.

In addition to reviewing performance, the Joint Commission’s accreditation for MIPPA involves the use of patient-tracer techniques resembling those that the commission uses in its ambulatory accreditations, Kulczycki says. The patient is traced through the cycle of care, and all points of care are examined for quality, safety, and treatment.
“We also have system tracers,” Kulczycki says, “so at the typical imaging center, the site surveyors might spend two or three days looking at how the site collects data, its performance improvement, and its infection prevention as well.” The commission also looks at staff competence, training, credentialing, and privileging, Kulczycki adds.

All of the Joint Commission’s survey teams include at least one physician. “We have also hired ADIS specialists, radiologists, and medical physicists or radiologic technologists who get assigned to the team, in addition to the physician surveyor,” he says. If the applicant site is small, the physician surveyor might be on site for two days and the ADIS specialist, for one day, he adds. For larger imaging centers with multiple modalities at each site, the survey team might include more than one physician, along with specialists who would document at least one example of each modality, Kulczycki says. “The patient images will not be evaluated,” he says. “Our process is not a peer-review process. We want to know the machine-evaluation process that the facility has in place. We look at quality control. It’s the role of the ADIS specialist to assess whether the program of the provider is an effective one.”

Kulczycki notes that the Joint Commission’s accreditation regimen doesn’t end with completion of the initial survey. Each year, accredited sites are required to conduct periodic performance reviews. These are done electronically and might also involve phone contact. Kulczycki says, “They must do the periodic performance reviews. None of the others have this.” Kulczycki says that the need for MIPPA accreditation nationally probably affects 5,000 to 6,000 facilities, but he adds that the Joint Commission has not set a target for those that it might accredit. “We pursued the designation as an accreditation body to preserve service for our existing customers,” he says.
“We wanted to design our accreditation process so that it was better suited for multisite, multimodality facilities. That’s our sweet spot.”

Accreditation Costs

Now that accreditation is mandatory under MIPPA, providers must build the fees for it into their budgets. Those fees aren’t small. Roughly speaking, accreditation programs from the ACR and the IAC are comparable in cost (at $2,400 per modality accredited), with discounts for accrediting more than one machine at a site. These fees cover the three-year term of the accreditation.

The Joint Commission uses a more complicated formula. According to Kulczycki, for facilities with up to 5,000 annual patient visits, the fee is $8,510; for up to 50,000 visits, the fee is $9,550; and for up to 120,000 visits, it is $12,640. At levels of more than 120,000 visits per year, contract pricing is used. Branch-site fees are also imposed. According to the Joint Commission, for facilities with one to four branch sites, the fee for the on-site survey is $1,180. For five to eight branches, it is $2,335, and for nine to 12 branches, it is $3,510. Kulczycki notes that these fees are payable over the three-year span of the accreditation, with 60% paid the first year and 20% paid during each of the two following years. “If you’re a single center with a single modality, the others are more cost effective. For multimodality facilities, we’re at parity; for multisite, multimodality providers, our cost structure is less,” Kulczycki says.

Even though facilities that don’t bill CMS under the MPFS don’t need MIPPA-mandated accreditation, competitive pressures will probably force them to obtain it, some industry analysts predict. The MIPPA accreditation mandate for imaging centers might, therefore, impose quality standards on imaging providers more broadly than the law strictly requires. Higher quality, more broadly enacted, is unlikely to be a bad thing.

George Wiley is a contributing writer for Radiology Business Journal.
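For readers budgeting against the Joint Commission's tiered schedule, the fee arithmetic quoted in this article can be sketched as a short calculation. This is an illustrative sketch only, built from the figures cited here (base-fee tiers by annual patient visits, branch-site survey fees, and the 60/20/20 three-year payment split); the function names are hypothetical, and contract pricing above 120,000 visits is not modeled.

```python
def base_fee(annual_visits):
    """Joint Commission base accreditation fee by annual patient visits,
    per the tiers quoted in this article (illustrative only)."""
    if annual_visits <= 5_000:
        return 8_510
    if annual_visits <= 50_000:
        return 9_550
    if annual_visits <= 120_000:
        return 12_640
    raise ValueError("More than 120,000 visits: contract pricing applies")

def branch_survey_fee(branches):
    """On-site survey fee for branch sites, per the tiers quoted here."""
    if branches == 0:
        return 0
    if branches <= 4:
        return 1_180
    if branches <= 8:
        return 2_335
    if branches <= 12:
        return 3_510
    raise ValueError("More than 12 branches: not covered by the quoted tiers")

def payment_schedule(total):
    """Fees are payable over the three-year accreditation term:
    60% the first year, then 20% in each of the two following years."""
    return [round(total * share, 2) for share in (0.60, 0.20, 0.20)]

# Example: a center with 40,000 annual visits and 3 branch sites.
total = base_fee(40_000) + branch_survey_fee(3)  # 9,550 + 1,180 = 10,730
schedule = payment_schedule(total)               # [6438.0, 2146.0, 2146.0]
```

By comparison, under the flat ACR/IAC pricing quoted above, the same arithmetic reduces to roughly $2,400 per modality for the three-year term, which is why Kulczycki concedes the others are more cost effective for a single-modality, single-site center.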