In a world of networked medical devices, it’s not hard to imagine a radiology-heavy cyberattack that is not only malicious but also ingenious. Picture this:
A fraudster adds pixels to a patient’s normal brain scan such that it shows a “tumor,” then bills a poorly protected insurer for any number of follow-up clinical “services.”
A stickup artist uses ransomware to freeze every clinical workstation within a hospital, including PACS servers, grinding all critical patient care to a halt.
A hacker tampers with the measured radiation dose delivered by a CT scanner to a hospitalized patient just to show that he can, triggering a life-threatening emergency.
If these scenarios seem like a stretch, what's no fiction is the string of serious breaches of personal health information occurring with alarming ease and regularity in hospitals, clinics and private physician practices across the country.
The scale of the threat came to the attention of the general public in 2017, when the WannaCry ransomware attack infected more than 200,000 computers across 150 countries. It hit Britain’s National Health Service particularly hard, forcing hospitals across the country to close down operating rooms and divert emergency procedures.
Considerably smaller but of potentially greater interest to radiology was the data breach of 800 patient records in early 2018 at Diagnostic Radiology & Imaging in Greensboro, N.C. The incident was touched off by a phishing attack that succeeded when an employee unwittingly divulged the names of patients along with descriptions of the imaging services they'd received.
There was also the 2014 theft of confidential health information on nearly 97,000 patients from NRAD Medical Associates on Long Island, N.Y., by radiologist Richard Kessler, MD, whose brazen raid required nothing more than downloading data onto an external hard drive.
The explosive growth in the interconnectivity of medical devices and systems—fueled by the commendable goal of sharing information among collaborating clinical teams and disparate healthcare sites—has exposed a viper’s nest of vulnerabilities around securing medical information that radiologists are struggling to effectively address.
On top of all that has come the rush to incorporate advanced applications of machine learning and other forms of AI into clinical and administrative workflows. In fact, the present AI revolution is only serving to compound the threat to health data privacy, which some thought would soon enough become a blip in the wake of the Health Insurance Portability and Accountability Act (HIPAA) of 1996.
The problem in 2019 is that computing technologies of unforeseen reach, speed and power may give bad actors highly refined ways to uncover patterns of care and exploit the results.
“We’ve seen a culture change from 10 years ago, as hospitals and radiology departments now recognize they can be hacked,” says Nabile Safdar, MD, MPH, vice chair for imaging informatics at Emory University School of Medicine in Atlanta. “But it continues to be a challenge for facilities to find the resources, whether they’re capital or human, to defend against these risks by creating good security hygiene, policies and culture. Radiology departments have a lot of devices connected to the internet, and the problem for many is they’re not sure where to start.”
Just how serious is the security gap within radiology departments? To find out, Oleg Pianykh, PhD, director
of medical analytics at Massachusetts General Hospital and an assistant professor of radiology at Harvard Medical School, worked with several colleagues to develop DPing, a network scanning tool to test remote radiology servers for their readiness to share medical image data with an external out-of-network computer.
One might say that, to head off an automated hack, they created an automated hacker. In tests, DPing used DICOM networking protocols and transfer syntax to speak the DICOM language to remote servers. The aim was to see if the servers would reply and engage in an actual exchange of medical data.
What the Harvard sleuths found was startling. Their scan of the public internet turned up a total of 2,774 unprotected radiology or DICOM servers worldwide. Of these, 719 were wide open to the outside world at their default DICOM ports and settings.
This meant the servers were willing to communicate with strangers by accepting a “handshake” as simply as your web browser often does when you navigate to a website. Access doesn’t get much simpler than that.
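A DPing-style scan starts with something even simpler than the DICOM handshake: checking whether a host answers at all on the well-known DICOM ports (104, or commonly 11112), which many sites never change from factory defaults. The sketch below shows only that first reachability step; the function name is hypothetical, and a full probe would go on to open a DICOM association and send a C-ECHO to confirm the server actually "speaks DICOM."

```python
import socket

# Well-known DICOM ports often left at factory defaults.
DEFAULT_DICOM_PORTS = (104, 11112)

def dicom_port_reachable(host: str, port: int = 104, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds.

    This tests reachability only; a DPing-style probe would next attempt
    a DICOM association and C-ECHO to verify the server speaks DICOM.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A server that accepts the connection and then completes the association handshake at default settings is exactly the kind of "open" server the Harvard team counted.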
The team detailed their findings in 2016 in the American Journal of Roentgenology. One of the most notable takeaways from their work was that the U.S. led all countries in the absolute number of open servers (346), although this was not entirely surprising since no other country had more DICOM servers than the U.S.’s 1,335. And while the U.S. was down the list at No. 15 in the ratio of open servers (26%), the investigators found this “hardly flattering,” noting it still constituted “an enormous security hole.”
Just as disturbing to Pianykh has been finding no improvement in security over three consecutive years of testing the vulnerability of digital radiology.
“The standards running contemporary digital medicine were conceived in the late 1980s with no security built in,” Pianykh tells RBJ, referring to DICOM and HL7 International. “That really hasn’t changed much. Even when security addenda came along years later, nobody implemented them—or sites that tried to catch up realized it was too late.”
The unfortunate result, he adds, is that security-hardened DICOM clinical data and devices are still "largely theoretical," having yielded instead to generic solutions and protocols such as firewalls, virtual private networks and identity access management. While manufacturers are partially to blame, he says, so are hospitals.
That’s because, when installing devices like scanners, workstations and digital archives—including those with DICOM software in nearly plug-and-play mode—many institutions don’t take simple and appropriate steps, such as securing port settings by immediately changing the factory defaults, which are often published and thus easily findable.
The continued state of unsecured medical devices was further highlighted in an analysis authored by researchers at the eHealth Research Group and Security Research Institute of Edith Cowan University in Australia and published in Medical Devices under the title "Cybersecurity Vulnerabilities in Medical Devices: A Complex Environment and Multifaceted Problem." The authors, Williams and Woodward, flatly concluded that medical devices "do not have basic security features."
Among the specific fault lines they found were a widespread lack of timely software updates and patches; legacy operating systems and software rife with misconfigurations and security holes; and poor security practices and a lack of cybersecurity awareness among users. "Compromised medical devices," they underscored, "can be used to attack other sections of the healthcare organization network."
Or, as Harvard’s Pianykh puts it, “as long as you have one device that’s not secure, it can be used as a back door to get into the rest of your network.”
Manufacturers have not been oblivious to the cybersecurity menace knocking at their door. Some formed, for example, the Medical Device Innovation, Safety and Security Consortium (MDISS), made up of medical device vendors, healthcare organizations, universities and other stakeholders, to develop technologies and solutions aimed at making devices safer and more secure. And the National Electrical Manufacturers Association (NEMA) offers several guidance documents relating to cybersecurity, including PS3.15 of the DICOM standard, and the Manufacturer Disclosure Statement for Medical Device Security (MDS2), which device makers can use to perform risk assessment for a customer.
But experts in the field are adamant in stressing that, over the years, cybersecurity has too often taken a back seat to expedience.
“Some device companies are starting to take security seriously,” says James Whitfill, MD, senior vice president and chief transformation officer for the HonorHealth system based in Arizona, “but many others are coming to the field late.”
What’s more, attempts to correct security breaches and flaws with patches and software updates have proven to be an imperfect solution because of the lag between patch release and application. The WannaCry ransomware, for example, was shown to attack vulnerabilities that had been patched two months earlier by Microsoft for computers running current versions of its Windows operating system. As Whitfill points out, many hospitals have CT scanners and other devices that are 10, 15 or 20 years old with out-of-date operating systems.
“Patching these devices and keeping them up to date is a big challenge, if not impossible,” says Whitfill, who chairs the board of directors for SIIM and is a clinical assistant professor at the University of Arizona College of Medicine.
Bottom line, patching tends to be a game of catch-up from the last cyberattack. And even when corrective patches are issued by software companies, the experts agree, they are often put away in drawers and ignored by end-users, providing a welcome mat for future cyberintruders.
AI Brings Benefits, Dangers
Artificial intelligence and machine learning have raised the cybersecurity stakes even higher. In practice, AI/ML is a double-edged sword. Some AI applications can help protect companies from cybercrime by sniffing out potential threats and determining if a security alert is real or bogus. AI can also serve to de-identify clinical test results that are being shared outside a practice or a hospital's walls. In so doing, it may reduce the need for high-priced and hard-to-find security professionals.
On the darker side, the new technologies significantly widen the exposure window of users and can easily be deployed for nefarious ends.
“What really matters is the size of the area of potential attack on data that you need to protect,” explains Christopher Roth, MD, associate professor of radiology at Duke University School of Medicine.
“With more data moving between centers for research and being shared between institutions and companies, not everyone shares the same degree of risk or the same security practices,” adds Roth, who presented at RSNA 2018 on the “bare minimum cybersecurity hygiene” for radiologists. “That could be risky for practices that don’t have in place great de-identification procedures for their own data.”
Here Roth is referring to a simple but easily overlooked reality:
Even data believed to be anonymized and stripped of personal information, including all names and addresses, may be vulnerable due to private DICOM tags or images of scanned documents or burned-in pixel data.
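Effective de-identification therefore has to go beyond the obvious name and address fields. Treating a DICOM header as a simple tag-to-value mapping (a real pipeline would use a library such as pydicom, whose `Dataset.remove_private_tags()` drops vendor-private elements), a minimal sketch might strip both a list of known PHI tags and every private tag, which the DICOM standard marks with odd group numbers:

```python
# Minimal de-identification sketch over a {(group, element): value} header dict.
# The PHI list here is illustrative, not a complete de-identification profile.
PHI_TAGS = {
    (0x0010, 0x0010),  # PatientName
    (0x0010, 0x0020),  # PatientID
    (0x0010, 0x0030),  # PatientBirthDate
    (0x0010, 0x1040),  # PatientAddress
}

def strip_phi(header: dict) -> dict:
    """Drop known PHI tags and all private tags (odd group numbers)."""
    return {
        (group, elem): value
        for (group, elem), value in header.items()
        if (group, elem) not in PHI_TAGS and group % 2 == 0
    }
```

Note that header stripping alone does nothing about text burned into the pixel data itself, which is why scanned documents and burned-in annotations remain a blind spot.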
This concern was given credence by a recent study led by University of California Berkeley engineer Anil Aswani, PhD, who found that AI makes it possible to identify individuals by learning patterns of daily step counts—as gathered by activity trackers, smartphones and smartwatches—and correlating that info with demographic data.
This finding suggested to Aswani that, while HIPAA regulations are designed to keep healthcare data private, what they leave under-covered is sometimes enough for bad actors to reassemble in illegal or unethical ways. Aswani believes this point of weakness will only grow.
Safdar likewise names re-identification as an issue of increasing concern for medical professionals. Seemingly disparate pieces of information, such as gender, age and medical imaging findings, can sometimes be recombined to unmask actual patient identities.
“There are companies that have amassed serious amounts of data on individuals and are ready to sell them,” Safdar explains. “It’s a demonstrated concern we need to start thinking about.”
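The re-identification risk Safdar and Aswani describe is, at bottom, a database join: if an "anonymized" release and an identified dataset share enough quasi-identifiers (age, sex, ZIP code), matching rows can unmask patients. A toy sketch with entirely made-up records, using a hypothetical helper name:

```python
def link_records(anonymous_rows, identified_rows, keys=("age", "sex", "zip")):
    """Match de-identified rows to named rows on shared quasi-identifiers.

    Returns (anonymous_row, matching_names) pairs; a single match means
    the 'anonymous' record has effectively been re-identified.
    """
    index = {}
    for row in identified_rows:
        index.setdefault(tuple(row[k] for k in keys), []).append(row["name"])
    return [(row, index.get(tuple(row[k] for k in keys), []))
            for row in anonymous_rows]
```

If only one person in the identified dataset is a 54-year-old woman in a given ZIP code, a "de-identified" imaging finding attached to those attributes is hers, name or no name.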
Mobile Means Susceptible
Electronic medical records have created perhaps an even broader danger zone for hospitals and radiology departments, owing to their constant exchange of data. While a boon to seamless patient care, EMRs are manna in the hands of hackers who can sell these stolen records on the dark web for prices that far exceed pilfered social security and credit card numbers.
“If I have somebody’s medical record, in all likelihood I have all the information I need to steal their identity,” says Whitfill, adding that stolen records can be used to submit false claims to insurance carriers.
Making the job of hackers immensely easier has been the proliferation of mobile devices, which radiologists routinely use to view images remotely, open sensitive files or share patient information. While this capability has given physicians the luxury of convenience, it also poses the danger of connecting to an unsecured network and allowing outsiders with or without malicious intent to snoop on or outright steal confidential data.
Another layer of exposure resides in home networks and Wi-Fi hot spots—ubiquitous at public venues like airports, hotels and coffee shops—which allow open access to your computer or phone. And because much of the information that courses through these public networks is unencrypted, it’s not difficult for a cybercriminal with a modern-day arsenal of tools to eavesdrop on your conversation or worse, then set up a rogue network at the same location, mopping up the internet traffic of anyone who unknowingly joins it.
Consider the consequences for a radiologist who sends, say, a patient image from her laptop to a colleague over an unencrypted or rogue Wi-Fi network. Even if the image is stored in the cloud, an experienced hacker could break into the cloud account and filch the sought-after goods.
Your Best Defense Now
There are steps physicians and healthcare sites can—and should—take to mount a potent cyber defense. Some, of course, fall to the IT departments of larger centers to initiate, like establishing firewalls; keeping PACS and other imaging systems on their own networks, isolated from other clinical information used in the practice; and employing multi-factor authentication to positively identify anyone attempting to access your network through challenges and responses, logins, smart cards or biometrics.
Other IT-driven safeguards might include backing up and safely storing data; installing and updating anti-malware and antivirus protection; setting up the network with encryption that meets the most stringent standards; and conducting annual security audits or assessments, preferably through an outside professional group, to pinpoint any vulnerabilities and how to correct them.
Still, as Safdar emphasizes, “It’s an oversimplification to think that security is just an IT task. A lot of security is about educating individuals who aren’t really expected to have deep IT knowledge. It’s educating them about phishing attacks or helping them to understand the risks of emailing in an unsecured way. And it’s about the importance of getting a business associate’s agreement when they deal with a vendor who’s going to handle patient data.”
Echoing that thought is Roth at Duke. “Hospitals spend millions on people, applications, processes and infrastructure for securing patient data onsite and offsite,” he says. “Still, we’re the ones who bring infected laptops or phones that automatically log into hospital networks housing protected health information, and we’re the ones who open benign looking malware when we’re on call and moving too quickly and not thinking. And we’re the ones with unencrypted laptops that get stolen and the ones using redundant, easily guessed passwords.”
In short, Roth argues, the majority of cyber risks arise from not doing the easy stuff well. “By some estimates, the number of attacks has more than doubled since 2010, so we have to be on guard more than ever,” he maintains. And that means “we need to take as many decisions and potential mistakes out of the hands of end-users as possible.”
Coordination and Consistency
That may boil down to maintaining good cybersecurity hygiene starting at the human level. For physicians connecting to the internet via outside networks, it might mean keeping their messaging and attachments secure through a downloadable app that can automatically erase all information within seconds, minutes or days. It might also mean using a virtual private network (VPN), a sort of tunnel within a public network offered through a VPN service provider that creates a safe, encrypted connection that only authorized users can access.
And in the case of a home network being used as an extension of your clinical office, good security hygiene means changing the default administrative password on your router to a strong, unique version that attackers can’t crack. No less critical is encrypting your wireless signals to prevent other computers in the area from intruding. (See sidebar article “10 Ways to Defuse Cybercrime.”)
For security to gain the greatest traction, however, Pianykh with Massachusetts General Hospital argues it needs to move beyond its current “do-it-yourself” approach to an industry-driven initiative able to proactively identify potential security breaches—not unlike the stress tests the government administers to banks to determine if they could weather a financial crisis.
“Why haven’t we done this already?” Pianykh asks. “There needs to be a nationally structured, consistent effort to fix our security holes and to properly educate people. A complex system such as healthcare demands nothing less.”
As others point out, tried-and-true partnering between the IT and clinical departments of a hospital or imaging center can also work wonders when it comes to protecting not just digital networks but also patients themselves.
“As physicians, we took the Hippocratic Oath to do our patients no harm,” says Whitfill, who still does some hospital rounds while overseeing administrative functions at HonorHealth. “That requires us to work closely with—and not around—our security partners. And they need to understand that we need solutions that are easy to use and compatible with our workflows. Only when we have that kind of relationship will we truly be able to come to grips with cybersecurity threats.”