Abstract
Ready or not, the Internet of things (IoT) is here. No longer just a buzz term, it will continue to grow at an unprecedented pace over the next few years, expected to reach over 25 billion connected devices by 2020. History shows that most fast-growing technology solutions focus on solving business problems first, with security as an afterthought. Unfortunately, IoT is following the same trend. Most IoT devices, apps, and infrastructure were developed without security in mind and are likely to become targets of hackers. According to some security experts, major cyberattacks against IoT devices are looming. According to the FBI, criminals can gain access to unprotected devices used in home health care, such as those used to collect and transmit personal monitoring data or time-dispensed medicines. Once criminals have breached such devices, they gain access to any personal or medical information stored on them, as well as the power to change the code that controls the dispensing of medicines or the collection of health data. This can result in major health issues and potential loss of life. Are organizations ready to protect themselves? What are the key vulnerable points? There are various steps that companies can take to raise the bar. In this article, we discuss the background, the issues, the attack vectors most liable to be exploited, protection strategies, and more.
As medical devices gain the same kind of pervasive network and Internet connectivity that permeates the rest of our world today, patient care has never been better. The interconnectedness of implantable, wearable and, soon, even ingestible devices with diagnostic machines and Internet-driven applications opens up a whole new world of health care flexibility. As a result, the reach of the practitioner grows, putting consistent observation and control at their fingertips. At the same time, patients gain greater peace of mind and access to care. But with all of these benefits comes a darker downside that the industry can’t afford to ignore.
As the complexity of connected medical devices accelerates, so too does the risk of inadequate cybersecurity design. As is true of any revolutionary technology, for all of the technological advancement that has been poured into digital health care devices over the past decade, commensurate cybersecurity defenses have not kept pace. The lag has led to a huge protection gap in the medical device market, because many of the added features and development processes introduced in recent years put these devices at higher risk of potential attack. That largely has to do with a concept that information security professionals call “attack surface area.” This refers to the available functions in a system that can be used by the bad guys to break the system, make it behave in a way that was never intended, or even take complete control of it.
Device features that go hand-in-hand with Internet of things (IoT) like always-on connectivity, simplified access to large data stores, improved integration with other devices and applications, and the use of affordable commoditized hardware are all extremely convenient for users and health care practitioners. But these features also make devices more convenient to hack. That’s because they make it easier to carry out remote attacks, to steal large amounts of data and to target many devices at once.
That’s not to say that the features themselves are necessarily the problem. But when combined with flaws in the bits and bytes that drive the back-end software programming and configuration of the device, these extra bells and whistles become dangerous.
It’s this fatal combination that ultimately enlarges the attack surface area of any given device. In particular, the combination of software vulnerabilities with ubiquitous connectivity is an especially toxic one because it enables hackers to find new and ingenious ways to attack devices without physically touching them.
Unfortunately, software vulnerabilities and misconfigurations are widely pervasive in today’s crop of medical devices, as evidenced by a growing body of research showing that a significant number of medical devices are eminently hackable. The weaknesses found so far present a grave risk to patient safety, privacy and general well-being.
Research shows that devices can be manipulated to administer fatal doses of insulin or drugs, to steal patient data or to otherwise malfunction. These scenarios are not science fiction or movie hacking. They’re very real and without intervention it is only a matter of time before medical device exploitation happens to disastrous effect. The following offers up some evidence from recent discoveries that shows how imminent the threat is and what the industry needs to do in order to reduce the overall attack surface area for modern medical devices.
A Recent History of Device Insecurity
The threat of medical device weaknesses hasn’t appeared overnight. The last 5 years have shown a steady acceleration of attention from both security researchers and attackers, who are figuring out how flaws can be exploited to seize control of personal devices, hospital diagnostic machines and other medical appliances. Perhaps no piece of research has been quite as influential as that done by Jay Radcliffe in 2011 on weaknesses found in insulin pumps. His presentation at the annual Black Hat USA security conference 1 helped stimulate interest by researchers and regulators in these hidden deficiencies, and has galvanized the security industry to step up its pressure on medical manufacturers and practitioners to improve their security postures.
A diabetic himself and a security professional by day, Radcliffe had turned his nighttime hobby hours to tinkering with his own insulin pump to see whether it was hackable. The culmination of his work was a live demonstration showing that it was possible to remotely deliver lethal doses of insulin to patients. 2
The presentation and ensuing media attention spurred action from the Government Accountability Office (GAO), which a year later released a report encouraging the Food and Drug Administration (FDA) to exercise better oversight of information security practices around medical devices. 3 This proved to be one of the drivers of the FDA’s 2013 guidance on medical device cybersecurity. 4 The FDA even brought in Radcliffe and other security experts to advise it on what actions it needed to take. 5
Nevertheless, there are still no binding regulations around device security. Meanwhile, the pace of new vulnerability discoveries has held steady in the 5 years since the insulin pump presentation. To this day there are regular announcements of newly discovered medical device security issues with similarly startling ramifications for patient safety. For example, just a year after Radcliffe presented at Black Hat, another researcher named Barnaby Jack took the stage at the same show and demonstrated how it was possible to exploit vulnerabilities in a pacemaker to deliver a deadly 830-volt shock using a laptop from up to 50 feet away from a potential victim. 6
And just last year, prolific health care security researcher Billy Rios announced that he found that serious vulnerabilities in a number of drug infusion pump models manufactured by Hospira made it possible for an attacker to remotely administer fatal drugs to patients without a trace. 7 The discovery was significant enough to trigger a security alert from the FDA warning about the risk and to spur further research on infusion pump software safety. 8
In the meantime, Rios has moved on to other research. This year he worked with another independent researcher, Mike Ahmadi, to find over 1400 cybersecurity vulnerabilities in third-party software used in CareFusion Pyxis SupplyStation devices, which are automated, networked supply cabinets used by hospitals.
Progress is often slow in shoring up these problems, and security experts are concerned that the found vulnerabilities aren’t being taken seriously by some manufacturers. With the exception of CareFusion, which Rios and Ahmadi reported took great leadership in responding to their research with swift action, each of the discoveries above was met by manufacturers with resistance, denials of the research evidence, and threats from lawyers. It’s a high-conflict arena and one which is starting to come to a head, as evidenced by a high-profile event this summer. In that case, researchers with medical device security firm MedSec Holdings Ltd put the screws to pacemaker manufacturer St. Jude Medical Inc in a highly unusual gambit they said was designed to get St. Jude to take disclosures by the security community more seriously. Rather than disclose a new problem it found in St. Jude’s Merlin@home devices to the company itself, MedSec took it to investment firm Muddy Waters Capital in the hopes that financial motivations would prod St. Jude to act more quickly to fix security problems in its devices. 9 In response, Muddy Waters took a short sale position on St. Jude and released an investment report advising investors to sell St. Jude stock because of the vulnerabilities.
Common Weaknesses
It’s clear that there’s a problem, but for those trying to understand the underlying issues it is important to look at what exactly it is that these security researchers are digging up. It can sometimes be difficult to get a clear view of the issues because of the depth and breadth of flaws found across different devices. But generally they fall into a handful of common categories, including coding defects, software design flaws, absence of tamper proofing, third-party vulnerabilities and network misconfigurations.
Coding Defects
These are deficiencies in how the software or firmware is coded that can allow attackers to create error situations that would help them put together an attack.
For example, in the infusion pump case above, the flaw that made the attack scenario possible was a buffer overflow vulnerability. This kind of flaw arises when the underlying code allows software to store more data in a temporary memory storage area (a “buffer”) than it is meant to hold, so that the extra information overflows into adjacent buffers, corrupting or overwriting the valid data there. An attacker can take advantage of this weakness by including malicious code within that “extra” data, instructing the system to perform an entirely new set of commands.
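The mechanics described above can be sketched conceptually. Real buffer overflows occur in memory-unsafe languages such as C and C++, which dominate device firmware; the Python simulation below (all names and values hypothetical) merely models two adjacent buffers in a flat memory region to show how oversized input can overwrite a neighboring value such as a dose setting.

```python
# Conceptual simulation of a buffer overflow. Real overflows occur in
# memory-unsafe languages like C; this sketch models adjacent buffers
# in one flat memory region to show how excess input corrupts a neighbor.

MEMORY = bytearray(16)          # flat "device memory" (hypothetical)
INPUT_BUF = slice(0, 8)         # 8-byte input buffer
DOSE_SETTING = slice(8, 16)     # adjacent buffer holding a dose value

def unsafe_copy(data: bytes) -> None:
    """Copies input without a bounds check, like an unchecked strcpy."""
    MEMORY[0:len(data)] = data  # writes past byte 8 if data is too long

MEMORY[DOSE_SETTING] = b"DOSE=002"
unsafe_copy(b"AAAAAAAA" + b"DOSE=999")  # 16 bytes into an 8-byte buffer
print(bytes(MEMORY[DOSE_SETTING]))      # dose is now attacker-controlled
```

A bounds check (`if len(data) > 8: raise ValueError`) in `unsafe_copy` is all it would take to close this particular hole.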
Buffer overflow vulnerabilities are only one of many kinds of coding defects that crop up in medical device software. Other common defects include SQL injection vulnerabilities that allow attackers to type snippets of code into input fields to break the system in a way that causes system malfunction or data leakage, cryptographic flaws such as using broken cryptographic algorithms or improperly validating certificates, and cross-site scripting (XSS) problems that open up the possibility of inserting scripts into the application that can bypass security controls.
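A minimal sketch of the SQL injection defect mentioned above, using Python’s built-in sqlite3 module and a hypothetical patient table: the string-concatenated query lets a classic `' OR '1'='1` payload dump every record, while a parameterized query treats the same input strictly as data.

```python
import sqlite3

# Hypothetical patient-lookup queries illustrating SQL injection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, record TEXT)")
conn.execute("INSERT INTO patients VALUES ('alice', 'private')")
conn.execute("INSERT INTO patients VALUES ('bob', 'secret')")

def lookup_vulnerable(name: str):
    # String concatenation lets input rewrite the query itself.
    return conn.execute(
        "SELECT record FROM patients WHERE name = '" + name + "'"
    ).fetchall()

def lookup_safe(name: str):
    # A parameterized query treats input strictly as data.
    return conn.execute(
        "SELECT record FROM patients WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # every record leaks
print(lookup_safe(payload))        # no rows match the literal string
```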
Software Design Deficiencies
Software misconfigurations and poorly conceived software design can also make for big medical device vulnerabilities. A prime example of that is the sloppy implementation of password and authentication functionality that makes it easy to circumvent login protections.
For example, one analysis of MDLink software created for automated external defibrillators found that the application was storing password files on the local hard drive in such a way that it was simple to delete all of the password profiles and circumvent their protection entirely. 10
The security research community has been particularly concerned about the prevalence of hard-coded passwords across a range of medical devices. These are essentially universal “back-door” passwords that are built into the code of the software for the manufacturer’s convenience to reset the device in maintenance and administrative situations. In many cases, the software is programmed in such a way that these superpower credentials can never be changed or removed. And what’s more, information about them can be found in readily-available operational manuals about the device. So all it would take would be for an attacker to get their hands on the manual to find a way to easily side step any kind of access control for the device. It’s an extremely prevalent practice in the medical device world.
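The difference between a hard-coded back-door password and a per-device, changeable credential can be sketched as follows. This is a hypothetical illustration, not any vendor’s actual mechanism; the safer pattern stores only a salted hash, provisioned per device, and compares it in constant time.

```python
import hashlib
import hmac
import os

BACKDOOR = "service123"  # baked into firmware; printed in the manual

def login_hardcoded(password: str) -> bool:
    # Anyone with the manual gets in; the credential can never change.
    return password == BACKDOOR

# Safer pattern: a salted hash provisioned per device and changeable
# in the field; only the hash is stored, never the password itself.
salt = os.urandom(16)
stored_hash = hashlib.pbkdf2_hmac(
    "sha256", b"unique-device-secret", salt, 100_000
)

def login_per_device(password: str) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)  # constant-time compare
```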
In one example, researchers Rios and Terry McCorkle found in 2013 a crop of 300 medical devices across 40 vendors afflicted with this vulnerability, leaving hardly a speed bump in the way of attackers bent on changing settings or modifying firmware on these devices. These included surgical and anesthesia devices, ventilators, drug infusion pumps, external defibrillators, patient monitors, and laboratory and analysis equipment. 11
Similarly, default and weak passwords littered throughout firmware and network settings make it trivial to bypass access controls exactly where they’re desperately needed. Scott Erven, a security researcher well known for his prolific discoveries of medical device vulnerabilities, relates one such example from a presentation on exhaustive studies of medical device assets he conducted at Essentia Health, a midwestern chain of medical facilities, while working there as head of information security.
“We found a couple of defibrillator vendors that use a Bluetooth stack for writing configurations and doing test shocks [against the patient] when they’re implanted or after surgery,” said Erven, who now works as security consultant and regularly conducts independent research on medical devices. “They have default and weak passwords to the Bluetooth stack so you can connect to the devices. It’s a simple password like an iPhone PIN that you could guess very quickly.” 12
Another common and potentially deadly software design flaw is the absence of encryption for data at rest in on-device storage or in transit across communication channels such as wireless connections. The seminal proof-of-concept academic research performed by Kevin Fu and his team of collaborators in 2008, a study that was the precursor to Barnaby Jack’s pacemaker hacking demonstration, highlighted the extreme dangers posed by pacemakers that transmitted wireless signals with no encryption. 13 This is a long-established problem for medical devices and one that only grows as more devices go wireless.
“Medical devices have adopted wireless technology to facilitate communication with programmer devices that issue commands to adjust treatment or retrieve sensor data.
“While wireless communication has made interaction with medical devices both easier and safer for doctors (for implanted devices, needles previously had to be inserted into patients to carry signals), it has also introduced new security risks,” wrote Favyen Bastani and Tiffany Tang of MIT. “A majority of medical devices implement little to no command authorization and encryption schemes, meaning that malicious attackers can remotely extract sensitive health information from medical devices, or even take control of the device to issue possibly fatal commands.” 14
The most recent MedSec report on the St. Jude vulnerabilities lacked much technical detail, but specifically mentioned that one of the big vulnerabilities the organization found was the use of unauthenticated and unencrypted RF protocols. So even 8 years after Fu’s research and 4 years after Jack first stepped onto the stage, pacemakers continue to put patients at risk through unencrypted wireless communication.
Absence of Tamper Proofing
The MedSec report was also illuminating because it highlighted yet another subset of very common design and coding flaws that as a collective whole are dangerous enough to warrant callout on their own. Namely, that is the absence of tamper-proofing measures across the hardware, firmware, and software stack of a device. Protecting devices from tampering and simple reverse engineering is necessary to make it more difficult for attackers to achieve “root” or total control over the device. Manufacturers usually need to take a number of standard steps to prevent tampering of devices, including:
protecting the identity of hardware, such as by obfuscating chip models and other information that could give attackers clues when probing devices
encrypting the binary of proprietary and off-the-shelf applications so that they cannot be easily extractable by an attacker seeking to reverse engineer the device and hunt for exploitable vulnerabilities in the code
disabling debugging and development mode mechanisms on devices in production so that attackers can’t make arbitrary changes to the code once the devices are out on the market
protecting the APIs (application programming interfaces) that connect end point devices to the back-end servers because hackers can easily exploit these and get to the source code
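The firmware-integrity aspect of the tamper-proofing measures above can be sketched as a boot-time signature check. Production devices use asymmetric code signing, where the private key never leaves the vendor; the HMAC below is a standard-library stand-in for that scheme, and all names are hypothetical.

```python
import hashlib
import hmac

# Stand-in signing secret; real devices would verify an asymmetric
# signature with a public key burned into the hardware.
SIGNING_KEY = b"vendor-provisioning-key"

def sign_firmware(image: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> str:
    # Refuse to run any image whose signature does not verify,
    # blocking tampered or reverse-engineered firmware builds.
    if not hmac.compare_digest(sign_firmware(image), signature):
        return "halt: firmware rejected"
    return "running"

official = b"\x01\x02\x03 official firmware"
sig = sign_firmware(official)
print(boot(official, sig))               # verified image runs
print(boot(official + b"patched", sig))  # modified image is refused
```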
According to MedSec, in the case of the Merlin@home devices it examined, none of these practices was in effect; the devices violate security standards in ways that “defy logic.”
Third-Party Vulnerabilities
Not only do vulnerabilities in the software developed by device manufacturers put devices at risk, but so too do those within third-party software used by the device. This includes integrated web applications, firmware and operating system platforms.
For example, many hospital medical devices are built with custom software and hardware running on top of commoditized Windows-based PCs. Often underlying weaknesses found in the Windows operating system can cause glaring flaws in the entire device that put patient safety and privacy at risk. And, unfortunately, many of the devices depend on older, unpatched, and vulnerable versions of Windows that can’t be updated or easily protected by the user lest they risk disrupting the functionality of the device itself.
One CIO of a medical center in Dallas, Aaron Miri, recently related a story about how this can put facility administrators in a bind.
“In my previous life, I had three brand-new medicine-dispensing machines shipped to me, brand new, still in the shrink-wrap,” he said in an April 2016 interview for MedPage Today. “We put them into a brand new unit we had just built. We turned them on. We plugged them in the network. Immediately, my systems started going haywire. Sure enough, these things came infected from the factory with malware, because their underlying operating system was Windows XP. This was just a year and a half ago.” 15
In a presentation last year, security researcher Mark Collao teamed up with Erven to examine the network of a very large US health care organization made up of over 12 000 workers. Collao noted that dependence on Windows XP, which is no longer supported by Microsoft, is all too common.
“[Medical devices] are all running Windows XP or XP service pack two . . . and probably don’t have antivirus because they are critical systems,” Collao said. The research he and Erven presented found that the organization had more than 68 000 medical systems exposed to online attack due to a range of vulnerabilities. These included 21 anesthesia, 488 cardiology, 67 nuclear medicine, and 133 infusion systems, as well as 31 pacemakers, 97 MRI scanners, and 323 picture archiving and communication machines. 16
Network and Communication Misconfigurations
Devices running on old versions of Windows no longer supported by Microsoft clobber medical groups with a double whammy because not only is the device itself vulnerable, but it also usually serves as a foothold into the network that makes it easy for attackers to target other devices.
In a July 2015 report, security researchers with a threat detection vendor called TrapX detailed how attackers are targeting Windows-based medical devices to create back doors to act as “key pivot points for attackers within healthcare networks.” 17
That’s consistent with Collao and Erven’s research, which in addition to analyzing the aforementioned large medical group also set up a honeypot that mimicked real-life MRI machines and defibrillators exposed to the Internet to monitor what would happen. In a 6-month time frame, the decoy devices attracted well over 55 000 successful unauthorized logins and absorbed almost 300 malware payloads, showing that the threat to these devices is very real and that attackers are already targeting them.
“These devices are getting owned repeatedly now that more hospitals are WiFi-enabled and no longer support arcane protocols,” Collao said.
Examining Root Causes
Clearly, there is a long, tough road ahead of the health care industry to address the breadth and scale of security issues troubling medical devices. There’s no silver bullet to the issue because the root causes behind them are complex and numerous.
Many of them are tied to the fact that the medical device manufacturing mindset and culture was established well before the IoT ever sprang into being. A large number of devices were first conceived in a world where they were rarely hooked into an internal hospital network, let alone onto an Internet connection.
As such, programmers working on these devices rarely worried about being vigilant in rooting out the kinds of vulnerabilities that could be taken advantage of by remote attackers. Because so many kinds of devices are developed in a highly iterative fashion, with new versions built upon the software bones of older ones, a huge body of legacy code riddled with forgotten vulnerabilities and software misconfigurations remains, often present in shiny new devices. But even newly developed devices are plagued by software flaws, because even though the risks have now manifested themselves, many manufacturers are not in the habit of keeping security top-of-mind throughout the development process.
What’s more, the speed of innovation and competition for new improvements has them focusing more on rushing products through production rather than taking the time to build security into the process. And even for those manufacturers who have identified the problem and are starting to improve their security practices, this kind of shift takes time.
Regardless, action needs to be taken for the sake of patients and medical facilities alike. Some of the earliest and most important steps include the following:
Building out a secure development lifecycle. Manufacturers who establish secure coding practices that include comprehensive security testing during development and postmarket vulnerability management programs can go a long way toward making a dent in this problem.
Tamper proofing devices. Particularly important for personal medical devices readily available on the market, tamper proofing is no longer a nice-to-have security feature. As MedSec emphasized in its Muddy Waters report, protecting binary code, hiding hardware information and guarding against debugging activities should be standard security measures.
Focusing on encryption. Encryption of sensitive data and communications is the most obvious low-hanging fruit for improving medical device security. But poorly executed encryption is almost as bad as none at all, so it will take careful implementation of solid key management protocols, solid password storage practices, and the most up-to-date cryptographic algorithms to get the most out of encryption deployments.
Protecting cryptographic keys. The protection of cryptographic keys bears particular emphasis when thinking about encryption. Even the most up-to-date encryption algorithms are rendered moot if attackers find a way to access the legitimate cryptographic keys that control the decryption process. Keys must be carefully guarded, because stealing them hands hackers the keys to the kingdom, and most applications in the wild do not secure their keys properly.
Correcting tone deafness toward the security research community. Rather than thanking security researchers when they disclose vulnerabilities, medical device manufacturers often instead try to sue them to bury the news of their poor security implementations. This never addresses the root problem, and the likelihood is high that next time it might not be the good-guy hackers who find the flaws. Manufacturers would be best served by establishing official security disclosure procedures and even considering bug bounty programs, whereby they pay independent researchers for valuable details about critical vulnerabilities found in the manufacturers’ devices.
Improving postmarket software updates and configuration control. Finally, medical facilities need manufacturers to untie their hands when it comes to instituting security best practices on machines that will go on their networks or in patients’ bodies. This means creating more rational software update procedures for devices and giving users greater control over how devices are configured. The days of unchangeable hard-coded passwords need to end, and fast.
Footnotes
Abbreviations
FDA, Food and Drug Administration; GAO, Government Accountability Office; IoT, Internet of things; XSS, cross-site scripting.
Declaration of Conflicting Interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Mandeep Khera is a full-time employee of Arxan.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was funded by Arxan.
