Security and technology go hand in hand. To understand the security risks within the IoMT, it really helps to understand the technology. Because this book is aimed at a general audience rather than a technical one, we will only go deep enough to explain the general issues from a security standpoint. Please keep in mind that much of what is discussed here does not hold true across the entire spectrum of medical devices; each piece of technology has its own specific applicability.
Part of the reason it is so important to define what medical devices are is that the definition helps determine what kinds of technology are built into them. Just as there is no one-size-fits-all in the medical device world, the same is true for the IoMT. Any generalizations made here are not meant to apply across the whole spectrum of IoMT devices.
Electronic Boards
In a traditional computer there exists something called a motherboard. Without going into too many details, it is the central circuitry that connects the different parts of the system, such as memory, the processor(s), input and output functions, the hard drive, the monitor, and so on. What is important to consider from an IoMT perspective is that in some medical devices, many of the components of a traditional computer are baked directly into a single board. These embedded designs trade flexibility for size, cost, and power efficiency, so memory, processing power, communications, and so on are severely limited. Often the “operating system” (firmware, for the technophiles) is extremely limited and in some cases cannot be upgraded or patched at all.
Operating Systems
Operating systems are another aspect of many connected medical devices. In many cases these are the same operating systems you may have at home. The first chapter covered medical imaging devices running on Windows XP. Another report, from 802 Secure, stated that 83% of these systems run on outdated operating systems, a 56% jump from the previous year driven by Windows 7 no longer being supported.17 In some cases these systems can be and are updated, but in others the manufacturer will not support upgrades, making things a bit more challenging in hospitals.
Sometimes the issues of old operating systems are beyond the control of the manufacturers. The FDA can take as long as 5 to 6 years to approve a particular device.18 Many companies are on a 3-year cycle for upgrading hardware. Imagine the quandary device manufacturers are in: quite often devices are released to the public with known vulnerabilities, and the manufacturers cannot change operating systems midstream. To make matters worse, many medical devices have a 15- to 20-year life span.19 As of this writing, Windows XP has 741 known vulnerabilities,20 many of which will never be patched because the operating system is no longer supported. This is a huge challenge, because often the hardware cannot support a newer operating system either. The long life span creates further challenges of its own: it is simply impractical for manufacturers, developers, and operating system vendors to keep patching systems for 15-plus years. The result is an intersection of problems that, however inadvertently, creates an environment that is essentially a hacker's paradise.
Software Development
Secure software development is arguably one of the more important controls in information security, especially when data is accessed through that software. A poorly written application can mean the difference between securing the data and not securing it. The challenge with software development is that there can be ten ways, all legitimate, of accomplishing the same control; in many other parts of information technology, there is a button to press and you are done. There are also some historical and cultural challenges.
Typically, developers use something called the software development life cycle (SDLC). The SDLC includes methods for catching and eliminating defects, such as peer review, unit testing, line testing, and a host of other techniques. What is missing from the SDLC processes of organizations that are less mature from a security standpoint is security itself. Depending on when and where developers went to school, security may or may not have been part of the curriculum, so it is often up to organizations to train developers about security.
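To make the unit-testing idea concrete, here is a minimal sketch in Python. The dose-validation function, its limits, and the pump it implies are hypothetical illustrations, not any real product's code; the point is only that a routine test can catch a dangerous input long before the software ships.

```python
# A minimal sketch of the kind of unit test an SDLC process calls for.
# The function, the dose limits, and the pump are all hypothetical.

def validate_dose(dose_ml: float) -> float:
    """Reject dose values outside the (hypothetical) safe range."""
    if not 0.1 <= dose_ml <= 50.0:
        raise ValueError(f"dose of {dose_ml} ml is outside the safe range")
    return dose_ml

def test_rejects_unsafe_doses():
    # A negative, zero, or absurdly large dose must never reach the pump.
    for bad_dose in (-1.0, 0.0, 999.0):
        try:
            validate_dose(bad_dose)
            assert False, f"{bad_dose} ml should have been rejected"
        except ValueError:
            pass  # the expected outcome

def test_accepts_normal_doses():
    assert validate_dose(5.0) == 5.0

test_rejects_unsafe_doses()
test_accepts_normal_doses()
print("all tests passed")
```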
Training developers on security can be a little like herding cats. Doing it right means having several things in place. First, you need a set of guidelines and security standards for the team to follow. Even right-sizing the amount of information to give developers can be a daunting task: provide too much and it will not be retained; provide too little and the organization faces other challenges.
Another aspect of a good coding environment is tooling. There are fantastic tools on the market that can detect problems before a product goes into production, and using them in the right way is one way to reduce risk to the organization. However, the tools have blind spots when it comes to human logic flaws; spotting those is not something these kinds of applications are good at, and thus penetration tests are critical to shipping a secure product. A penetration test is a process in which both vulnerabilities and human logic flaws are discovered; while tools are used, there is a human aspect to the assessment process.
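Here is a hedged sketch, again in Python with hypothetical record and user names, of the kind of human logic flaw scanners tend to miss: code that is syntactically clean and "safe" to a tool, yet lets any user read any patient's record because no one wrote the authorization check.

```python
# A hypothetical records store; names and data are illustrations only.
RECORDS = {
    "patient-001": {"owner": "alice", "chart": "..."},
    "patient-002": {"owner": "bob", "chart": "..."},
}

def get_record(requesting_user: str, record_id: str) -> dict:
    # The flaw: the code checks that the record exists but never checks
    # that requesting_user is allowed to see it. A scanner sees valid,
    # injection-free code; a penetration tester sees that bob can read
    # alice's chart.
    record = RECORDS.get(record_id)
    if record is None:
        raise KeyError("no such record")
    return record

def get_record_fixed(requesting_user: str, record_id: str) -> dict:
    # The fix is pure human logic: enforce ownership before returning.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("not authorized")
    return record

print(get_record("bob", "patient-001"))         # the flaw: bob sees alice's chart
print(get_record_fixed("alice", "patient-001")) # the fix: only alice can
```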
Sadly, not all companies have the time or resources for a mature secure SDLC that includes all of the right tools used in the right way. In cybersecurity there is a principle called “Separation of Duties” (SoD). In this specific context, it is important to have some SoD between the software developers and the cybersecurity team that signs off that the software is ready for production. If developers check their own software, adherence to the requirements depends entirely on the person and the context, and there is no independent assurance that what goes onto the market meets FDA security requirements.
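As a simple illustration of SoD in a release process, consider the following sketch. The build name, the sign-off fields, and the release function are all hypothetical; the idea is just that the developer who wrote the code cannot also be the person who approves it for production.

```python
# A minimal sketch of a Separation-of-Duties gate in a release script.
# Build names and sign-off identities are hypothetical illustrations.

def release(build_id: str, developer_signoff: str, security_signoff: str) -> None:
    if not security_signoff:
        raise PermissionError("no security sign-off recorded")
    if security_signoff == developer_signoff:
        # The author of the code must not be their own approver.
        raise PermissionError("security sign-off must be independent")
    print(f"build {build_id} approved for production")

release("pump-fw-2.3", developer_signoff="dev_alice",
        security_signoff="sec_bob")  # succeeds: two different people
```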
Software development has shifted considerably over the years. What is common now is an iterative approach to software development, often organized around Scrum. Many years ago, software was created only in versions: the first version was 1.0, bug fixes could take it to 1.01, minor revisions could take the software to 1.1, and larger revisions would go to 2.0. While versioning is still common and highly practiced, when it comes to the cloud aspects of development an iterative model is much more usual. Iterative means the product continually improves as part of a Software-as-a-Service offering; it is also good business practice to continually evolve to meet the client's needs. The challenge is for that development to be continually secure, assuming the software was secure to begin with. The aforementioned penetration tests take time, and given that updates can now occur multiple times a day, it is a challenge for security to keep up.
Wireless
There are multiple types of wireless connections for medical devices. The full range of wireless connectivity includes Wi-Fi, near-field communication (NFC), cellular, Bluetooth, and occasionally RFID. All have their strengths and weaknesses, especially when you consider the potential 20-year life span.
Wi-Fi is particularly attractive for many of the remote monitoring capabilities built into connected medical devices. It offers an easy bridge to the internet, which means the system can be monitored in the cloud (more on the cloud in a while). From there, hospitals, doctors, and patients can be alerted at a moment's notice if there are any issues. As a result of COVID-19, these wireless technologies are gaining in popularity, especially as they relate to telemedicine. They are also important for some hospitals that have rooms that block cellular service (as a byproduct of blocking other signals).
Over Wi-Fi's comparatively long history there have been a great number of improvements, not only from a functional standpoint but also with regard to security. The precursor to Wi-Fi, WaveLAN, was created back in 1991, around the dawn of the World Wide Web. Wireless signals were not encrypted back then. It would not be until 1997 that Wired Equivalent Privacy (WEP) was created and included with Wi-Fi devices. WEP used a 10- or 26-digit hexadecimal key, but with modern technology it can be cracked in under a minute. What