Hacking a human heart
While the headline may conjure up romantic associations, the real background is more worrying. Researchers have discovered that the stationary transmitter used with a specific type of pacemaker suffers from a vulnerability which may have been exploitable remotely. The manufacturer released a software update to fix the flaw, and the FDA issued a safety communication advising patients and medical practitioners to take the steps required to update the software.

Vulnerabilities in medical hardware have existed for a while

There is an ongoing trend in the health care industry towards increasing connectivity. This connectivity promises benefits for patients and medical professionals alike. Instead of coming to a doctor’s office, which in itself can be challenging with some types of ailments, some routine tasks can be performed remotely without setting foot outside. Monitoring certain physical conditions remotely can save all involved a great deal of time. But as if to spoil the party, security folks keep pointing out that some systems may be insufficiently secured.

One such case started back in August 2016, when security researchers pointed out a vulnerability in one of the products of St. Jude Medical, one of the world’s leading providers of pacemakers, implantable defibrillators and other products. The company is currently in the final stages of being acquired by Abbott Laboratories. The system in question is an internet-enabled transmitter which is capable of remotely monitoring a patient’s heart as well as checking the status of the pacemaker. Reconfiguration tasks are also possible via this device. The prerequisite: the patient needs to be in physical proximity to the transmitter.

The researchers pointed out that it would be possible for an unauthorized individual to access certain data points on the transmitter, and therefore, by extension, the implanted device itself. Furthermore, an unauthorized user could exploit this vulnerability to reconfigure the implanted device, possibly causing it to operate inappropriately. Implanted defibrillators could be configured to administer unnecessary shocks which drain the device’s internal battery faster than usual. Depending on the device’s type and configuration, such shocks can cause major physical discomfort for patients, and the accelerated battery drain can also cause the device to fail at the moment it is needed most.

While there are no reported cases in which the affected devices were subjected to attacks, this incident makes it very clear that by now IT security has come to play a major role in the design of health care products and appliances. There is a lot at stake: the reputation of a renowned manufacturer may suffer significant damage if security flaws in its products are repeatedly reported. Financial interests also go hand in hand with reputation. Should the products of a leading manufacturer become known for shoddy security, this may have a direct impact on the company’s stock value. Most importantly, the lives of patients are on the line, as they rely on the manufacturers’ products to keep them alive in the very literal sense.

What has happened is not a first: back in 2015, a German researcher managed to disable the ventilation function of a network-enabled anesthesia device over the network. He later pointed out that some of the hardware adheres to a security standard dating back to the 1990s. Vulnerabilities in medical hardware have been cause for concern for a while; insulin pumps were found to be vulnerable a number of times, in ways that would have allowed attackers to remotely administer a fatal dose of insulin. The same is true for some drug pumps which are frequently used in hospitals. Any piece of modern, network-enabled medical hardware introduced today may be up to several years behind in terms of security. The reason is that the certification a product needs in order to be approved for sale is only valid for the configuration which the manufacturer has submitted for certification. Depending on the nature of the device, any change to that configuration requires re-certification.

Preventing vulnerabilities is not an easy task and patching is difficult

While it would be easy to point a finger at a manufacturer’s shortcomings, one must bear in mind that any piece of hard- or software which is to be used in a medical context needs to undergo rigorous testing and be certified before it can be sold. The criteria are stricter the more vital a system is to the survival of a patient. This certification process can take years and is very costly for manufacturers. Medical hard- and software also has very limited options when it comes to software updates. Oftentimes software updates and security patches for medical devices are few and far between, if they are provided at all.

With the advent of ransomware, a chilling possibility has emerged: that threat actors might one day be able to extort money from patients and health care facilities by threatening to disable life support systems. To address this challenge, manufacturers and security researchers have no option but to maintain a close collaboration and practice responsible disclosure to ensure that no lives are put at risk from software vulnerabilities. This collaboration should not be limited to the devices themselves, however: there are many cases in which hardware is indirectly connected to some type of online platform. Those online platforms are going to be targeted by criminals and might even end up being compromised, either by accident or by design. This is true for children’s toys as well as for life-sustaining hardware. For each product that is introduced, a careful evaluation must establish whether the convenience of an online connection outweighs its risks. In addition, certification processes must be sped up significantly: IT security now develops at a pace that makes lengthy certification proceedings inappropriate. It is also of critical importance for patient safety to build security directly into the design, something which, according to the researchers, the manufacturer failed to do at various points (see the research paper referenced above).

Security is not any kind of miracle pixie dust you can sprinkle onto your product after it has been programmed.

Paul Vixie, September 2015

Instrumentalization of product security in financial markets

The case in hand has yet another facet that may prove interesting in similar future cases:
the security assessment of the affected products was instigated by a security research company called MedSec together with Muddy Waters Capital, the latter performing "investment research". In other words, they look at figures, statistics and technologies to test how "waterproof" they are. The advance knowledge gleaned from the analysis was then used to short-sell the manufacturer's stock. As outlined, by the time the findings were made public, St. Jude Medical was in the middle of being acquired for 25 billion dollars. Lucrative advance knowledge about security issues in a product puts anyone in possession of those facts in a position to short-sell the stock and benefit from the price drop after making the findings public.
St. Jude Medical has filed a lawsuit against both MedSec and Muddy Waters Capital. Litigation is still underway.

The point is that without proper security built into devices, this precedent could well mean that a new player has entered the game, one that comes from the realm of finance and stock trading. There might be very similar cases in the future, in which finance teams up with security research to gain a lucrative knowledge advantage, short-selling a manufacturer's stock and then making the findings public. On one hand this could be beneficial, as potentially expensive litigation could serve as an incentive for manufacturers to actually improve security. On the other hand, a big ethical dilemma presents itself when companies profit from findings which could be life-threatening and which are not disclosed responsibly to the affected manufacturers.
Bug bounties are an incentive for security researchers to responsibly disclose security flaws. Depending on the severity of a vulnerability, a single bug report can fetch a six-figure payout. If it turns out that the approach Muddy Waters Capital and MedSec are currently taking is more profitable (pending the outcome of the lawsuit), this might have an impact on the way security research is used in the future.