It may seem obvious, but building software to manage health information carries a huge risk factor. If information is lost or incorrectly recorded, it can have terrible real-world effects.
In 2015, an elderly man went to his GP with ankle pain (stuff.co.nz, 2015). He was prescribed the painkiller Voltaren, and advised to return in a month's time.
However, the patient was allergic to this medication. His medical record showed this: it noted that the medication should be avoided, as it had previously caused problems with his renal function.
Usually, the GP's software would show an alert if he tried to prescribe a conflicting medication. However, their practice was merging with another medical centre at the time, which was possibly causing computer difficulties. The GP stated he never saw any alert or warning.
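At its core, the alert the software should have raised is a simple lookup against the patient's recorded contraindications at the moment of prescribing. The sketch below is purely illustrative (all names and data are hypothetical, not taken from any real prescribing system):

```python
# Hypothetical sketch of a prescribing-time contraindication check.
# Patient IDs, drug names, and data structures are illustrative only.

# Map each patient ID to the set of drugs flagged in their record.
contraindications = {
    "patient-001": {"voltaren"},  # prior adverse effect on renal function
}

def check_prescription(patient_id: str, drug: str):
    """Return a warning string if the drug is contraindicated, else None."""
    flagged = contraindications.get(patient_id, set())
    if drug.lower() in flagged:
        return f"ALERT: {drug} is contraindicated for {patient_id}"
    return None

print(check_prescription("patient-001", "Voltaren"))
# → ALERT: Voltaren is contraindicated for patient-001
```

The logic itself is trivial; the case above shows that the hard part is ensuring the check actually runs, and fails loudly rather than silently, even while systems are being migrated or merged.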
The patient returned in a month, with pain in the joints of his right foot. He was diagnosed with probable gout, and advised to keep taking Voltaren.
Two days later he was admitted to hospital and diagnosed with renal failure. He passed away.
So who is to blame? The commissioner found that the GP failed to provide services with reasonable care and skill. He was also critical that the practice did not ensure its systems were fully functioning while the practices were merging.
However, the fact remains that the software failed, and a working alert could have prevented this death.
This tragic story shows the challenge of managing risk in healthcare, and the many ways failure can occur. Failure can include: software failure, hardware failure, human action, and financial failure.
Software development has spent a lot of effort mitigating the first two. In critical areas such as banking or large-scale data centres, software is tested rigorously, and designed to cope with hardware failure.
Human action can be accidental, such as a software developer introducing a bug in a software patch, or it may be malicious. I interviewed Micheal, the founder of a healthcare software company, and he told me a scary story. Their developer locked them out of their servers while the service was running and demanded more payment, putting a population of patients at risk. Legal action was taken, but initially they had to pay the money to restore access. In his experience, human action was one of their biggest risks. The other major risk for them was financial failure: without a company to support their software and service, it would not be able to function.
Institutions are also a potential target of attack. A Hollywood hospital was hit with a ransomware attack (Winton, 2016): ransomware is malicious software that encrypts all of a computer's files. The institution decided the fastest way to restore system functionality was to pay the $17,000 ransom (paid in Bitcoin), which they did.
They were forced to return to pen and paper while the system was down. This is not at all uncommon: Winton reports that according to federal records, between 2010 and 2016, at least 158 medical institutions in the US reported being hacked or having issues that compromised patient records.
Two months before this thesis was published, in May 2017, UK hospitals were hit with another ransomware attack (Brandom, 2017). This attack forced 16 hospitals to shut down. The virus was not specifically designed for hospitals, but spread rapidly there due to the use of out-of-date IT systems.
Developing software for healthcare inherently involves more risk than many other types of software. Losing medical information can be disastrous; losing a high score in a mobile game is not as impactful.
Risk must be considered and managed in every development decision, to protect institutions, patients, and developers alike. However, I do not believe risk should be a reason to impede innovation in healthcare software, as discussed in the next section.