Medical IT crap, the why

(Continuing from my previous post.)

I think the major problem is that buyers specify domain functionality, but not the huge list of “non-functional requirements”. So anyone fulfilling the functional requirements can sell their piece of crap as the lowest bidder.

Looking at a modern application, non-functional requirements are stuff like resilience, redundancy, load management, the whole security thing, but also cut-and-paste in a myriad of formats, a number of import and export data formats, the ability to quickly switch between users, the ability to save state and transfer user state from machine to machine, undo/redo, accessibility, error logging and fault management, adaptive user interface layouts, and on and on.
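To make concrete how much code even one item on that list drags in, here’s a minimal sketch of an undo/redo stack in Python. It’s purely illustrative: the class names and the dict-as-record are mine, not taken from any actual medical application.

```python
# Minimal undo/redo sketch: every user edit becomes a command object
# that knows how to apply and revert itself. Purely illustrative.

class SetField:
    """Set one field on a record (here just a dict), remembering the old value."""
    def __init__(self, record, field, new_value):
        self.record, self.field, self.new_value = record, field, new_value
        self.had_field = field in record
        self.old_value = record.get(field)

    def apply(self):
        self.record[self.field] = self.new_value

    def revert(self):
        if self.had_field:
            self.record[self.field] = self.old_value
        else:
            self.record.pop(self.field, None)

class History:
    """Two stacks: commands done and commands undone."""
    def __init__(self):
        self.undo_stack, self.redo_stack = [], []

    def do(self, cmd):
        cmd.apply()
        self.undo_stack.append(cmd)
        self.redo_stack.clear()      # a fresh edit invalidates the redo chain

    def undo(self):
        if self.undo_stack:
            cmd = self.undo_stack.pop()
            cmd.revert()
            self.redo_stack.append(cmd)

    def redo(self):
        if self.redo_stack:
            cmd = self.redo_stack.pop()
            cmd.apply()
            self.undo_stack.append(cmd)

record = {"name": "Jane Doe"}
history = History()
history.do(SetField(record, "allergy", "penicillin"))
history.undo()    # "allergy" is gone again
history.redo()    # and it is back
```

And that’s before persistence, concurrent users, or crash recovery even enter the picture.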

I’d estimate that all these non-functional requirements can easily be the largest part of the design and development of a modern application, but since medical apps are, apparently, never specified with any of that, they’re artificially cheap, and, not to mince words, a huge pile of stinking crap.

It’s really easy to write an app that does one thing, but it’s much harder and more expensive to write an app that actually works in real environments and in conjunction with other applications. So, this is on the purchasers’ heads. Mainly.

A day in the life of “medical IT security”

This article is an excellent description of some of the serious problems related to IT security in healthcare.

Even though medical staff actively circumvent “security” in a myriad of inventive ways, it’s pretty clear that 99% of the blame lies with IT staff and vendors being completely out of touch with the actual institutional mission. To be able to create working and usable systems, you *must* understand and be part of the medical work. So far, I’ve met very few technologists even remotely interested in learning more about the profession they’re ostensibly meant to be serving. It boggles the mind, but not in a good way.

Some quotes:

“Unfortunately, all too often, with these tools, clinicians cannot do their job—and the medical mission trumps the security mission.”

“During a 14-hour day, the clinician estimated he spent almost 1.5 hours merely logging in.”

“…where clinicians view cyber security as an annoyance rather than as an essential part of patient safety and organizational mission.”

“A nurse reports that one hospital’s EMR prevented users from logging in if they were already logged in somewhere else, although it would not meaningfully identify where the offending session was.” 

This one, I’ve personally experienced when visiting another clinic. Time and time again. I then have to call back to the office and ask someone to reboot or even unplug the office computer, since it’s locked to my account and no one at the office is trusted with an admin password… Yes, I could have logged out before leaving, assuming I’d even known I was going to be called elsewhere. Yes, I could log out every time I left the office, but logging in took 5-10 minutes. So screen lock was the only viable solution.

“Many workarounds occur because the health IT itself can undermine the central mission of the clinician: serving patients.”

“As in other domains, clinicians would also create shadow systems operating in parallel to the health IT.”

Over here, patients are given full access to medical records over the ‘net, which leads physicians to write down less in the records. Think this through to its logical conclusion…

Somewhat dumb credit card region lock

Visa has a neat feature where you can determine in which regions the card can be used. In my case, it’s “internet”, “Sweden”, “Nordic countries”, “Europe”, “North and central America”, “South America”, “Africa”, “Asia”, “Oceania”. You can set these through the credit card app (mine is from Volvo, of course).

So I disabled all regions except “Internet” and “Sweden”, planning on enabling other regions when I travel. 

Today I got a message from Netflix that they couldn’t charge my card. No explanation why. I called the card issuer and after some digging they explained to me that since I disabled “Europe”, Netflix got refused. Turns out that Netflix charges from region “Europe”, not “Internet”. More specifically from The Netherlands. Once I reenabled “Europe”, the charge went through.

Now, there are several problems with this. First of all, an internet-based service like Netflix should be in the region “Internet”. Secondly, if it isn’t in “Internet”, they should at the very least tell us from which region they charge. I had no idea Netflix charges from The Netherlands. How could I? It’s not reasonable to expect us to check with the card issuer every time this happens, and have them go dig through logs (it took them 10 minutes to find, so it wasn’t trivial).

Worst of all, this kind of thing implies that you’d better open up a lot of regions you’re not travelling to, since you don’t know from which regions different internet-based companies do their charging.

Having the card processor issue meaningful error messages, not just “sorry we failed”, would definitely help a lot, too.
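For what it’s worth, here’s roughly how I now picture the check, pieced together from what the issuer told me. It’s a guess at the logic, not Visa’s actual implementation, and the country-to-region mapping is mine.

```python
# Hypothetical sketch of the region lock. The region names match my card
# app; the mapping and the logic are my guesses, not Visa's real rules.

COUNTRY_TO_REGION = {
    "SE": "Sweden",
    "NL": "Europe",    # Netflix apparently bills from a Dutch entity
    "US": "North and Central America",
    # ...
}

def authorize(enabled_regions, merchant_country, card_not_present):
    """Return (approved, reason) for a charge.

    The surprise: card_not_present apparently does not matter here. An online
    charge is still classified by the merchant's country, never as "Internet".
    The reason string is the meaningful decline message I wish the processor
    actually passed back to the merchant and the cardholder."""
    region = COUNTRY_TO_REGION.get(merchant_country, "Unknown")
    if region in enabled_regions:
        return True, "approved"
    return False, f"declined: region '{region}' ({merchant_country}) is disabled on this card"

print(authorize({"Internet", "Sweden"}, "NL", card_not_present=True))
# (False, "declined: region 'Europe' (NL) is disabled on this card")
```

A reason string like that, passed all the way back to the merchant and the cardholder, would have saved me a phone call and the issuer ten minutes of log digging.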

Horrible little law

The Feinstein-Burr Senate bill is getting crazier by the day:

No, this slippery little act says that when a company or person gets a court order asking for encrypted emails or files to be handed over and decrypted, compliance is the law.

How compliance actually happens isn’t specified. They don’t care how user security was broken (or if it were nonexistent), and the senators are making it clear that from now on, this isn’t their problem.

Enemy number one

The US govt is quickly turning into corporate threat number one:

Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter. 

If this is really the case, if the US govt is tapping servers like this at any significant scale, then Apple implementing end-to-end encryption in most of their products must mean that the govt is losing a hell of a lot more data than just what they could get with a warrant.

The ability to recover data with a warrant is then just a marginal thing. The real problem is that their illegal taps stop working. Which means that the FBI case is a sham on a deeper level than it appears. The real panic is then about the server compromises failing. 

And, of course, the end-to-end encryption with no keys server-side is also the solution for Apple. Implants in the servers then have relatively little impact, at least on their customers. The server-to-client communications (SSL) would be compromised, but not the content of the messages inside.
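To make the “no keys server-side” point concrete, here’s a minimal sketch using PyNaCl. The library choice is arbitrary and this is obviously not Apple’s actual protocol; it just shows that a relay in the middle only ever holds an opaque blob, whatever happens to the TLS around it.

```python
# Minimal end-to-end sketch using PyNaCl (pip install pynacl).
# Illustrative only; not Apple's actual protocol.
from nacl.public import PrivateKey, Box

# Keys are generated on the endpoints and never leave them.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at 9")

# This is all a relay server, or an implant inside it, ever holds:
# an opaque blob, even if the TLS wrapped around it is broken.
server_sees = bytes(ciphertext)

# Only Bob's private key can open it.
assert Box(bob, alice.public_key).decrypt(server_sees) == b"meet at 9"
```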

If the govt loses this battle, which I’m pretty sure they will, the next frontier must be the client devices. Not just targeted client devices, which can already be compromised in hardware and software, but we’re talking massive compromises of *all* devices. Having modifications in the chips and firmware of every device coming off the production lines. Anything less than this would mean “going dark” as seen from the pathological viewpoint of the government.

Interestingly, Apple has always tended to try to own their primary technologies, for all kinds of reasons. This is one more reason. As they’re practically the only company in a position to achieve that, to own their designs, their foundries, their assembly lines, with the right technology they could become the only trustworthy vendor of client devices in the world. No, they don’t own their foundries or assembly lines yet, but they could.

If this threat becomes real, or maybe is real already, a whole new set of technologies is needed to verify the integrity of designs, chips, boards, packaging, and software. That in itself will change the market significantly.
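For the software half of that list, the basic building block at least already exists: compare what you received against a digest or signature published through a separate, trusted channel. A minimal sketch (the file name and the digest are made up):

```python
# Minimal software integrity check: compare an image against a digest
# published out-of-band. File name and digest are made up for illustration.
import hashlib
import hmac

PUBLISHED_DIGEST = "9f86d081884c7d65..."   # obtained through a separate trusted channel

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path):
    # constant-time comparison, mostly out of habit
    return hmac.compare_digest(sha256_of(path), PUBLISHED_DIGEST)

print("image OK" if verify("server-firmware.bin") else "image does not match")
```

Verifying that the chips and boards underneath actually match their designs is the far harder half, and that’s where the genuinely new technology would have to come in.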

The opportunity of taking the high road to protect their customers against all evildoers, including their own governments, *and* finding themselves in almost a monopoly situation when it comes to privacy at the same time, is breathtaking. So breathtaking, in fact, that it would, in extremis, make a move of the whole corporation out of the US to some island somewhere not seem so farfetched at all. Almost reasonable, in fact.

Apple could become the first corporate state. They would need an army, though.

As a PS… maybe someone could calculate the cost to the USA of all this happening? 

Even the briefest of cost/benefit calculations as seen from the government’s viewpoint leads one to the conclusion that the leadership of Apple is the most vulnerable target. There is now every incentive for the government to have them replaced by more government-friendly people.

I can think of: smear campaigns, “accidents”, and even buying up a majority share in Apple through strawmen and having another board elected.

Defending against number one, smear campaigns, could partly explain the proactive “coming out” of Tim Cook.

After having come to the conclusion that the US govt has a definite interest in decapitating Apple, one has to realize this will only work if the culture of resistance to the government is limited to the very top, that is, if eliminating Tim Cook would lead to an organisation more amenable to the wishes of the government.

From this, it’s easy to see that Apple needs to ensure that this culture of resistance, this culture of fighting for privacy, is pervasive in the organisation. Only if they can make that happen, and make it clear to outsiders that it is pervasive, will it become unlikely that the government will try, one way or another, to get Tim Cook replaced.

Interestingly, just in the last week, a number of important but unnamed engineers at Apple have talked to news organisations, telling them that they’d rather quit than help enforce any court orders against Apple in this dispute. This coordinated leak makes a lot more sense to me now. It’s a message that makes clear that replacing Tim Cook, or even the whole executive gang, may not get the govt what it wants anyway.

I’m sure Apple is internally doing everything it can to make certain the leadership cadre is all on the same page. And that the government gets to realize that before they do something stupid (again).

Protonmail

Protonmail, a secure mail system, is now up and running for public use. I’ve just opened an account and it looks just like any other webmail to the user. Assuming everything is correctly implemented as they describe, it will ensure your email contents are encrypted end-to-end. It will also make traffic analysis of metadata much more difficult. In particular, at least when they have enough users, it will be difficult for someone monitoring the external traffic to infer who is talking to whom and build social graphs from that.  Not impossible, mind you, but much more difficult.

If you want to really hide who you’re talking to, use the Tor Browser to sign up and don’t enter a real “recovery email” address (it’s optional), and then never connect to Protonmail except through Tor. Not even once. Also, never tell anyone what your Protonmail address is over any communication medium that can be linked to you, never even once. Which, of course, makes it really hard to tell others how to find you. So even though Protonmail solves the key distribution problem, you now have an address distribution problem in its place.
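And if you ever script anything against that account instead of using the Tor Browser, the same rule applies to that traffic too. Here’s a minimal sketch of forcing an HTTP client through a local Tor daemon; it assumes Tor is listening on its default SOCKS port 9050 and that requests is installed with its socks extra.

```python
# Route HTTP traffic through a local Tor SOCKS proxy.
# Assumes a running Tor daemon on 127.0.0.1:9050 and
# requests installed with its socks extra (pip install "requests[socks]").
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"   # socks5h: resolve DNS inside Tor as well
session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# Every request through this session exits via a Tor relay,
# so the destination never sees your real IP address.
print(session.get("https://check.torproject.org/api/ip").json())
```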

But even if you don’t go the whole way and meticulously hide your identity through Tor, it’s still a very large step forward in privacy.

And last, but certainly not least, it’s not a US or UK based business. It’s Swiss.