The flip side of TDD

There is a problem with Test Driven Development (TDD) and security. Even though I’m a strong proponent of TDD and do my own development (largely) that way, I notice a strong conflict between good architecture and TDD. I’ve also seen this effect mentioned in the journals lately, so I’m not alone in this.

What happens is that TDD promotes early and minimal implementation, then iterating over it until you get everything to work. Fine, everyone loves that. But early “ready-to-run” code usually implies a simplistic architecture. Not necessarily, but usually, please note.

Now, you start out writing all these tests, ostensibly free from architecture and design assumptions, only specifying the actual requirements. But you aren’t as free from assumptions as you’d like: just by writing the test in a particular place, you’ve already made an architectural decision. Once the tests are in place and your code runs fine, you’re free to refactor and improve your code safely, in a localized way, class by class, method by method. But as soon as you make serious architectural changes, your fine unit tests are usually blown away and have to be refactored or even rewritten. That hurts, and humans try to avoid things that hurt.
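The effect can be seen in miniature in a hypothetical sketch (all names here are my own illustration, not from any real system): the test below specifies only a requirement, “a submitted order is parsed and stored”, yet by testing `OrderService` directly it also freezes the decision that parsing and storage live in one class.

```python
import unittest

# Hypothetical example: the 'simplistic' first cut that TDD
# rewards, with parsing and storage lumped into one class.

class OrderService:
    """Parses and stores orders in one place."""

    def __init__(self):
        self.orders = []

    def submit(self, raw_line):
        # Parse "name, quantity" and store the result.
        name, qty = raw_line.split(",")
        self.orders.append((name.strip(), int(qty)))
        return len(self.orders)

class TestOrderService(unittest.TestCase):
    # The requirement is architecture-neutral; the test is not:
    # it names OrderService and pokes at its internal list.
    def test_submit_parses_and_stores(self):
        svc = OrderService()
        self.assertEqual(svc.submit("aspirin, 3"), 1)
        self.assertEqual(svc.orders[0], ("aspirin", 3))
```

Splitting `OrderService` into a separate parser and a repository layer would satisfy the same requirement, but it would break this test wholesale, and that is exactly the disincentive described above.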

After a few incidents like that, you get gun-shy and tend not to change your architecture unless you really and truly have to. And there are very few instances where you really have to do that just to make your system work (which is the only criterion your stakeholders care about). So the architecture of TDD-developed systems tends to be monolithic, or at least simplistic and kinda smacked together, and guess what… there’s nothing more important than good architecture for secure systems. Forget about buffer overruns and unsafe APIs. It’s the architecture that makes your system fragile or resilient. The rest is just dust and filling. You can make systems run fine and bug-free without a solid architecture, but you can never make them really robust.

Personally, I refactor my architecture anyway, over and over, but only because I’m obsessive-compulsive; people tend not to appreciate it (until years later, that is). Every other external incentive there is tells me not to do it. Timeboxing also increases the pressure to leave the architecture unchanged.

So I think we’re in the process of discovering the Achilles heel of TDD: even if the code is great, it leads much too easily to a poor and insecure architecture. I think we need to take that seriously and come up with answers to fix this problem.

And, no, BDUF isn’t the answer.

Story time

After reading about the goddam awful handling of a student who hacked a university system, I felt that a little story could help tip these people off on how to handle students without necessarily breaking them and destroying their future.

Kid sits in his dorm room, bored out of his skull trying to find any excuse to not cram for tomorrow’s exam. Starts fiddling with the student registration system (or whatever), finds a glaring hole, pulls up a mate’s records for kicks and prints them out. Writes a little email note to the IT admin, going something like:

“Hey, your SRS sucks. I can tell you that anyone can see anyone else’s data without even breaking a sweat. I can prove it if you like. Please fix.”

Reply from IT admin: “Kid, whatever you’re doing, stop it right now and get your ass down here to my office. Together, we’ll see if you’re right and if you are, we’ll do something about it. How’s that? We could have lunch afterwards, but on one condition: don’t touch it again in the meanwhile. Are we agreed?”

Kid: “Hey, Mr Simpson, sure thing! I’ll be there, no fail! And I promise not to touch a thing until then. What’s for lunch, btw?”

Next day.

Mr Simpson: “Ok, Kid, show me what you’ve got. Ummm…. ok, yes, that’s bad. Let’s see what we can do. I’ll try to find a fix for this, and I’ll get back to you when I’m done so we could go over this again together. Give me a week, and if you don’t hear from me, remind me, ok?”

Kid: “Yes, sir! I’d be glad to help.”

Mr Simpson: “One more thing, kid. You already saw some information you’re not supposed to see. You have to promise me to destroy it and forget it. On your mother’s head. Will you?”

Kid: “What info? I’ve already forgotten.”

Mr Simpson: “That’s my boy. The second thing is that you actually went too far and I’m going to turn a blind eye to that. The next time you suspect something’s amiss, you come to me first, and we’ll hack the system together. I can do that without having the SWAT team circle the building, but you can’t. You were lucky this time, but who knows about next time, right?”

Kid: “Yes, Mr Simpson, I think you’re right.” A bit of cold sweat enters into the picture.

A couple of days pass. Mr Simpson asks for Kid to come down to the IT office again.

Mr Simpson: “Can we go through what I did to the system and see if you see anything wrong with it? But you have to promise (or sign an NDA or whatever) that you’ll keep whatever you see to yourself. Ok?”

They go through what has been fixed and what has not. Then Mr Simpson delivers an exit sermon:

“Kid, this time you were lucky. You did actually trespass into the systems. Yes, I know you meant well, but this is really dangerous. Not so much to the system, it’s crap anyway, but to your future. Places like this university are full of mean, lazy bozos who would much rather call the cops on you than listen to what you have to say. So this is my advice to you for the future, in and out of university: if you see a potential security problem with a system, stop exploring it as soon as you have a decent suspicion, long before you have proof. Contact whoever is in charge of the system, and if they’re cooperative, do as we just did. If they’re not, view them as a direct threat to your career: don’t touch another thing, don’t make yourself a suspect in the breaches of that system that will inevitably occur. Just step away quietly and save yourself for another battle. Enjoy the show from a distance when that system goes under.”

Kid: “But I didn’t know how to handle it, I was sure you people wouldn’t want to listen. Couldn’t you put up a policy about this somewhere?”

Mr Simpson: “You’re a bright lad, Kid, I’ll get right on it.”

And so he did, he formulated a policy that popped up whenever a student accessed the system, and it went something like this:

“If you have concerns regarding the security of this system, please contact Mr Simpson at IT support. Please don’t hack us. Please don’t make us call in the cops. Let us work out these things together, for our sake, for your sake, and for the good name of the university.”

And they lived happily ever after.

PS: Mr Kid went on to become a CISO and had a similar policy introduced in his multinational. He then went on to win the Nobel Peace Prize in 2016. He also became famous for introducing a new, highly secure, layered and token-based database access method that changed database security programming forever.

Another freedom bites the dust

The Swedish parliament just passed a bill that allows the Swedish military to monitor anyone’s communications over the net without a court order. It also allows building maps of interrelationships from traffic information, again without any court order. It kind of beats anything the US administration did even at its worst, except that it’s actually a law, so the government here doesn’t need to break the law to do it. How convenient.

It has been said that it was created under pressure from our uncle in the west, since so much former Eastern Bloc traffic passes through Sweden. I’m inclined to believe that, but I see no reason why our government can’t decide for itself, so the responsibility for being pussies is all on the Swedish government.

I can see only one upside to the whole thing: anonymous proxies like Relakks, new methods of hiding traffic information, message encryption, etc., will get a real boost. This is a country of contrarians and inventors, so my hopes are high. Even some regular good citizens are starting to ask me how to make life difficult for the buggers. That’s a very good sign.

I think, or rather hope, that this was a crucial mistake by the “who needs privacy” crowd, creating a real, legitimate reason to start fighting government initiatives like this. Sweden has no 9/11 to use as an excuse. Sweden has no “boys in Iraq” to support. There is very little unconditional patriotism or flag waving. There’s not even any terrorism here to defend against. IOW, there is very little emotional argument to quiet the crowd with, if the crowd gets upset.

OTOH, to get Swedes visibly upset about anything is pretty hard to do, so we’ll have to wait to see if this particular leather boot does the trick or not.

See http://www.thelocal.se/12514.html (in English)

Update: another excellent article about it in The Intelligence Daily.

A call to (telescopic) arms

Medical technology is evolving, and one area where a lot is happening is robotic surgery. By moving the surgeon a couple of feet away from the operating table and into a comfy chair, we accomplish a few goals: a relaxed surgeon, a better view using keyhole techniques, filtering of movements, etc. But it’s only a step on the way to telesurgery, and that is where the real benefits reside. Imagine, for instance, being able to get the best surgeon for a procedure regardless of location, any time of the day or night. Or being able to get any surgeon at all, for that matter, to operate at the scene of an accident or in a little village somewhere. All you need is the robot on the spot and a good network connection. And that’s where we run into trouble.

The requirements on the network we need for telesurgery are pretty horrific, and no current network, as far as I know, is designed to fulfill any such requirements. The network needs to be absolutely secure, and by that I mean it needs to be very resistant to breakage and to delays, and it must ensure data integrity at all times. It also needs to protect privacy, of course, but that’s almost an afterthought.

For us security people, telemedicine networks are a new challenge, and one I think we should spend more effort specifying and creating. For instance, we need to find ways to ensure the following characteristics:

Max latency

For instance, we know that turnaround delays above a hundred milliseconds or so make telesurgery very difficult and dangerous.
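As a minimal sketch of what monitoring that budget could look like (my own illustration, not a real telesurgery stack), the control channel can time each probe against the roughly 100 ms turnaround limit mentioned above:

```python
import socket
import threading
import time

# Assumption: a simple request/echo probe over an already-open socket.
LATENCY_BUDGET_S = 0.100  # ~100 ms turnaround limit for telesurgery

def measure_rtt(sock, payload=b"ping"):
    """Send a probe and time the echo using a monotonic clock."""
    start = time.monotonic()
    sock.sendall(payload)
    sock.recv(len(payload))
    return time.monotonic() - start

def _echo_once(sock):
    # Stand-in for the far end: echo one probe back.
    sock.sendall(sock.recv(4))

# Loopback demo: a socketpair with an echo thread on the far end.
near, far = socket.socketpair()
t = threading.Thread(target=_echo_once, args=(far,))
t.start()
rtt = measure_rtt(near)
t.join()
assert rtt < LATENCY_BUDGET_S  # trivially true on loopback
```

A real network would need this measured continuously, with the robot refusing or smoothing commands the moment the budget is exceeded.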

Redundancy and resilience

Obviously, we don’t want the network to go AWOL during an operation. And if it does, we need to fail safe. Both the surgical instruments and the procedure as such need to fail in a safe manner.

Integrity

Data integrity is of the utmost importance. When we want a 3 mm incision, we don’t want it to turn into 3 meters by accident.

Authentication

We want to make sure only the right surgeon is on the line.
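The integrity and authentication requirements can be sketched together. The following is a hypothetical illustration (names and framing are mine): each surgical command travels in an authenticated envelope, where a MAC over a sequence number and the payload gives integrity and replay detection. The pre-shared key is an assumption standing in for proper mutual authentication, which in a real deployment would come from certificates on both ends.

```python
import hashlib
import hmac
import struct

# Assumption: key exchange and surgeon authentication happen elsewhere.
KEY = b"pre-shared demo key"

def seal(seq, payload, key=KEY):
    """Wrap a command: 8-byte sequence number, 32-byte tag, payload."""
    header = struct.pack(">Q", seq)
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + tag + payload

def open_envelope(blob, expected_seq, key=KEY):
    """Verify integrity and sequence before releasing the command."""
    header, tag, payload = blob[:8], blob[8:40], blob[40:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    (seq,) = struct.unpack(">Q", header)
    if seq != expected_seq:
        raise ValueError("unexpected sequence number (replay or loss)")
    return payload
```

A flipped bit in “3 mm” would make the tag check fail, and a replayed envelope would fail the sequence check, so the robot can refuse the command and fail safe instead of acting on corrupted data.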

Discussion

The above are just a few issues I could think of right off the bat. The Internet Protocol, for instance, is well suited to the resilience requirement, but its lack of guaranteed delivery time is a problem. I think we need a separate network with the desired functionality and characteristics, one that may in part be based on current protocols and infrastructure. I don’t think, however, that the problem has yet been attacked at a holistic level, and I’m sure the current Internet structure will not suffice to carry telemedicine applications. In other words, it’s time we looked over these requirements and started coming up with real solutions, or else the next step in the evolution of medicine will not get started.

Parental controls done right

Just gave my iMac to my 6-year-old daughter, so I got a chance to explore the parental controls in Leopard. Which is why I gave her this machine in the first place.

Now, this stuff is done right all the way.

Parental controls logo

First, having a limited user account on a Mac is not a problem for anything, which is a major first step. Then I set up her account to have “parental controls”, which is just a checkbox to click. Then I used System Preferences on *my* Mac Pro, went to Parental Controls, saw her machine with her account listed, and logged in twice: once as admin on my machine, once as admin on hers.

After that, I can select, one by one, the apps she can use. For Safari, I can enter the websites she can access (and I can approve new sites on the fly on her machine using my password, either for that once or permanently). For iChat, I can set which users she can chat with, except I simply disabled it for now. For Mail, I can set which email addresses she can write to and receive from.

Selecting allowed apps

Interestingly, if she receives mail from an unapproved address, it’s redirected to my account, and Mail on my machine shows me the email and asks if this sender is allowed to write to her. If I approve, inside Mail, the address is added to her list of approved addresses and she gets the mail in the next round. Same thing if she writes to an unapproved address: she gets a popup saying it’s not approved and is given the choice of asking for permission. If she does, I get the mail and again get the chance to approve or decline.

I can also set how many hours per day she can use the machine. (Buggy, see update below.) One setting for weekdays, another for weekends, and there is even a setting for excluded hours, for instance after bedtime.

Hours per day setting

Lastly, there is very complete logging: every app she used, how many times and for how long, plus date and time; and every website visited, including the URL parameters, so I can, from my own machine, see exactly *which* videos she watched on YouTube, for instance.

Log of URLs

Not everything is as tightly controlled; Skype, for instance, is an allow/decline kind of thing. I can’t lock down who she adds, but I sure can check visually every now and then. If I’d excluded Skype and only allowed iChat, I would have had total control, but I can’t expect everyone to go get a Mac. Not just quite yet, anyway.

There’s just one thing missing: I don’t get copies of all her email. But OTOH, that would be too intrusive, I think, especially since she can’t receive from or send to anyone I haven’t already approved.

I’m amazed at how simple and well done this is in Leopard. Not really a surprise, but still. To me it’s worth giving her the iMac, just for this one thing.

Update on Feb 13, 2008: the time limits on use seem to only work intermittently and are thus unreliable. Depending on use they may not kick in. To my daughter’s delight, she seems able to keep watching YouTube videos forever, regardless of settings.

Free movie, right…

Don’t click on stuff like this, when it pops up in your browser:

Fake movie ActiveX download

The giveaway in my case is that I run Mac OS X and that dialog box looks distinctly Windows XP. And ActiveX is kinda the wrong thing to offer me. But there are other signs, too. Be warned about this, since clicking installs a trojan on your machine (at least if you’re running Windows).

If they had adapted the screen for OS X browsers to look like an OS X dialog box, it would have been easier to sucker me into this, so there’s no reason to relax too much.

This particular site is “webmovies-a.com”. Don’t go there.

And your point is…?

Place: Windows XP SP2. Event: I open “My Computer”, select a mapped folder on my in-house NAS, right-click one of the backup files I have there, intending to copy it, and I get this popup:

Strange windows popup

Now, it’s a backup file, not “a page”. I’m not even surfing the net here. How can the file have “an unspecified security flaw”, and what am I supposed to do about it?

Yes, I’m back in Windows again, having got a development gig based on Windows. So I’m sure you’ll get more whining from me about things like this in the near future.

The Dan Egerstad affair

Thinking about the Dan Egerstad affair and reading the comments to the Wired article, it may very well be that he set up a Tor node and simply caught credentials when people routed unsecured POP connections through Tor and exited through his node. This makes a lot of sense. I can very well imagine embassy people, and others, using Tor because it’s a great security product, while completely misunderstanding what it’s for.

Maybe that explains why Dan is so cool about getting sued for this. Assume he set up a Tor node for non-malicious reasons, then added a sniffer to it to make sure it wasn’t being used illegally, for instance for child porn (the sniffer would only give useful info for sessions that used his Tor node as an exit node, of course). So when he checks the logs, he finds all these POP email credentials in his own logs on his own machine. He then publishes them, without actually using them to get into the mailboxes. Has he done anything illegal? I don’t think so. People put their credentials on his machine entirely unasked and of their own volition. They even went to the trouble of installing a Tor client to be able to do it.

Maybe he did set the whole thing up to catch the credentials, but if he sticks to a very plausible story like the above, there’s no provable intent, is there?

I don’t know for sure this is what happened, but if it didn’t, it will.

Note: in the comments to the above article, “anonymouse” writes that it was an MITM attack using false SSL/TLS certs at the Tor exit node, but that would only be necessary if the victims used SSL-protected POP connections through Tor, and I don’t see why they would. If they were naive enough to think Tor would do anything at all for their email security, I don’t think they would be savvy enough to add SSL to the POP.

Who stole my signature?

It’s high time we got our signatures back. Since IT systems were introduced in healthcare, handwritten signatures have lost all importance, not because they’re superfluous, but because the IT application vendors can’t get a grip on how to implement them. And the weird thing is that all of us, including the authorities, just let this go on with hardly a notice. In fact, we’ve regressed more than a hundred years as far as this issue is concerned, and we’re OK with that?

We need digital signatures in our healthcare applications, and we need them badly. As things are now, we sign journal entries and prescriptions with just a mouse-click (or ten mouse-clicks in some apps, you know who you are). If you prescribe heavy analgesics or sedatives, you need to prescribe on special numbered forms and add your own personalized sticker (in Sweden), but if it’s electronic you just click. Anyone can do that if they find a way to log in as me. Almost anyone can do it if they can get an SQL command line to the database. How am I to defend myself against allegations that I prescribed something bad or entered a stupid note on a patient, if this is how the system works? I can’t!

We trust the application and, by implication, its developers. The developers trust the OS and the IT department running the app, and all this trust is totally misplaced and nothing is verified. The applications regularly misplace notes and change content due to bugs, and still we trust them?

Technically, there’s only one decent solution today, and that’s digital signatures based on asymmetric cryptosystems. It’s not that difficult to implement, and we don’t even need a very extensive public key infrastructure (PKI). All we need is the keys and a local certificate authority (CA).

The keys have to be created on a USB dongle or a smart card, and the private keys should never leave it. The local workstation could do the processing for now, but once better USB dongles or smart cards are easily available, the processing should be moved to those. That’s all pretty easy, since all modern operating systems support this, so the applications don’t need to.

It’s also important that the signature is applied to two structures: the machine data, and a bitmap of the same data as it would have looked on paper. The machine data is by necessity incomplete, and its interpretation depends on external information and the application intended to process it. For example, it’s entirely possible that a prescription or a lab request contains only codes for the products or tests, while external tables that are not part of the signed data structure contain the corresponding product or test names. That means I may put my signature on a prescription carrying the code for aspirin today, but it could turn into a prescription for methadone if combined with another external table, without invalidating my digital signature. If, on the other hand, the accompanying bitmap showed an old-fashioned paper prescription for aspirin, I could use that as (almost) human-readable proof of what I actually signed, at any time in the future.
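The dual-structure idea can be sketched in a few lines. This is a hypothetical illustration, not a vendor API: both the machine-readable record and the rendered bitmap are hashed, and the pair of digests is signed as one unit. An HMAC with a demo key stands in for the asymmetric signature; in practice the private key would live on the smart card and the signature would be, say, RSA or Ed25519.

```python
import hashlib
import hmac

# Assumption: a stand-in for the private key held on the smart card.
PRIVATE_KEY = b"demo signing key"

def sign_prescription(machine_data, rendered_bitmap):
    """Bind the machine data and its human-readable rendering together."""
    digests = (hashlib.sha256(machine_data).digest()
               + hashlib.sha256(rendered_bitmap).digest())
    # HMAC here stands in for an asymmetric signature over the digests.
    return hmac.new(PRIVATE_KEY, digests, hashlib.sha256).digest()

def verify_prescription(machine_data, rendered_bitmap, signature):
    expected = sign_prescription(machine_data, rendered_bitmap)
    return hmac.compare_digest(expected, signature)
```

Because the bitmap’s hash is part of the signed unit, a later change to an external code table cannot silently reinterpret the record: the bitmap still pins down what the signer actually saw.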

I think it’s not too much to ask that the vendors get their asses moving and get this done.