TEDx Uppsala tickets open

I’m copying over the promotional email I got. Note that the last day to apply for a ticket is October 24, 2014.


A Day of Mind-Blowing Talks and Personal Stories

TEDxUppsalaUniversity will hold its second-ever event at Uppsala Stadsteater on Saturday, November 15.

Eight local and international speakers, ranging from scientists to authors to activists to musicians, will give unique, surprising and personal talks around the theme ‘Who Cares?’. Join us for a day of brilliant talks and workshops that will blow your mind and make you wonder: what do I really care about?

TEDxUppsalaUniversity is designed and organised entirely by local volunteers in Uppsala. This independently organised event will be held in the globally recognised TED style.

And our guests are…

Speakers are being added throughout October, and the list is still growing. Preliminary speakers include:
  • Sofia Falk, a former undercover agent for the military who has since devoted her life to making organisations better for women
  • Ida Lod, a Stockholm-based violinist and performer, will give us a musical piece with a surprising twist
  • Otto Cars will talk about the dangers of infectious diseases in today’s world.
  • Alan AtKisson will tell us “how to create a long-term global art project that can involve thousands of people and change the world”
  • Malin Forsgren, who shelved her career to build the Swedish branch of the worldwide organisation Operation Smile, which now provides half a million free health checks and 20 000 cost-free operations every year to children with facial deformities, will talk about how “reason leads to conclusion, but emotion leads to action”.
  • Martin Wehlou will show us how he aims to revolutionise the way doctors make their everyday decisions.
  • Jan Chilar will give a short, practical talk, “What does ‘clean’ mean?”, that will challenge our conception of how we should use chemicals in our lives. From the bathing rituals of the ancient Romans to the sophisticated cleaning products of contemporary society, improvements in hygiene have always gone hand in hand with improvements in public health… but do we need all those cleaning products? What if our grandma was right when she told us to spray vinegar all around the house?
  • More to come!

Watch TEDxUppsalaUniversity live!

If you want tickets to the live event at Uppsala Stadsteater on Saturday, November 15, you can apply on the TEDxUppsalaUniversity website www.tedxuppsalauniversity.com from Monday, September 29.

The event will also be live-streamed on our website for those who want to be a part of TEDx live. Get a few friends together and watch a day of brilliant ideas and personal stories. Edited videos of each talk will be posted the week after the event. Check our website, Facebook, or Twitter for the release.

Lakin Anderson, Film, Marketing & Media Coordinator, TEDxUppsalaUniversity 2014, Uppsala, Sweden  lakin@tedxuppsalauniversity.com +46 (0) 760 90 58 52

Lecture

I was invited to give a lecture to the International Masters Programme in Health Informatics at Karolinska Institute, and we recorded a video of the entire lecture, around 3.5 hours in total. The last part is about iotaMed, our open-source project for a “new and improved” electronic health-care record, which rolls knowledge support, the medical record, and national registries into one.

The rest of the lecture is about a lot of different things I have opinions about, and as there is no lack of things I feel strongly about, it went almost an hour longer than it should have.

The full lecture consists of 12 chapters (“parts”), each made up of 1-4 video segments (YouTube limits videos to a maximum of 15 minutes, which makes for a lot of splitting). You can find the lecture notes here. Oh, by the way, the site for the iotaMed project is here. The playlist with all 20 videos is on YouTube here.

Solution: open the market

Now we’ve arrived at the last of the solutions in my list, namely “Opening the market for smaller entrepreneurs”. There are a number of reasons we have to do this, and I’ve touched on most of them before in other contexts.

The advantages of having a large all-in-one vendor deliver a single system that does everything for your electronic health-care record needs are:

  • You don’t have to worry about interconnections; there aren’t any
  • You don’t have to figure out who to call when things go wrong; there’s only one vendor to call
  • You can reduce your support staff, or at least you may think so initially
  • You can avoid all arguments about requirements from the users; there is nothing you can change anyway
  • It looks like good leadership to the uninitiated, just like Napoleon probably looked pretty good at Waterloo, at least to start with

The disadvantages are:

  • Since you have no escape once the decision is made, the costs are usually much higher than planned or promised
  • There is only one support organization, and it is usually pretty unpleasant to deal with, and almost always powerless to do anything
  • Any extra functionality you need must come from the same vendor; it will cost a fortune and will always be late, bug-ridden, and wrong
  • The system will be worst-of-breed in every individual area of functionality, its only distinguishing characteristic being that it is all-encompassing (like mustard gas over Ieper)
  • The system will never be based on a simple architecture or interface standards; there is no need for it, the vendor usually doesn’t have the expertise for it, and the designers have no incentive to do a quality job
  • Since quality is best measured by the simplicity and orthogonality of interfaces and public specs, and large vendors deliver neither, there is no objective measure of quality, and hence no quality (there’s a law in there somewhere about “that which is not measurable does not exist”; was it Newton who said that?)
  • Due to poor architecture, the system will almost certainly be developed as too few and too large blocks of functionality, making them harder than necessary to maintain (yes, the vendor maintains it for you, but you pay and suffer the poor quality)

Everybody knows the proverb about power: it corrupts. Don’t give that kind of power to a single vendor; he is going to misuse it to his own advantage. It’s not a question of how nice and well-meaning the CEO is; it is his duty to screw you to the hilt. That’s what he’s being paid to do, and if he doesn’t, he’ll lose his job.

But if we want customers to choose best-of-breed solutions from smaller vendors, we have to be able to offer them those solutions in a way that makes it technically, economically, and politically feasible to purchase and install them. Today, that is far from the case. Smaller vendors behave just like the big vendors, but with considerably less success, spending most of their energy bickering about details and suing each other and the major vendors when things don’t go their way (which they never do). If all that energy went into making better products instead, we’d have amazingly great software by now.

The major problem is that even the smallest vendor would rather go bust trying to build a complete all-in-one system for electronic health-care records than concede even a part of the whole to a competitor, however much better that competitor is at that part. And while the small vendors fight their little wars, the big ones run off with the prize. This has got to stop.

One way would be for the government to step in and mandate interfaces, modularity, and interconnection standards. And they do, except this approach doesn’t work. When government does this, it selects projects on the advice of people whose livelihood depends on the creation of long-lived committees where they can sit forever producing documents of all kinds. So all you get is high cost, eternal committees, and no joy. Since no small vendor could ever afford to keep an influential presence on these committees, the work will never result in anything useful to the smaller vendors. The large vendors, meanwhile, don’t need standards or rules of any kind, since they only connect to themselves, and they love to blame the lack of useful standards for their inability to accommodate any other vendor’s systems. This way, standards consultants standardize, large vendors ignore the standards and keep selling, and everyone is happy except the small vendors and, of course, the users, who keep paying through the nose for very little in return.

There’s no way out of this for the small vendors and the users if you need standards to interoperate, but luckily for us, standards are largely useless and unnecessary even in the best of cases. All it takes is for one or two small vendors to publish de facto standards simple and practical enough for most other vendors to pick up and use. I’ve personally seen this happen in Belgium in the ’80s and ’90s, where a multitude of smaller EHR systems used each other’s lab and referral document standards instead of waiting for the official CEN standards, which didn’t work at all once published (see my previous blog post). In the US, standards are generally not invented by standards bodies but selected from de facto standards already in use, and then approved, which explains why US standards usually do work while European standards don’t.

Where does all this leave us? I see only one way out of this mess, and that is for smaller vendors to start sharing de facto standards with each other. Which leads directly to my conclusion: everything I do with iotaMed will be open for use by others. I will define how issue templates will look and how issue worksheets and observations will be structured, but those definitions are free for any vendor, small or large, to use. At the start, I reserve the right to control which document structures and interfaces can be called “iota” and “iotaMed”, but as soon as other players take an active and constructive part in all this, I fully intend to share that control. An important reason not to let go from the start is that I am truly afraid of a large “committee” springing up whose only interest will be to make it cost more, increase the page count, and take forever to produce results. And that, I will fight tooth and nail.
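To give a feel for the scale I have in mind, here is a hypothetical fragment of such a shared definition. It is not an actual iota document, just an illustration that a de facto standard can be small enough to read in a minute and adopt in an afternoon:

```python
# A hypothetical, deliberately tiny "iota-style" template definition.
# The point is the size: something a small vendor can implement in an
# afternoon, not a thousand-page committee product.
ISSUE_TEMPLATE = {
    "template": "hypertension",       # template identifier
    "icd10": "I10",                   # the issue this template covers
    "observations": [                 # what the issue worksheet asks for
        {"name": "systolic_bp",  "unit": "mmHg"},
        {"name": "diastolic_bp", "unit": "mmHg"},
    ],
    "guideline": "https://example.org/guidelines/i10",  # placeholder link
}
```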

On the other hand, I’ll develop the iotaMed interface for the iPad, and I intend to publish the source for that but keep the right to sell licenses for commercial use, while non-profit use will be free. Exactly where to draw that line needs to be defined later, but it would be a really good thing if several vendors agreed on a common set of principles, since that would make things easier for our customers to handle. A mixed license model, with the GPL alongside a regular commercial license, seems to be the way to go. But in the beginning we have to share as much as possible, so we can create a market where we all can add products and knowledge. Without bootstrapping that market, there will be no products or services to sell later.

Solution: less need for standards

Around 1996 I was part of the CEN TC251 crowd for a while, not as a member but as an observer. CEN is the European standards organization, and TC251 is “Technical Committee 251”, the committee that does all the medical IT standardization. The reason I was involved is that I was then working as a consultant for the University of Ghent in Belgium, and my task was to create a Belgian “profile” of the “Summary of Episode of Care” standard for the Belgian market. So I participated in a number of meetings of the TC251 working groups.

For those in the know, I must stress that this was the “original” standards effort, all based on EDIFACT-like structures, before XML arrived on the scene. I’ve heard from people that the standards that were later remade in XML form are considerably more useful than the stuff we had to work with.

I remember this period of my life as one of meeting a lot of interesting people and having a lot of fun, while at the same time being excruciatingly frustrated by overly complex and utterly useless standards. The standards I had to work with simply didn’t compute. For months I went totally bananas trying to make sense of what was so extensively documented, but I never succeeded. After a serious talk with one of the chairpersons, a very honest Brit, I finally realized that nobody had ever tried out this stuff in reality, and that most, maybe even all, of my complaints about inconsistencies and impossibilities were indeed real and recognized, but that it was politically impossible to admit to that publicly. Oh boy…

I finally got my “profile” done by simply chucking out the whole lot and starting over, writing the entire thing as I would have done if I’d never even heard of the standards. That version was immediately accepted, and I was recently told it is still used, with hardly any changes, as the Belgian Prorec standard, or at least as part of it.

The major lesson I learned from the entire CEN debacle (it was a debacle for me) is that the first rule in standardization of anything is to avoid it. Don’t ever start a project that requires a pre-existing standard to survive. It won’t survive. The second rule is: if it requires a standard, it should be small and functional, not semantic. The third is: if it is a semantic standard, it should comprise a maximum of a few tens of terms. Anything beyond a hundred is useless.

It’s easy to see that these rules hold in reality. HTTP is a hugely successful standard since it’s small and has just a few semantic terms, such as GET, PUT, and so on; the same thing holds for HTML and XML. Snomed CT: a few hundred thousand terms… you don’t want to hear what I think of that; you’d have to wash your ears with soap afterwards.

In all my years of developing software, I have never encountered a problem that needed a standard like Snomed CT and couldn’t just as well be solved without it. In all those years, I have also never seen a project requiring a standards effort as massive as Snomed CT actually succeed. Never. I can’t say it couldn’t happen; I’m only saying I’ve never seen it happen.

The right way to design software, in my world, is to construct everything according to your own minimal coding needs, while always keeping in mind that everything your software does may later need to be imported and exported using a standard that differs from what you do internally. That is, you should make your data simple enough and flexible enough to allow a standard to be added later, if it is ever needed. In short: given the choice between simple and standard, always choose simple.
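As a sketch of what this can look like in practice (all names and formats here are invented for illustration): the internal representation is exactly as minimal as the code needs, and any standard lives in a thin adapter that can be added, or replaced, later.

```python
# Internal representation: exactly what our own code needs, nothing more.
observation = {
    "code": "B07",           # we happen to use ICD-10 codes internally
    "text": "Viral warts",
    "noted": "2011-03-02",
}

def export_as_external_standard(obs: dict) -> str:
    """Thin adapter from our minimal internal form to an external format.
    If this standard is replaced by another one tomorrow, nothing inside
    the application itself has to change."""
    return (f'<observation code="{obs["code"]}" date="{obs["noted"]}">'
            f'{obs["text"]}</observation>')

print(export_as_external_standard(observation))
```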

Exactly how to do this is complex, but not complex in the way standards are; it is complex in the way you need to think about it. In other words, it requires that rarest of substances: brain sweat. Let me give a few examples.

If you get data from external systems using synchronous calls only, waiting for a reply before proceeding, you severely limit the ability of others to change the way you interact with those systems. If you instead make as many of your interactions as possible asynchronous, you open up a world of easy interfacing for others.
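To make the contrast concrete, here is a minimal sketch in Python; the external systems and the latency are made up, but the shape is the point: both lookups are in flight at once, and either can be rerouted, mocked, or replaced without touching the caller.

```python
import asyncio

# Hypothetical external systems; in reality these would be network
# calls to a lab system, a medication service, and so on.
async def fetch_lab_results(patient_id: str) -> dict:
    await asyncio.sleep(0.1)  # stands in for network latency
    return {"patient": patient_id, "labs": ["K 4.1 mmol/L"]}

async def fetch_medication_list(patient_id: str) -> dict:
    await asyncio.sleep(0.1)
    return {"patient": patient_id, "medications": ["amoxicillin"]}

async def load_patient_overview(patient_id: str) -> list:
    # Both requests run concurrently; neither blocks the other, and
    # this caller never needs to change when the transport does.
    return await asyncio.gather(
        fetch_lab_results(patient_id),
        fetch_medication_list(patient_id),
    )

if __name__ == "__main__":
    print(asyncio.run(load_patient_overview("19121212-1212")))
```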

If you use data from other systems, try to use it as opaque blocks. That is, if you need patient data, don’t assume you can actually read that data; let the external systems interpret it for you as much as possible. That allows other players to provide patient data you never expected, and as long as they also provide the engine to use that data, it doesn’t matter.
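A minimal sketch of the same idea, with invented names: the host stores the external payload without ever parsing it, and whoever supplied the payload also supplies the function that interprets it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OpaqueRecord:
    """External patient data that the host application never parses."""
    source: str                      # which external system supplied it
    payload: bytes                   # opaque to us; only the source understands it
    render: Callable[[bytes], str]   # interpreter supplied along with the data

def show(records: list) -> None:
    # The host only orchestrates; interpretation stays with the supplier,
    # so a brand-new data format needs no change on our side.
    for rec in records:
        print(f"[{rec.source}] {rec.render(rec.payload)}")

# A hypothetical supplier delivers both the data and its interpreter:
lab_record = OpaqueRecord(
    source="lab-system",
    payload=b"K 4.1 mmol/L",
    render=lambda raw: raw.decode("ascii"),
)
show([lab_record])
```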

Every non-trivial and distinct piece of functionality in your application should be a separate module, or better still, a separate application. That way it can easily be replaced or changed when needed. As I mentioned before, both the interfaces and the module itself will almost automatically be of better quality as well.
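Here is a sketch of the kind of minimal seam I mean, again with hypothetical names: the entire contract between the host and a piece of functionality is one narrow interface, so any implementation honoring it can be swapped in.

```python
from typing import Protocol

class PrescriptionModule(Protocol):
    """The whole contract between host and prescription functionality:
    one narrow interface, nothing else shared."""
    def prescribe(self, patient_id: str, drug: str, dose: str) -> str: ...

class SimplePrescriptions:
    def prescribe(self, patient_id: str, drug: str, dose: str) -> str:
        return f"{drug} {dose} prescribed for {patient_id}"

def issue_prescription(module: PrescriptionModule, patient_id: str) -> None:
    # The host neither knows nor cares which implementation it talks to;
    # swapping in a better module later touches no code here.
    print(module.prescribe(patient_id, "amoxicillin", "500 mg x 3"))

issue_prescription(SimplePrescriptions(), "19121212-1212")
```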

The most useful rule of thumb I can give you is this: if anyone proposes a project that requires a standard containing more than 50 or so terms, say no. Or, if you’re the kind of person who actually makes a living producing nothing but “deliverables” (as these stacks of unreadable documents are called), definitely say yes, but realize that your existence is a heavy load on humanity, and we’d probably all be better off without your efforts.

Solution: improved specifications

The quality of our IT systems for health care is pretty darn poor, and I think most people agree on that. There have been calls for oversight and certification of applications to reduce the risk of failures and errors. In Europe there is a drive to have health-care IT solutions go through a CE process, which more or less amounts to a lot of documentation requirements and change control. In other words, the CE process certifies part of the process used to produce the applications, not the applications themselves. But I dare claim this isn’t very useful.

If you want vendors to produce better code with fewer bugs, there is only one thing you can do: inspect the code, directly or indirectly. Everything else is too complicated and ineffective. All the CE process will achieve is more bureaucracy, more paper, slower and more laborious updates, fewer timely fixes, and higher costs. What it also will achieve, and this may be quite intentional, is that only large vendors with massive overhead staff can satisfy the CE requirements, killing all smaller vendors in the process.

But back to the problem we wanted to solve, namely code quality. What should be done, at least in theory, is actual approval of the code in the applications. The problem is that very few people are qualified to judge code quality correctly, and it’s very hard to define on paper what good code should look like. So as things stand today, we are not in a position to mandate a certain level of code quality directly, which leaves us no choice but to do it indirectly.

I think most experienced software developers agree that the public specifications and public APIs of a product very accurately reflect the inner quality of the code. There is no reason in theory why this needs to be the case, but in practice it always is. I’ve never seen an exception to this rule. Even stronger, I can assert that a product that has no public specifications or no public API is also guaranteed to be of poor quality. Again, I’ve never seen an exception to this rule.

So instead of checking paper-based processes as CE does, let’s approve the specifications and APIs. Let vendors submit these for approval by a public board of experts. If the specs make sense, and the APIs are clean and orthogonal and seem to serve the purpose the specs describe, then it’s an easy thing to test whether the product adheres to the specs and APIs. If it does, it’s approved, and we don’t need to see the original source code at all.
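As a toy illustration of why the adherence test is the easy part (the spec format and operation names here are invented): once a spec is published, checking that a product exposes what it promises is mechanical.

```python
# A published spec, reduced to its bare bones: which operations the
# product promises, and what each must accept and return.
SPEC = {
    "get_patient": {"in": ["patient_id"], "out": ["name", "issues"]},
    "list_issues": {"in": ["patient_id"], "out": ["issues"]},
}

def conforms(product_api: dict) -> bool:
    """Mechanically check that every promised operation exists and
    matches the published signature. No source code needed."""
    for op, shape in SPEC.items():
        impl = product_api.get(op)
        if impl is None or impl["in"] != shape["in"] or impl["out"] != shape["out"]:
            print(f"FAIL: {op}")
            return False
    return True

# A hypothetical vendor's self-description of its API:
vendor_api = {
    "get_patient": {"in": ["patient_id"], "out": ["name", "issues"]},
    "list_issues": {"in": ["patient_id"], "out": ["issues"]},
}
print("approved" if conforms(vendor_api) else "rejected")
```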

There is no guarantee that you’ll catch all bad code this way, but it’s much more likely than if you use CE to do it. It also has the nice side effect of forcing all players to actually open up specs and APIs, else there’s no approval.

One thing I can tell you off the bat: the Swedish NPÖ (National Patient Summary) system would flunk such an inspection hard. That API is the horror of horrors. Or to put it another way: if an approval process lets NPÖ pass, it’s a worthless approval process. Hmmm… maybe we can use NPÖ as a test of approval processes? No approval process should be accepted for general use unless it flunks NPÖ outright. Sounds fine to me.

Solution: modular structure

Forcing a large application into small independent parts that communicate with each other through minimal interfaces is always a good thing. Large monolithic applications grow “fat and lazy” when there is no requirement to keep them split up, and the effort to expand and maintain them goes through the roof. We have known all this for ages, but both our IT purchasers and our vendors carry on as if nothing were amiss, business as usual. Of course, monolithic means all the business goes to one vendor, making him fat and happy, while demanding very little thinking and effort from the purchasing organizations and IT support. Now, if IT support considered its actual task, which is not work avoidance but the implementation and support of an effective IT organization enabling the health-care staff to do a better job, things would look different.

The thing is, we absolutely do need modularity. IT applications should mimic and support medicine as it is actually practiced, not the other way around. Medical practice should not have to adapt itself to poorly conceived IT solutions. (You can recognize this tendency by the constant cries for “more training”. It’s all just so much nonsense. My advice: ignore them.)

Different specialities have different needs for note-taking and guidelines. Needs for handling pharmaceutical prescriptions differ according to the type of department: internal medicine on a ward or in an outpatient clinic differs from internal medicine as practiced in an intensive care unit. Different needs also arise for appointments according to the ownership of the practice, regardless of speciality. And so on.

These differences aren’t gratuitous; they are necessary and an inherent part of how medicine works. Attempts to eliminate them with large, universal systems, and their large, universal ways of working, don’t work, because they force a suboptimal way of operating onto the medical profession.

Now, the defenders of the great unified systems, let’s call them “unicorn followers”, argue that medicine should adapt itself to how IT works, or else the IT systems will be too expensive and complex. To the unicorn followers we should say: yes, the IT systems will be more expensive, but the savings in medical practice will outweigh that extra cost by orders of magnitude. Don’t suboptimize by trying to save just on IT; that is not your job! Your job is to enable medicine to save lives and money through better health care, not through junk IT software and equipment.

Oh, and while we’re at it: modular systems are actually cheaper to produce and work much better, something most software engineering texts explain and provide the proofs for. And you out there who purchase, or sell, or build these systems: you have read those texts, haven’t you?

In short, even the unicorn followers, once they’ve picked up a bit of the computer science of the last half century and started to think about what their real task in the medical context actually is or should be, will undoubtedly see the light and start to specify new systems as modular in every possible respect. The unicorn, just like the “one medical records system”, is a mythical beast and will never be seen in real life. Even if it weren’t mythical, it would get stuck everywhere with that unwieldy horn and die of hunger. Good riddance.

Solution: Issues

Ah, finally we arrive at solutions. The first in the series is the elephant in the room: issues.

Why do I say “elephant in the room”? Because when a doctor examines and treats a patient, he thinks in “issues”, and the result of that thinking manifests itself in planning, tests, therapies, and follow-up. When he records the encounter, he records only the planning, tests, therapies, and follow-up, but not the main entity, the “issue”, since there is no place for it. The next doctor who sees the patient has to read about the planning, tests, therapies, and follow-up, and then mentally reverse-engineer the process to work out which issue is ongoing. He, in turn, manages the patient according to that issue, then records everything but the issue itself.

Other actors, such as national registries and extractors of epidemiological data, all go through the same process. They all have their own methods of churning through planning, tests, therapies, and follow-up, reverse-engineering the data to arrive at what the issue is, only to discard it again.

This is what I mean by the “elephant in the room”. The problem is so pervasive, so obvious, and so large that it is incomprehensible to me how this situation ever came about, and why it is so difficult to convince our IT providers of the need for this entity in our systems and of the huge range of problems its introduction would solve. Doctors in general don’t see it either, but at least they are, almost to a man (and woman), extremely easy to convince. And once they see the problem and the solution, it’s as obvious to them as it is to me. This is how it looks today, with all the different actors trying to make sense of a highly deficient EHR (the images are taken from a presentation, and I think you will get the idea even without my sweet voice explaining what you see):

So, after that harangue, this is my message: we need the “issue” concept in our medical systems. If the issue concept is present in the EHR, there is no need for every doctor and every actor to keep reconstructing it from derived and deficient data. It’s very easy to adapt the issue templates to cover all the needs of the different actors, simply because it turns out they are all after more or less the same thing. This only becomes obvious once you see it from the perspective of “issues”.

If we look at what exactly an “issue” consists of, we see that it is an ICD-10 code, or an ICD-10 code range, that defines the symptom or disease as such. Further, it contains a clinical guideline on how to diagnose and treat the disease, or how to investigate further and refine the diagnosis, including differential diagnoses. It would entirely replace the usual medical record in daily use, and it would best be presented on a touch-operated slate device such as the iPad, simply because following links, making choices, and looking up information will be much more important than entering text. Text entry will still be possible, but will be the exception rather than the rule.
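To make the shape of this entity concrete, here is a minimal sketch; the field names are my own, for illustration only, not a published iotaMed format.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """The missing entity: the thing the doctor actually reasons about."""
    icd10: str          # a code ("C50") or a range ("C50-C50.9")
    title: str
    guideline: str      # link to the clinical guideline for diagnosis/treatment
    differentials: list = field(default_factory=list)  # unresolved sub-issues
    resolved: bool = False

record = Issue(
    icd10="C50",
    title="Malignant neoplasm of breast",
    guideline="https://example.org/guidelines/breast-cancer",  # placeholder
    differentials=[
        Issue("N60", "Benign mammary dysplasia",
              "https://example.org/guidelines/n60"),  # placeholder
    ],
)
# The entry screen described below is then just the patient's name and
# a list of Issue objects; everything else hangs off them.
```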

Naturally, even though you will work with “issues” instead of regular medical records, the medical records are still produced in the background, and preexisting records are both viewable and linkable through the same interface. If you look at the bottom of my little mock-up, you’ll see tabs that take you to the old medication list, the chronological record (old-style EHR), referrals, lab, etc. But in general you’ll be working in the “issue worksheet” most of the time, only occasionally looking up information through the other tabs.

To emphasize the radical difference between this way of working and the old EHR way, I made a simple mock-up of the entry screen for a patient record. All you see is the patient’s name and a list of issues, some of which are sub-issues or differential diagnoses that haven’t been resolved yet. For an entry screen, that is actually all we need.

You may notice that “Eric the Seafarer” had breast cancer according to this screen, which is very unusual in men. But Vikings were a strange and wonderful people, so I would not jump to conclusions about that.

Oh, better late than never: click on any image for a larger resolution.

That was a rough ride

Finally, we’re past the summing-up of the problems in current health-care record systems. It was long, depressing, and admittedly quite boring at times, but oh so necessary. I hope you’re all in a suitably despairing state of mind, craving some solutions to all this misery, but first let’s review what we endured.

I published 11 (!) blog posts on problems with current systems, and I think I actually succeeded in presenting 11 different and independent problems, all of them highly significant, and all of them with a large impact on the quality of the health care we provide. That last remark matters: we cannot do good medicine with systems that have problems like these. We don’t diagnose as well as we could, we don’t institute optimal treatments as often as we could, and people suffer needlessly as a result. Also, any savings we achieve in IT departments thanks to these miserable systems are burned many times over in increased costs due to the suboptimal health care they cause.

The blog posts are derived from my “master list” of problems in current systems that you can find on the iota blog. Use that list if you want to look up problems or solutions, since the very nature of a wiki implies that those pages may change and be brought up to date as time goes by and we all increase our understanding of the subject. The blog posts, however, just like fine summer weather, are but a memory.

It’s high time to do something less depressing than rehashing all that misery, so from the next post onwards we’ll talk about solutions and how to get there. The next few posts will cover the different parts of the solution to all this. Yes, amazingly, I think I have one. If you have reason to believe that one or more of my solutions won’t work, say so. You may be right, and if you are, I may need to rethink some of them.

Don’t say “if your idea is so good, why hasn’t it already been done?”, and don’t tell me it can’t be done because “nobody listens”, “bureaucrats have all the power”, or “it’s all going to hell anyway”, because none of that is true. I won’t allow it to be true, and neither should you. I am personally convinced that if the “bureaucrats” really understood the problems, they would jump at the opportunity to do the right thing. So our problem is to make them listen, even if it takes another couple of hundred blog posts and projects.

Enough of that philosophizing… on to the solutions. The next five posts will be (unless I change my mind about any of them or add more):

  1. The introduction of “issues”
  2. The support of a modular structure
  3. The improvement of quality in specifications and interfaces
  4. The lessening of dependence on overly heavy standards work
  5. The opening of the market to smaller entrepreneurs

Stay tuned.

Problem: too much secretarial work

Since current electronic health-care record systems are largely text based, there is a large amount of text to be written after each encounter. In Sweden, doctors generally dictate the entry after each encounter, and a secretary then types it out. The same procedure is used for referrals, reports, and letters.

Brevity in notes is definitely something to strive for, but the dictation system lends itself to overly long notes instead. This, in turn, makes the text mass of the EHR grow faster than it otherwise would need to, a problem I have already discussed elsewhere.

Since secretaries are often overloaded with work, it takes a day or two, in the best of cases, before the notes actually show up in the EHR. It is not unusual to see a delay of two or three weeks between dictation and transcription. During that time, work is harder for other doctors and staff: if you need the information, the only recourse is to listen to the original sound recording of the dictation, and it is almost impossible for anyone but a trained secretary to understand the garbled and mumbled monologue doctors usually produce. It seems the only thing worse than our handwriting is our dictation.

If the doctor dictates referrals, these too wait for days, or in the worst case weeks, before being sent off. Meanwhile, nothing happens, and the patient is left suffering, waiting, and possibly drawing disability benefits from the insurance system as well.

The underlying problem here is that the health-care records are text-based, due to the lack of a correct analysis of the problem domain. Since the user cannot interact with the object that should have been there but isn’t, the “disease”, he is forced to describe his interactions with it in words instead.