Q.E.D.

As I see it, medical knowledge and its application to patients falls into three levels:

  1. Biological science, pathology, EBM, epidemiology, etc. In other words, everything we know about human biology and pathology in the large, not at the individual level.
  2. Applications and methods that apply biological science to the individual patient, and methods using the history of the patient to search for applicable science.
  3. Knowledge about a particular patient: signs, symptoms, treatments, and diagnostics that have already been performed. In short, the individual patient history.

Each of these three levels corresponds to particular processes and methods, and computer applications also fit one or more of these levels. For instance, IBM Watson sits squarely in level 1, while current Electronic Healthcare Record (EHR) systems are fully in level 3.


Solution: open the market

Now we’ve arrived at the last of the solutions in my list, namely “Opening the market for smaller entrepreneurs”. There are a number of reasons we have to do this, and I’ve touched on most of them before in other contexts.

The advantages of having a large all-in-one vendor to deliver a single system doing everything for your electronic health-care record needs are:

  • You don’t have to worry about interconnections, there aren’t any
  • You don’t have to figure out who to call when things go wrong, there’s only one vendor to call
  • You can reduce your support staff, at least you may think so initially
  • You can avoid all arguments about requirements from the users, there is nothing you can change anyway
  • It looks like good leadership to the uninitiated, just like Napoleon probably looked pretty good at Waterloo, at least to start with

The disadvantages are:

  • Since you have no escape once the decision is made, the costs are usually much higher than planned or promised
  • There is only one support organization, and they are usually pretty unpleasant to deal with, and almost always powerless to do anything
  • Any extra functionality you need must come from the same vendor, and will cost a fortune, and will always be late, bug-ridden, and wrong
  • The system will be worst-of-breed in every individual area of functionality; its only characteristic being that it is all-encompassing (like mustard gas over Ieper)
  • The system will never be based on a simple architecture or interface standards; there is no need for it, the vendor usually doesn’t have the expertise for it, and the designers have no incentives to do a quality job
  • Since quality is best measured as the simplicity and orthogonality of interfaces and public specs, and large vendors don’t deliver either of these, there is no objective measure of quality, hence there is no quality (there’s a law in there somewhere about “that which is not measurable does not exist”; was it Newton who said that?)
  • Due to poor architecture, the system will almost certainly be developed as too few and too large blocks of functionality, making them harder than necessary to maintain (yes, the vendor maintains it for you, but you pay and suffer the poor quality)

Everybody knows the proverb about power: it corrupts. Don’t give that kind of power to a single vendor, he is going to misuse it to his own advantage. It’s not a question of how nice and well-meaning the CEO is, it is his duty to screw you to the hilt. That’s what he’s being paid to do and if he doesn’t, he’ll lose his job.

But if we want customers to choose best-of-breed solutions from smaller vendors, we have to be able to offer them those solutions in a way that makes them technically, economically, and politically feasible to purchase and install. Today, that is far from the case. Smaller vendors behave just like the big vendors, but with considerably less success, spending most of their energy bickering about details and suing each other and the major vendors when things don’t go as they please (which they never do). If all that energy went into making better products instead, we’d have amazingly great software by now.

The major problem is that even the smallest vendor would rather go bust trying to build a complete all-in-one system for electronic health-care records, than concede even a part of the whole to a competitor, however much better that competitor is when it comes to that part. And while the small vendors fight their little wars, the big ones run off with the prize. This has got to stop.

One way would be for the government to step in and mandate interfaces, modularity, and interconnection standards. And they do, except this approach doesn’t work. When government does this, it selects projects on the advice of people whose livelihood depends on the creation of long-lived committees where they can sit forever producing documents of all kinds. So all you get is high cost, eternal committees, and no joy. Since no small vendor could ever afford to keep an influential presence on these committees, the work will never result in anything useful to the smaller vendors. The large vendors, meanwhile, don’t need standards or rules of any kind, since they only connect to themselves, and they love to blame the lack of useful standards for their inability to accommodate any other vendor’s systems. This way, standards consultants standardize, large vendors ignore the standards and keep selling, and everyone is happy except the small vendors and, of course, the users, who keep paying through the nose for very little in return.

There’s no way out of this for the small vendors and the users if you need standards to interoperate, but luckily for us, standards are largely useless and unnecessary even in the best of cases. All it takes is for one or two small vendors to publish de facto standards, simple and practical enough for most other vendors to pick up and use. I’ve personally seen this happen in Belgium in the ’80s and ’90s, where a multitude of smaller EHR systems used each other’s lab and referral document standards instead of waiting for official CEN standards, which didn’t work at all once published (see my previous blog post). In the US, standards are generally not invented by standards bodies, but selected from de facto standards already in use and then approved, which explains why US standards usually do work while European standards don’t.

Where does all this leave us? I see only one way of getting out of this mess, and that is for smaller vendors to start sharing de facto standards with each other. Which leads directly to my conclusion: everything I do with iotaMed will be open for use by others. I will define how issue templates will look and how issue worksheets and observations will be structured, but those definitions are free for any vendor, small or large, to use. At the start, I reserve the right to control which document structures and interfaces can be called “iota” and “iotaMed”, but as soon as other players take an active and constructive part in all this, I fully intend to share that control. An important reason not to let go from the start is that I am truly afraid of a large “committee” springing up whose only interest will be to make it cost more, increase the page count, and take forever to produce results. And that, I will fight tooth and nail.

On the other hand, I’ll develop the iotaMed interface for the iPad, and I intend to publish the source for that but keep the right to sell licenses for commercial use, while non-profit use will be free. Exactly where to draw that line needs to be defined later, but it would be a really good thing if several vendors agreed on a common set of principles, since that would make it easier for our customers to handle. A mixed license model with GPL and a regular commercial license seems to be the way to go. But in the beginning, we have to share as much as possible, so we can create a market where we all can add products and knowledge. Without bootstrapping that market, there will be no products or services to sell later.

Solution: less need for standards

Around 1996 I was part of the CEN TC251 crowd for a while, not as a member but as an observer. CEN is the European standards organization, and TC251 is “Technical Committee 251”, the committee that does all the medical IT standardization. The reason I was involved is that I was working as a consultant for the University of Ghent in Belgium at the time, and my task was to create a Belgian “profile” of the “Summary of Episode of Care” standard for the Belgian market. So I participated in a number of meetings of the TC251 working groups.

For those in the know, I must stress that this was the “original” standards effort, all based on Edifact-like structures and before the arrival of XML on the stage. I’ve heard from people that the standards that were remade in XML form are considerably more useful than the stuff we had to work with.

I remember this period in my life as a time of meeting a lot of interesting people and having a lot of fun, but at the same time being excruciatingly frustrated by overly complex and utterly useless standards. The standards I had to work with simply didn’t compute. For months I went totally bananas trying to make sense of what was so extensively documented, but never succeeded. After a serious talk with one of the chairpersons, a very honest Brit, I finally realized that nobody had ever tried this stuff out in reality, and that most, maybe even all, of my complaints about inconsistencies and impossibilities were indeed real and recognized, but that it was politically impossible to admit to that publicly. Oh boy…

I finally got my “profile” done by simply chucking out the whole lot and starting over again, writing the entire thing as I would have done if I’d never even heard of the standards. That version was immediately accepted and I was recently told it still is used with hardly any changes as the Belgian Prorec standard, or at least a part of it.

The major lesson I learned from the entire CEN debacle (it was a debacle for me) is that the first rule in standardization of anything is to avoid it. Don’t ever start a project that requires a pre-existing standard to survive. It won’t survive. The second rule is: if it requires a standard, it should be small and functional, not semantic. The third is: if it is a semantic standard, it should comprise a maximum of a few tens of terms. Anything beyond a hundred is useless.

It’s easy to see that these rules hold in reality. HTTP is a hugely successful standard since it’s small and has just a few semantic terms, such as GET, PUT, etc. XML: the same thing holds. Snomed CT: a few hundred thousand terms… you don’t want to hear what I think of that, you’d have to wash your ears with soap afterwards.

In all my years of developing software, I’ve never encountered a problem that needed a standard like Snomed CT and couldn’t just as well be solved without it. In all those years, I’ve also never seen a project requiring a standards effort as massive as Snomed CT actually succeed. Never. I can’t say it couldn’t happen, I’m only saying I’ve never seen it happen.

The right way to design software, in my world, is to construct everything according to your own minimal coding needs, but always keep in mind that anything your software does may later need to be imported or exported using a standard that differs from what you use internally. That is, you should make your data simple enough and flexible enough to allow a standard to be added later, if it is ever needed. In short: given the choice between simple and standard, always choose simple.

Exactly how to do this is complex, but not complex in the way standards are, only complex in the way you need to think about it. In other words, it requires that rarest of substances, brain sweat. Let me take a few examples.

If you need to get data from external systems, and you do that in your application in the form of synchronous calls only, waiting for a reply before proceeding, you severely limit the ability of others to change the way you interact with these systems. If you instead create as many of your interactions as possible as asynchronous calls, you open up a world of easy interfacing for others.
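
To make that concrete, here’s a minimal sketch in Python; the lab-results service and every name in it are invented, it’s only the shape of the interaction that matters:

```python
import asyncio

# Hypothetical sketch: fetching lab results from some external system.
# A synchronous call would block the caller until the remote system answers;
# an asynchronous one hands the request off, so the caller decides when to
# wait, and another vendor's system can be slotted in behind the same
# interface without changing the calling code.

async def fetch_lab_results(patient_id: str) -> dict:
    # Stand-in for a network call to any vendor's lab system.
    await asyncio.sleep(0.1)  # simulate network latency
    return {"patient": patient_id, "hemoglobin": "14.1 g/dL"}

async def main() -> None:
    # Several requests in flight at once, each possibly served by a
    # different external system behind the same asynchronous interface.
    results = await asyncio.gather(
        fetch_lab_results("patient-1"),
        fetch_lab_results("patient-2"),
    )
    print(results)

asyncio.run(main())
```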

If you use data from other systems, try to use them as opaque blocks. That is, if you need to get patient data, don’t assume you can actually read that data, but let external systems interpret them for you as much as possible. That allows other players to provide patient data you never expected, but as long as they also provide the engine to use that data, it doesn’t matter.
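
Again a sketch with entirely made-up names: the record keeps the external data as an opaque payload and delegates interpretation to whatever engine the producing system registered:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Invented names throughout: external patient data is stored as an opaque
# payload, and interpretation is delegated to whatever engine the producing
# system has registered for that content type.

@dataclass
class ExternalBlock:
    source_system: str   # who produced the data
    content_type: str    # e.g. "application/x-vendor-ecg"
    payload: bytes       # we never parse this ourselves

RENDERERS: Dict[str, Callable[[bytes], str]] = {}

def register_renderer(content_type: str, renderer: Callable[[bytes], str]) -> None:
    RENDERERS[content_type] = renderer

def display(block: ExternalBlock) -> str:
    # Delegate interpretation instead of assuming we can read the payload.
    renderer = RENDERERS.get(block.content_type)
    if renderer is None:
        return f"[unviewable data from {block.source_system}]"
    return renderer(block.payload)

# The producing system supplies both the data and the engine to read it:
register_renderer("text/plain", lambda payload: payload.decode("utf-8"))
print(display(ExternalBlock("LabSys", "text/plain", b"Hb 14.1 g/dL")))
```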

Every non-trivial and distinct functionality in your application should be a separate module, or even better, a separate application. That way it can easily be replaced or changed when needed. As I mentioned before, the interfaces, and the module itself, will almost automatically be of better quality as well.
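
A minimal sketch of that rule, with hypothetical names: the rest of the application only ever sees a small interface, so the whole implementation behind it can be swapped out.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: a distinct piece of functionality (here, referrals)
# lives behind a minimal interface, so the whole implementation can be
# replaced without the rest of the application noticing.

class ReferralService(ABC):
    @abstractmethod
    def send(self, patient_id: str, to_unit: str, reason: str) -> str:
        """Send a referral and return a tracking id."""

class SimpleReferrals(ReferralService):
    def send(self, patient_id: str, to_unit: str, reason: str) -> str:
        return f"ref:{patient_id}:{to_unit}"

# Swapping in another vendor's referral module is a one-line change:
referrals: ReferralService = SimpleReferrals()
print(referrals.send("patient-1", "cardiology", "chest pain"))
```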

The most useful rule of thumb I can give you is this: if anyone proposes a project that includes the need for a standard containing more than 50 terms or so, say no. Or if you’re the kind of person who is actually making a living producing nothing but “deliverables” (as they call these stacks of unreadable documents), definitely say yes, but realize that your existence is a heavy load on humanity, and we’d all probably be better off without your efforts.

Solution: modular structure

Forcing a large application into small independent parts that have to communicate with each other through minimal interfaces is always a good thing. Large monolithic applications go “fat and lazy” when there is no requirement to keep them split up, and the effort to expand and maintain them goes through the roof. We’ve known all this for ages, but both our IT purchasers and our vendors carry on as if nothing were amiss, business as usual. Of course, monolithic means all the business goes to one vendor, making him fat and happy, while demanding very little thinking and effort from the purchasing organizations and IT support. Now, if IT support considered its actual task, which is not work avoidance but the implementation and support of an effective IT organization that enables the health-care staff to do a better job, things would look different.

The thing is, we do absolutely need modularity. The IT applications should mimic and support medicine as it is actually practiced, and not the other way around. Medical practice should not need to adapt itself to poorly conceived IT solutions. (You recognize this tendency by the constant cries for “more training”. It’s all just so much nonsense. My advice: ignore them.)

Different specialities have different needs for note taking and guidelines. Similarly, the needs for handling pharmaceutical prescriptions differ according to the type of department: internal medicine on a ward or in an outpatient setting differs from internal medicine as practiced in an intensive care unit. Different needs also arise for appointments according to the ownership of the practice, regardless of speciality. And so on.

These differences aren’t gratuitous; they are necessary and are an inherent part of how medicine works. The attempts to eliminate these differences by large, universal systems, and their large, universal ways of working, don’t work, since they force a suboptimal way of operating on the medical profession.
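
To sketch what that could mean technically (all names invented, nothing here is a real system): the same prescribing interface, but different implementations per type of department, instead of one universal workflow forced on everyone.

```python
# Invented names, purely to illustrate the point: department-specific needs
# map to department-specific modules behind a common interface.

class WardPrescribing:
    def prescribe(self, patient_id: str, drug: str, dose: str) -> str:
        return f"ward order: {drug} {dose} for {patient_id}"

class ICUPrescribing:
    def prescribe(self, patient_id: str, drug: str, dose: str) -> str:
        # An ICU module might instead demand infusion rates, pump settings, etc.
        return f"ICU infusion order: {drug} {dose} for {patient_id}"

MODULES = {"ward": WardPrescribing(), "icu": ICUPrescribing()}

def prescribing_for(department: str):
    # Each department gets the module that matches how it actually works.
    return MODULES[department]

print(prescribing_for("icu").prescribe("patient-1", "noradrenaline", "0.1 µg/kg/min"))
```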

Now, the defenders of the great unified systems, let’s call them “unicorn followers”, argue that medicine should adapt itself to how IT works, or else the IT systems will be too expensive and complex. To the unicorn followers we should say: yes, the IT systems will be more expensive, but the savings in medical practice will outweigh that cost by orders of magnitude. Don’t suboptimize and try to save just on IT, that is not your job! Your job is to enable medicine to save lives and money through better healthcare, not through junk IT software and equipment.

Oh, and while we’re at it: modular systems are actually cheaper to produce and work much better, as most software engineering texts explain and prove. And you out there who purchase, or sell, or build these systems, you have read those texts, haven’t you?

In short, even the unicorn followers, once they’ve picked up a bit of the computer science of the last half century and started to worry about what their real task in the medical context actually is or should be, will undoubtedly see the light and start to specify new systems as modular in every possible aspect. The unicorn, just like the “one medical records system”, is a mythical beast and will never be seen in real life. Even if it weren’t mythical, it would get stuck everywhere with that unwieldy horn and die of hunger. Good riddance.

Solution: Issues

Ah, finally we arrive at solutions. The first in the series is the elephant in the room: issues.

Why do I say “elephant in the room”? Because when a doctor examines and treats a patient, he thinks in “issues”, and the result of that thinking manifests itself in planning, tests, therapies, and follow-up. When he records the encounter, he records only the planning, tests, therapies, and follow-up, but not the main entity, the “issue”, since there is no place for it. The next doctor who sees the patient has to read about the planning, tests, therapies, and follow-up and then mentally reverse-engineer the process to arrive at which issue is ongoing. Again, he manages the patient according to that issue, then records everything but the issue itself.

Other actors, such as national registers and epidemiological data extraction, all go through the same process. Each has its own method of churning through planning, tests, therapies, and follow-up, reverse-engineering the data to arrive at what the issue is, only to discard it again.

This is what I mean by the “elephant in the room”. The problem is so pervasive, so obvious, and so large that it is incomprehensible to me how this situation ever came about, and why it is so difficult to convince our IT providers of the need for this entity in our systems and of the huge range of problems its introduction would solve. Doctors in general don’t see it either, but at least they are almost to a man (and woman) extremely easy to convince. And once they see the problem and the solution, it’s as obvious to them as it is to me. This is how it looks today, with all the different actors trying to make sense of a highly deficient EHR (the images are taken from a presentation, and I think you may get an idea of the meaning even without my sweet voice explaining what you see):

So, after that harangue, this is my message: we need the “issue” concept in our medical systems. If the issue concept is present in the EHR, there is no need for every doctor and every actor to keep reconstructing it from derived and deficient data. It’s very easy to adapt the issue templates to cover all the needs of the different actors, simply because it turns out they are all after more or less the same thing. This only becomes obvious once you see it from the perspective of “issues”.

If we look at what exactly an “issue” consists of, we see that it is an ICD-10 code, or an ICD-10 code range, that defines the symptom or disease as such. Further, it contains a clinical guideline on how to diagnose and treat the disease, or how to investigate it further and refine the diagnosis, including differential diagnoses. It would entirely replace the usual medical records in daily use, and it would best be presented on a touch-operated slate device such as the iPad, simply because following links, making choices, and looking up information will be much more important than entering text. Text entry will still be possible, but will be more of an exception than a rule.
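
As a rough sketch of what such an issue record could contain, going by the description above (the field names are my own guess, not a spec):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A guess at an issue record, following the description above; the field
# names are mine, not a specification.

@dataclass
class Issue:
    icd10: str                           # a code such as "I10", or a range "I20-I25"
    title: str                           # human-readable name of the issue
    guideline_url: Optional[str] = None  # clinical guideline to follow
    resolved: bool = False
    differentials: List["Issue"] = field(default_factory=list)  # unresolved alternatives
    subissues: List["Issue"] = field(default_factory=list)

# The entry screen then reduces to the patient's name plus a list of issues:
patient_issues = [
    Issue("I10", "Essential hypertension",
          guideline_url="https://example.org/guidelines/hypertension"),
    Issue("R07.4", "Chest pain, unspecified",
          differentials=[Issue("I20", "Angina pectoris")]),
]
```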

Naturally, even though you will work with “issues” instead of regular medical records, the medical records are still produced in the background, and preexisting records are both viewable and linkable through the same interface. If you look at the bottom of my little mock-up, you’ll see tabs that bring you to the old medication list, chronological records (old style EHR), referrals, lab, etc. But in general, you’ll be working in the “issue worksheet” for most of the time, only occasionally looking up information through the other tabs.

To emphasize the radical difference between this way of working and the old EHR way of working, I made a simple mockup of the entry screen for a patient record. All you see is the patient’s name and a list of issues, some of which are subissues, or differential diagnoses that haven’t been resolved yet. For an entry screen, that is actually all we need.

You may notice that “Eric the Seafarer” had breast cancer according to this screen, which is very unusual in men. But vikings were a strange and wonderful people, so I would not jump to conclusions about that.

Oh, better late than never: click on any image for a larger resolution.

That was a rough ride

Finally, we’re past the summing up of the problems in current health-care record systems. It was long, depressing, and admittedly quite boring at times, but oh so necessary. I hope you’re all in a suitably despairing state of mind and craving some solutions to all this misery, but first let’s see what we endured.

I published 11 (!) blog posts on problems with current systems, and I think I actually succeeded in presenting 11 different and independent problems, all of them highly significant, and all of them with a large impact on the quality of the health-care we provide. This last remark bears repeating: we cannot do good medicine with systems that have problems like these. We don’t diagnose as well as we could, we don’t institute optimal treatments as often as we could, and people suffer needlessly because of these problems. Also, any savings we achieve in IT departments thanks to these miserable systems are burned many times over in the increased costs of the suboptimal health-care they cause.

The blog posts are derived from my “master list” of problems in current systems, which you can find on the iota wiki. Use that list if you want to look up problems or solutions, since the very nature of a wiki means those pages can change and be brought up to date as time goes by and we all increase our understanding of the subject. The blog posts, however, just like fine summer weather, are but a memory.

It’s high time to do something less depressing than rehashing all that misery, so from the next post onwards, we’ll talk about solutions and how to get there. The next couple of posts will cover the different parts of the solution to all this. Yes, I think I have a solution to it, amazingly. If you think you have reason to believe that one or more of my solutions won’t work, say so. You may be right, and if you are, I may need to rethink some of the solutions.

Don’t say that “if your idea is so good, why hasn’t it already been done?”, and don’t tell me it can’t be done because “nobody listens”, “bureaucrats have all the power”, “it’s all going to hell anyway”, because it’s not true. I won’t allow it to be true and neither should you. I am personally convinced that if the “bureaucrats” really understood the problems, they would jump on the opportunity to do the right thing. So our problem is to make them listen, even if it takes another couple of hundred blog posts and projects.

Enough of that philosophizing… on to the solutions. The next five posts will be (unless I change my mind about any of them or add more):

  1. The introduction of “issues”
  2. The support of a modular structure
  3. The improvement of quality in specifications and interfaces
  4. The lessening of dependence on overly heavy standards work
  5. The opening of the market to smaller entrepreneurs

Stay tuned.

Problem: too much secretarial work

Since current electronic health-care record systems are largely text based, there is a large amount of text to be written after each encounter. In Sweden, doctors generally dictate the entry after each encounter, and a secretary then types it out. The same procedure is used for referrals, reports, and letters.

Brevity in notes is definitely something to strive for, but the dictation system lends itself to overly long notes instead. This, in turn, makes the text mass of the EHR grow faster than it otherwise would, a problem I have already discussed elsewhere.

Since secretaries are often overloaded with work, it takes a day or two, in the best of cases, before the notes actually show up in the EHR. It is not unusual to see a delay of two or three weeks between dictation and transcription. During that time, work is more difficult for other doctors and staff: if you need the information, the only recourse is to listen to the original sound recording of the dictation, and it is almost impossible for anyone but a trained secretary to understand the garbled and mumbled monologue doctors usually produce. It seems the only thing worse than our handwriting is our dictation.

If the doctor dictates referrals, these too wait for days, or in the worst case weeks, before being sent off. Meanwhile, nothing happens and the patient is left suffering, waiting, and possibly drawing disability benefits from the insurance as well.

The underlying problem here is that the health-care records are text based, due to the lack of a correct analysis of the problem domain. Since the user cannot interact with the object that should have been there but isn’t, the “disease”, he is forced to describe the interactions with words instead.

Problem: closed interfaces

Current electronic health-care systems are built as monoliths. They are constructed and sold as all-in-one solutions, largely because of the failure of care provider organizations to successfully manage collections of smaller, more specialized systems. This failure is caused by two factors:

  • The failure of the smaller vendors to cooperate and produce simple methods of supporting each other’s need for interconnection
  • The failure of IT departments at health-care institutions to actively seek out and support such best-of-breed solutions

Since the specifications are designed by the IT departments, and the negotiations with the vendors are also done by the IT department, the ultimate choice of system will be something that is primarily intended to keep the IT department budget within bounds. If that involves having less functionality for the ultimate users, that is not something the IT department is aware of. It also has no incentive to find out.

What appears to us as a problem of closed interfaces is thus rooted in a deeper problem, namely that closed interfaces are exactly what the current IT departments wish for. First and foremost, they do not want any open feature that would enable the medical departments to ask for, and possibly get, the smaller best-of-breed systems they need for clinical care, since that would often mean committing more resources to IT configuration and support.

Ultimately, this is a problem of priorities. Currently, savings in IT department resources are clearly prioritized above the need for better IT support on the floor. I find it very hard to believe that the savings achieved in the IT departments of our health-care institutions, if any, are anywhere near the cost to health-care in the form of delayed diagnoses, increased pain and suffering, and increased insurance costs. As long as the authorities let IT departments scrimp on medical IT support by specifying solutions that inhibit any attempt at improving health-care IT beyond what the all-in-one vendor deigns to produce, we will not be able to improve health-care through better IT use. This is basically a political problem.

Problem: confidentiality

Electronic health-care records must implement and respect confidentiality settings, such that certain care givers cannot view information that the patient may not want them to see. There are many aspects to this problem, such as whether the doctor should be able to break these confidentiality barriers in emergency situations, whether the existence of hidden data should be indicated, and so on, but the only question we will discuss in this section is which entity the confidentiality is applied to.

Currently, in Sweden at least, confidentiality is applied to entities such as care provider, document, referral, record notations, warnings, and prescriptions. Exactly which elements you can apply confidentiality attributes to, varies according to EHR system vendor.

The reason confidentiality has to be applied to all these peripheral data types is that the element you actually need to apply confidentiality to, the disease as such, does not exist in these systems. This explains both why there are so many complicated rules and elements, and why the result will always be errors.

Applying confidentiality causes different problems in each of these cases. A very short list of problems and dangers follows:

  • If applied to care provider, any non-confidential actions performed by that care provider will be hidden behind the confidentiality shroud as well. Also, any action taken by any other, non-included care provider for the confidential disease, will not be hidden and will divulge the existence of the condition. Something as simple as a general practitioner renewing a prescription for a drug used in schizophrenia, for example, will divulge the existence of the condition, regardless of the wishes of the patient.
  • If applied to prescriptions, the doctor will be unable to check for interactions and contraindications, resulting in outright danger to the patient. If the patient develops liver disease or renal failure or any other condition that would make a prescribed but confidential medication dangerous, the doctor will not know about that. The current EHR systems are also unable to warn for contraindications in these situations, since they have no concept of diseases or contraindications.
  • If applied to warnings, such as chronic infections, it puts medical staff at risk by not alerting them to the necessity of taking extra precautions while working with the patient.
  • If applied to investigations, referrals, lab results, etc., the same kinds of dangers occur. Since the EHR is unaware of diseases as concepts, it is also totally unable to draw any conclusions on its own and issue warnings for possible medical errors due to the hiding of information.
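
By contrast, here is a sketch, with invented names, of what it could look like if confidentiality were attached to the disease entity itself, as I argue it should be: everything linked to a confidential issue inherits a single setting.

```python
from dataclasses import dataclass, field
from typing import List

# Invented sketch: confidentiality carried by the disease entity itself,
# so one setting covers the prescriptions, notes, warnings, and so on that
# belong to it, instead of scattering flags over peripheral data types.

@dataclass
class Issue:
    icd10: str
    confidential: bool = False
    prescriptions: List[str] = field(default_factory=list)
    notes: List[str] = field(default_factory=list)

def visible_items(issue: Issue, viewer_may_see_confidential: bool) -> List[str]:
    # One rule, applied to the one entity that actually carries the secret.
    if issue.confidential and not viewer_may_see_confidential:
        return []
    return issue.prescriptions + issue.notes
```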

The only medically responsible conclusion you can draw is that it is dangerous to apply any confidentiality of any kind to current medical health-care records. There is no way these systems can hide information in a way that makes the risk of serious error low enough to be acceptable. Sadly, the law mandates these confidentiality mechanisms, clearly prioritizing the patient’s right to confidentiality above medical safety. Personally, I don’t know if this is right or wrong, but the lawmaker is placed in the unenviable position of having to make such a choice by the unnecessarily poor design of current EHR systems.

Problem: too much text

This post is part of a series detailing the problems of current electronic healthcare records. To orient yourself, you can start at the index page on “presentation” on the iota wiki. You will find this and other pages on that wiki as well. The wiki pages will be continuously updated.

Current electronic health-care record systems contain huge amounts of textual data without any useful structure. With no structure, the sheer mass of text becomes a problem in itself, standing in the way of the reader. Since there is no useful structure, the entire text must be read and internalized at every encounter where the patient is new to the doctor, or where he hasn’t seen the patient recently, which in primary care means most encounters. If the amount of text could be reduced to the absolute essentials, the reader might have a chance of getting through it, but we passed that point a while ago in most systems. That was a point of no return, as far as current EHR systems are concerned.

A number of initiatives from outside direct patient care add yet more non-essential or duplicated text. For example, the quality register initiatives and the “lifestyle” questionnaires add large amounts of text to the EHR, further obscuring the useful text we’re looking for.

Once a certain point is reached where the amount of text becomes too large to read before each encounter, several things happen, all of them bad:

  • The reader gives up on the EHR text and asks the patient instead. Since he needs to allow time for the patient to answer, he actually reads considerably less of the EHR than he did before. Most doctors I asked admitted to often reading just the one or two most recent entries before simply asking the patient for a short history.
  • Once the doctor does understand the essentials of the patient’s history, he usually summarizes it in his own journal entry, so as not to lose it. This isn’t as useful as it may seem, since the summary soon scrolls away beyond the “reading horizon” of one or two entries, after which it isn’t useful anymore and only contributes to the enormous mass of unstructured text, reinforcing the vicious circle.
  • Not knowing what’s in a medical record is very stressful and tends to make doctors more defensive in writing new notes. These notes tend to be more elaborate than they need to be, once more adding to the mass of text.

In other words, as the EHR looks today, it is of critical importance to maintain the absolute minimum of text in the records and to avoid duplication and unrelated information as much as humanly possible. A disturbing number of government initiatives, however, result in exactly the opposite effect, interconnecting EHR systems over large geographic areas while at the same time sprinkling the EHR text mass with redundancies and irrelevancies to a degree that is outright dangerous.