Method forwarding in Objective-C

I scraped together the following from a number of sources and got it to work fine and without compiler warnings. Among the interesting links are:

http://cocoawithlove.com/2008/03/construct-nsinvocation-for-any-message.html
http://macdevelopertips.com/objective-c/objective-c-categories.html

But, I get ahead of myself. The idea here is to have one object forward method invocations to another. Typically, you have a class responding to some method, then you decide to make an instance of that class a member of an outer class, and any messages sent to the outer class should be forwarded to the inner class if that’s where they are handled. And, of course, you don’t want to sit and write all these methods again in the outer class just so the inner class gets called. An example:

I have a class I call IotaDOM2TableProxy which has a method “rowsInSection:”. The class IssueWorksheet holds an instance of IotaDOM2TableProxy as a property. I now want any “rowsInSection:” messages sent to the outer IssueWorksheet to be forwarded to the inner IotaDOM2TableProxy object, without having to declare any “rowsInSection:” method on the IssueWorksheet class. This is how you do it.

Firstly: the target, the IotaDOM2TableProxy class, needs no modification at all. It never knows what hit it, or rather, where the invocations come from. So we’ll not say anything more about it than that it contains the declaration of the “rowsInSection:” method (among a load of other stuff):

@interface IotaDOM2TableProxy : NSObject {
...
}
- (NSUInteger)rowsInSection:(NSUInteger)section;
@end

The IssueWorksheet class holds an instance of the IotaDOM2TableProxy, and does not declare any method called “rowsInSection”:

@interface IssueWorksheet : NSObject {
    IotaDOM2TableProxy *dom2tableProxy;
}
@property (nonatomic, retain) IotaDOM2TableProxy *dom2tableProxy;
@end

To make IssueWorksheet forward any calls it doesn’t know about to IotaDOM2TableProxy, we have to override three methods in IssueWorksheet:

- (BOOL)respondsToSelector:(SEL)aSelector {
    return [super respondsToSelector:aSelector] || 
           [self.dom2tableProxy respondsToSelector:aSelector];
}

- (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector {
    NSMethodSignature *sig;
    sig = [super methodSignatureForSelector:aSelector];
    if (sig == nil)
        sig = [self.dom2tableProxy methodSignatureForSelector:aSelector];
    return sig;
}

- (void)forwardInvocation:(NSInvocation *)invocation {
    SEL aSelector = [invocation selector];
    if ([self.dom2tableProxy respondsToSelector:aSelector])
        [invocation invokeWithTarget:self.dom2tableProxy];
    else
        [self doesNotRecognizeSelector:aSelector];
}

And this works beautifully for one, two, or any number of methods to forward. There’s just one problem: the compiler complains that IssueWorksheet “may not respond to method…” and we should always eliminate warnings. The easiest way is to declare the forwarded methods in a category, which only mildly defeats the purpose of the exercise. But it works and is safe. You do that by adding the category after the interface declaration of IssueWorksheet, so the header file will now look as follows in its entirety:

//
//  IssueWorksheet.h
//  iotaPad1
//
//  Created by Martin on 2010-06-08.
//

#import <Foundation/Foundation.h>
#import "IotaDOM2TableProxy.h"

@interface IssueWorksheet : NSObject {
    IotaDOM2TableProxy *dom2tableProxy;
}

@property (nonatomic, retain) IotaDOM2TableProxy *dom2tableProxy;

- (id)initFromFile:(NSString *)name ofType:(NSString *)type;


@end

@interface IssueWorksheet (ForwardedMethods) 
- (NSUInteger)rowsInSection:(NSUInteger)section;
@end

In the above, I call the category “ForwardedMethods”, but you can call it anything you like. The name doesn’t matter and isn’t referenced anywhere else, but the category must have a unique name of some sort.

Please note: there is no implementation in the .m file for “rowsInSection”. The category definition suffices to shut the compiler up.
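
To see the forwarding in action, here’s a minimal usage sketch; the file name, type, and section index are invented for the example:

IssueWorksheet *sheet = [[IssueWorksheet alloc] initFromFile:@"worksheet" ofType:@"xml"];
// IssueWorksheet implements no rowsInSection: of its own, so the runtime
// builds an NSInvocation and hands it to our forwardInvocation:, which
// relays it to the inner IotaDOM2TableProxy.
NSUInteger rows = [sheet rowsInSection:0];
NSLog(@"%lu rows", (unsigned long)rows);
[sheet release];

The category declaration is what lets this compile without warnings; at runtime, it’s forwardInvocation: that does the actual work.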

Solution: open the market

Now we’ve arrived at the last of the solutions in my list, namely “Opening the market for smaller entrepreneurs”. There are a number of reasons we have to do this, and I’ve touched on most of them before in other contexts.

The advantages of having a large all-in-one vendor to deliver a single system doing everything for your electronic health-care record needs are:

  • You don’t have to worry about interconnections; there aren’t any
  • You don’t have to figure out who to call when things go wrong; there’s only one vendor to call
  • You can reduce your support staff, or at least you may think so initially
  • You can avoid all arguments about requirements from the users; there is nothing you can change anyway
  • It looks like good leadership to the uninitiated, just like Napoleon probably looked pretty good at Waterloo, at least to start with

The disadvantages are:

  • Since you have no escape once the decision is made, the costs are usually much higher than planned or promised
  • There is only one support organization, and they are usually pretty unpleasant to deal with, and almost always powerless to do anything
  • Any extra functionality you need must come from the same vendor, and will cost a fortune, and will always be late, bug-ridden, and wrong
  • The system will be worst-of-breed in every individual area of functionality; its only characteristic being that it is all-encompassing (like mustard gas over Ieper)
  • The system will never be based on a simple architecture or interface standards; there is no need for it, the vendor usually doesn’t have the expertise for it, and the designers have no incentives to do a quality job
  • Since quality is best measured as the simplicity and orthogonality of interfaces and public specs, and large vendors don’t deliver either of these, there is no objective measure of quality, hence there is no quality (there’s a law in there somewhere about “that which is not measurable does not exist”; was it Newton who said that?)
  • Due to poor architecture, the system will almost certainly be developed as too few and too large blocks of functionality, making them harder than necessary to maintain (yes, the vendor maintains it for you, but you pay and suffer the poor quality)

Everybody knows the proverb about power: it corrupts. Don’t give that kind of power to a single vendor, he is going to misuse it to his own advantage. It’s not a question of how nice and well-meaning the CEO is, it is his duty to screw you to the hilt. That’s what he’s being paid to do and if he doesn’t, he’ll lose his job.

But if we want the customers to choose best-of-breed solutions from smaller vendors, we have to be able to offer them these best-of-breed solutions in a way that makes it technically, economically, and politically feasible to purchase and install such solutions. Today, that is far from the case. Smaller vendors behave just like the big vendors, but with considerably less success, using most of their energy bickering about details and suing each other and the major vendors, when things don’t go as they please (which they never do). If all that energy went into making better products instead, we’d have amazingly great software by now.

The major problem is that even the smallest vendor would rather go bust trying to build a complete all-in-one system for electronic health-care records, than concede even a part of the whole to a competitor, however much better that competitor is when it comes to that part. And while the small vendors fight their little wars, the big ones run off with the prize. This has got to stop.

One way would be for the government to step in and mandate interfaces, modularity, and interconnection standards. And they do, except this approach doesn’t work. When government does this, it selects projects on the advice of people whose livelihood depends on the creation of long-lived committees where they can sit forever producing documents of all kinds. So all you get is high cost, eternal committees, and no joy. Since no small vendor could ever afford to keep an influential presence on these committees, the work will never result in anything that is useful to the smaller vendors. The large vendors, meanwhile, don’t need standards or rules of any kind anyway, since they only connect to themselves and love to blame the lack of useful standards for not being able to accommodate any other vendor’s systems. This way, standards consultants standardize, large vendors ignore the standards and keep selling, and everyone is happy except for the small vendors and, of course, the users who keep paying through the nose for very little in return.

There’s no way out of this for the small vendors and the users if you need standards to interoperate, but luckily for us, standards are largely useless and unnecessary even in the best of cases. All it takes is for one or two small vendors to publish de facto standards, simple and practical enough for most other vendors to pick up and use. I’ve personally seen this happen in Belgium in the 80’s and 90’s, where a multitude of smaller EHR systems used each other’s lab and referral document standards instead of waiting for official CEN standards, which didn’t work at all once published (see my previous blog post). In the US, standards are generally not invented by standards bodies but selected from de facto standards already in use, and then approved, which explains why US standards usually do work while European standards don’t.

Where does all this leave us? I see only one way of getting out of this mess, and that is for smaller vendors to start sharing de facto standards with each other. Which leads directly to my conclusion: everything I do with iotaMed will be open for use by others. I will define how issue templates will look and how issue worksheets and observations will be structured, but those definitions are free for any vendor to use, small or large. At the start, I reserve the right to control which document structures and interfaces can be called “iota” and “iotaMed”, but as soon as other players take an active and constructive part in all this, I fully intend to share that control. An important reason not to let go of it from the start is that I am truly afraid of a large “committee” springing up whose only interest will be to make it all cost more, increase the page count, and take forever to produce results. And that, I will fight tooth and nail.

On the other hand, I’ll develop the iotaMed interface for the iPad, and I intend to publish the source for that, but keep the right to sell licenses for commercial use, while non-profit use will be free. Exactly where to draw that line needs to be defined later, but it would be a really good thing if several vendors agreed on a common set of principles, since that would make it easier for our customers to handle. A mixed license model with GPL and a regular commercial license seems to be the way to go. But in the beginning, we have to share as much as possible, so we can create a market where we all can add products and knowledge. Without bootstrapping that market, there will be no products or services to sell later.

Solution: less need for standards

Around 1996 I was part of the CEN TC251 crowd for a while, not as a member but as an observer. CEN is the European standards organization, and TC251 is “Technical Committee 251”, the committee that does all the medical IT standardization. The reason I was involved is that I was then working as a consultant for the University of Ghent in Belgium, and my task was to create a Belgian “profile” of the “Summary of Episode of Care” standard for the Belgian market. So I participated in a number of meetings of the TC251 working groups.

For those in the know, I must stress that this was the “original” standards effort, all based on Edifact-like structures and before the arrival of XML on the stage. I’ve heard from people that the standards that were remade in XML form are considerably more useful than the stuff we had to work with.

I remember this period in my life as a time of meeting a lot of interesting people and having a lot of fun, but at the same time being excruciatingly frustrated by overly complex and utterly useless standards. The standards I had to work with simply didn’t compute. For months I went totally bananas trying to make sense of what was so extensively documented, but never succeeded. After a serious talk with one of the chairpersons, a very honest Brit, I finally realized that nobody had ever tried this stuff out in reality, and that most, maybe even all, of my complaints about inconsistencies and impossibilities were indeed real and recognized, but that it was politically impossible to admit that publicly. Oh boy…

I finally got my “profile” done by simply chucking out the whole lot and starting over, writing the entire thing as I would have done if I’d never even heard of the standards. That version was immediately accepted, and I was recently told it is still used with hardly any changes as the Belgian Prorec standard, or at least a part of it.

The major lesson I learned from the entire CEN debacle (it was a debacle for me) is that the first rule in standardization of anything is to avoid it. Don’t ever start a project that requires a pre-existing standard to survive. It won’t survive. The second rule is: if it requires a standard, it should be small and functional, not semantic. The third is: if it is a semantic standard, it should comprise a maximum of a few tens of terms. Anything beyond a hundred is useless.

It’s easy to see that these rules hold in reality. HTTP is a hugely successful standard since it’s small and has just a few semantic terms, such as GET, PUT, etc. XML: the same thing holds. Snomed CT: a few hundred thousand terms… you don’t want to hear what I think of that, you’d have to wash your ears out with soap afterwards.

In all my years of developing software, I’ve never encountered a problem that needed a standard like Snomed CT that couldn’t just as well be solved without it. And in all those years, I’ve never seen a project requiring a standards effort as massive as Snomed CT actually succeed. Never. I can’t say it couldn’t happen, I’m only saying I’ve never seen it happen.

The right way to design software, in my world, is to construct everything according to your own minimal coding needs, but always keep in mind that everything your software does could be imported and exported using a standard that differs from what you use internally. That is, you should make your data simple enough and flexible enough to allow the addition of a standard later, if it is ever needed. In short: given the choice between simple and standard, always choose simple.

Exactly how to do this is complex, but not complex in the way standards are, only complex in the way you need to think about it. In other words, it requires that rarest of substances, brain sweat. Let me take a few examples.

If you need to get data from external systems, and you do that in your application in the form of synchronous calls only, waiting for a reply before proceeding, you severely limit the ability of others to change the way you interact with these systems. If you instead create as many of your interactions as possible as asynchronous calls, you open up a world of easy interfacing for others.
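
As a sketch of the difference in Objective-C, with invented class and method names (and assuming blocks are available):

// Synchronous: the caller blocks until the external system answers,
// and the transport is effectively baked into the call site.
// - (IotaLabResult *)labResultForPatientID:(NSString *)patientID;

// Asynchronous: the caller hands over a callback and carries on.
// Whatever sits behind the call (a local cache, another vendor's
// server) can be swapped out without the caller ever noticing.
- (void)labResultForPatientID:(NSString *)patientID
                   completion:(void (^)(IotaLabResult *result,
                                        NSError *error))completion;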

If you use data from other systems, try to use them as opaque blocks. That is, if you need to get patient data, don’t assume you can actually read that data, but let external systems interpret them for you as much as possible. That allows other players to provide patient data you never expected, but as long as they also provide the engine to use that data, it doesn’t matter.
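
A sketch of what that could look like, again with invented names; the point is that the receiving system stores and passes the block along without ever parsing it:

// Patient data from an external system, kept as an opaque blob.
@interface IotaExternalRecord : NSObject {
    NSData *opaqueData;
    NSString *providerID;
}
@property (nonatomic, retain) NSData *opaqueData;      // never parsed here
@property (nonatomic, retain) NSString *providerID;    // who can interpret it
@end

// The providing system also supplies the interpretation, for
// instance as a view it renders on our behalf.
@protocol IotaExternalRenderer
- (UIView *)viewForRecord:(IotaExternalRecord *)record;
@end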

Every non-trivial and distinct piece of functionality in your application should be a separate module, or even better, a separate application. That way it can easily be replaced or changed when needed. As I mentioned before, the interfaces, and the module itself, will almost automatically be of better quality as well.

The most useful rule of thumb I can give you is this: if anyone proposes a project that includes the need for a standard containing more than 50 terms or so, say no. Or if you’re the kind of person who is actually making a living producing nothing but “deliverables” (as they call these stacks of unreadable documents), definitely say yes, but realize that your existence is a heavy load on humanity, and we’d all probably be better off without your efforts.

Solution: improved specifications

The quality of our IT systems for health care is pretty darn poor, and I think most people agree on that. There have been calls for oversight and certification of applications to lessen the risk of failures and errors. In Europe there is a drive to have health-care IT solutions go through a CE process, which more or less amounts to a lot of documentation requirements and change control. In other words, the CE process certifies a part of the process used to produce the applications, not the applications themselves. But I dare claim this isn’t very useful.

If you want to get vendors to produce better code with fewer bugs, there is only one thing you can do to achieve that: inspect the code, directly or indirectly. Everything else is too complicated and ineffective. The only thing the CE process will achieve is more bureaucracy, more paper, slower and more laborious updates, fewer timely fixes, and higher costs. What it also will achieve, and this may be very intentional, is that only large vendors with massive overhead staff can satisfy the CE requirements, killing all smaller vendors in the process.

But back to the problem we wanted to solve, namely code quality. What should be done, at least in theory, is actual approval of the code in the applications. The problem here is that very few people are qualified to judge code quality correctly, and it’s very hard to define on paper what good code should look like. So as things stand today, we are not in a position to mandate a certain level of code quality directly, leaving us no other choice than to do it indirectly.

I think most experienced software developers agree that the public specifications and public APIs of a product very accurately reflect the inner quality of the code. There is no reason in theory why this needs to be the case, but in practice it always is; I’ve never seen an exception to this rule. I can assert something even stronger: a product that has no public specifications or no public API is also guaranteed to be of poor quality. Again, I’ve never seen an exception to this rule.

So instead of checking paper-based processes as CE does, let’s approve the specifications and APIs. Let the vendors subject these to approval by a public board of experts. If the specs make sense, and the APIs are clean and orthogonal and seem to serve the purpose the specs describe, then it’s an easy thing to test whether the product adheres to the specs and the APIs. If it does, it’s approved, and we don’t need to see the original source code at all.

There is no guarantee that you’ll catch all bad code this way, but it’s much more likely than if you use CE to do it. It also has the nice side effect of forcing all players to actually open up specs and APIs, else there’s no approval.

One thing I can tell you off the bat: the Swedish NPÖ (National Patient Summary) system would flunk such an inspection, and flunk it hard. That API is the horror of horrors. Or to put it another way: if an approval process would let NPÖ pass, it’s a worthless approval process. Hmmm… maybe we can use NPÖ as a test of approval processes? No approval process should be accepted for general use unless it totally flunks NPÖ. Sounds fine to me.

Solution: modular structure

Forcing a large application into small independent parts that communicate with each other through minimal interfaces is always a good thing. Large monolithic applications grow “fat and lazy” when there is no requirement to keep them split up, and the effort to expand and maintain them goes through the roof. We’ve known all this for ages, but both our IT purchasers and our vendors carry on as if nothing were amiss, business as usual. Of course, monolithic means all the business goes to one vendor, making him fat and happy, and it demands very little thinking and effort from the purchasing organizations and the IT support. Now, if IT support considered its actual task, which is not work avoidance but the implementation and support of an effective IT organization enabling the health-care staff to do a better job, things would look different.

The thing is, we do absolutely need modularity. The IT applications should mimic and support medicine as it is actually practiced, and not the other way around. Medical practice should not need to adapt itself to poorly conceived IT solutions. (You recognize this tendency by the constant cries for “more training”. It’s all just so much nonsense. My advice: ignore them.)

Different specialities have different needs for note-taking and guidelines. Likewise, the needs for handling pharmaceutical prescriptions differ according to the type of department: internal medicine on a ward or in an outpatient clinic differs from internal medicine as practiced in an intensive care unit. Different needs also arise for appointments according to the ownership of the practice, regardless of speciality. And so on.

These differences aren’t gratuitous; they are necessary and are an inherent part of how medicine works. The attempts to eliminate these differences by large, universal systems, and their large, universal ways of working, don’t work, since they force a suboptimal way of operating on the medical profession.

Now, the defenders of the great unified systems, let’s call them “unicorn followers”, argue that medicine should adapt itself to how IT works, else the IT systems will be too expensive and complex. To the unicorn followers, we should say: yes, the IT systems will be more expensive, but the savings in medical practice will outpace them by orders of magnitude. Don’t suboptimize and try to save just on IT, that is not your job! Your job is to enable medicine to save lives and money through better healthcare, not through junk IT software and equipment.

Oh, and while we’re at it: modular systems are actually cheaper to produce and work much better, but that is something most software engineering texts explain and provide the proofs for. And you out there who purchase, or sell, or build these systems, you have read those texts, haven’t you?

In short, even the unicorn followers, once they’ve picked up a bit of the computer science of the last half century and started to think about what their real task in the medical context actually is or should be, will undoubtedly see the light and start to specify new systems as modular in every possible aspect. The unicorn, just like the “one medical records system”, is a mythical beast and will never be seen in real life. Even if it weren’t mythical, it would get stuck everywhere with that unwieldy horn and die of hunger. Good riddance.

Solution: Issues

Ah, finally we arrive at solutions. The first in the series is the elephant in the room: issues.

Why do I say “elephant in the room”? Because when a doctor examines and treats a patient, he thinks in “issues”, and the result of that thinking manifests itself in planning, tests, therapies, and follow-up. When he records the encounter, he records only planning, tests, therapies, and follow-up, but not the main entity, the “issue” since there is no place for it. The next doctor that sees the patient needs to read about the planning, tests, therapies, and follow-up and then mentally reverse-engineer the process to arrive at which issue is ongoing. Again, he manages the patient according to that issue, then records everything but the issue itself.

Other actors, such as national registers, extraction of epidemiological data, and all the rest, go through the same process. They all have their own methods of churning through planning, tests, therapies, and follow-up, reverse-engineering the data to arrive at what the issue is, only to discard it again.

This is what I mean by the “elephant in the room”. The problem is so pervasive, so obvious, so large, that it is incomprehensible to me how this situation ever came about, and why it is so difficult to convince our IT providers of the need for this entity in our systems and of the huge range of problems its introduction would solve. Doctors in general don’t see it either, but at least they are almost to a man (and woman) extremely easy to convince, and once they see the problem and the solution, it’s as obvious to them as it is to me. This is how it looks today, with all the different actors trying to make sense of a highly deficient EHR (the images are taken from a presentation, and I think you may get an idea of the meaning even without my sweet voice explaining what you see):

So, after that harangue, this is my message: we need the “issue” concept in our medical systems. If the issue concept is present in the EHR, there is no need for every doctor and every actor to keep reconstructing it from derived and deficient data. It’s very easy to adapt the issue templates to cover all the needs of the different actors, simply because it turns out they are all after more or less the same thing. This only becomes obvious once you see it from the perspective of “issues”.

If we look at what exactly an “issue” consists of, we see that it is an ICD-10 code, or an ICD-10 code range, that defines the symptom or disease as such. Further, it contains a clinical guideline on how to diagnose and treat the disease, or how to investigate it further and refine the diagnosis, including differential diagnoses. It would entirely replace the usual medical record in daily use, and it would best be presented on a touch-operated slate device such as the iPad, simply because following links, making choices, and looking up information will be much more important than entering text. Text entry will still be possible, but will be the exception rather than the rule.
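
To make this concrete, here is a rough sketch of such an issue as a data structure. These are not the actual iotaMed definitions, just invented names illustrating the idea:

@interface IotaIssue : NSObject {
    NSString *icd10Range;
    IotaGuideline *guideline;
    NSArray *subIssues;
}
// the ICD-10 code or range that defines the symptom or disease,
// e.g. @"I50" for heart failure, or a range like @"E10-E14"
@property (nonatomic, retain) NSString *icd10Range;
// the clinical guideline driving diagnosis, treatment, and follow-up
@property (nonatomic, retain) IotaGuideline *guideline;
// unresolved differential diagnoses appear as subissues
@property (nonatomic, retain) NSArray *subIssues;
@end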

Naturally, even though you will work with “issues” instead of regular medical records, the medical records are still produced in the background, and preexisting records are both viewable and linkable through the same interface. If you look at the bottom of my little mock-up, you’ll see tabs that bring you to the old medication list, chronological records (old-style EHR), referrals, lab, etc. But in general, you’ll be working in the “issue worksheet” most of the time, only occasionally looking up information through the other tabs.

To emphasize the radical difference between this way of working and the old EHR way of working, I made a simple mockup of the entry screen for a patient record. All you see is the patient’s name and a list of issues, some of which are subissues, or differential diagnoses that haven’t been resolved yet. For an entry screen, that is actually all we need.

You may notice that “Eric the Seafarer” had breast cancer according to this screen, which is very unusual in men. But vikings were a strange and wonderful people, so I would not jump to conclusions about that.

Oh, better late than never: click on any image for a larger resolution.

Problem: no searchability

This post is part of a series detailing the problems of current electronic healthcare records. To orient yourself, you can start at the index page on “presentation” on the iota wiki. You will find this and other pages on that wiki as well. The wiki pages will be continuously updated.

Since current electronic health-care record systems have no knowledge of diseases as entities, we can’t drill down into the structure to locate the necessary detail about the diagnosis or treatment of a disease. The information about diseases is spread out in the huge block of text that forms the EHR for a patient, so the only way to locate information is to search for particular words or terms that one hopes are related to the treatments one is looking for. Interestingly, not one of the EHR systems the author has seen has implemented even the most rudimentary search ability such as “Find”. It’s hard to believe that these multimillion dollar systems don’t even have the features the lowly “Notepad” app has had since the inception of Windows in the 80’s, but that is the case. This leaves us with nothing else than eyeballing all the text manually, from start to finish. A clearly absurd state of affairs.

Search, even if implemented right, helps only in some edge cases. If you look for a reason why a certain medication was given or not given, it can help. If you look for treatments for a known disease, it can also help, but if you look for issues in the patient’s history that you don’t know about yet (the most important and most frequent search we do in an EHR in primary care), you’re out of luck even with a search function since you don’t know what you are searching for. A search can fill the function of an index, but not of a table of contents.

A summary of contents could be in the form of a “tag cloud”, but no EHR the author is aware of has even attempted to implement any such feature. Implementing a “tag cloud” of terms used in a medical record, if done right and with taste, could make the search problem somewhat more tractable by making it easier to navigate the old unstructured information from current EHR systems. It would not by any means replace “issues” as a structure, but would be helpful when linking legacy information to “issues” in a modern iotaMed based EHR.
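
As a sketch of how little the core of such a feature would take, here is a naive term count for a tag cloud in Objective-C, assuming the record text is available as one NSString called recordText; stop-word filtering and weighting are crudely faked:

NSCountedSet *terms = [NSCountedSet set];
NSArray *words = [[recordText lowercaseString]
    componentsSeparatedByCharactersInSet:
        [NSCharacterSet whitespaceAndNewlineCharacterSet]];
for (NSString *word in words) {
    if ([word length] > 3)      // crude stand-in for a real stop-word list
        [terms addObject:word];
}
// the font size of each term in the cloud would then scale
// with [terms countForObject:term]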

In specialist care, the balance is somewhat different. Since searching for unrelated diseases is less frequent, a search on words or terms is relatively more important (one more often knows what one is looking for) and both a “Find” function and a “tag cloud” are even more sorely missed than in primary care. Even though both of these functions would be very useful, their usefulness arises from the fact that the lack of “issues” makes the EHR information such a mess to begin with. In the presence of “issues”, there would rarely be a reason to do a free search at all over an EHR, since information would be found in the place where it belongs.

This is not a reason to cast aside “Find” and “tag clouds” even for specialist care, since the legacy data in current EHR systems will be with us for a very long time yet, before it can all be linked up with “issues” and brought into an “issue”-based structure. And even then, in that far future, “Find” and “tag clouds” will be essential tools, albeit no longer the only tools to aid in the comprehension of the medical record.

Problem: no contraindications

Electronic health-care record systems are widely and frequently claimed to reduce injury and death due to prescription errors, since they are able to detect and warn about interactions between products. This claim is largely nonsense, for the following reasons:

  • Interactions between products are not the only dangerous effects we have from bad prescriptions
  • Interactions aren’t even among the most important dangers
  • Most doctors know of the most important interactions and do not generally need these warnings
  • Most warnings we do get from these systems are hilarious or tragic, or plain boring, depending on your mood. They are rarely useful.

Most of these warning systems actually contribute to the danger instead of reducing it, simply by existing. It’s too easy for trusting young (or old) doctors to rely on the system to warn about interactions, creating a habit of optimistic prescribing. These users don’t realize how bad these systems generally are, so they adopt dangerous behavior and simply try out prescriptions just to see if they get warnings. If not, they prescribe.

The real problem, however, is contraindications. The presence of certain diseases makes it dangerous to administer certain therapies. A number of classes of medications should not be used if the patient is pregnant or has an enlarged prostate, certain cardiac arrhythmias, or glaucoma, just to name a few examples. Some of these contraindications are deadly, and most are hard to remember and easy to miss. But nobody talks about them, since our EHR systems, lacking the concept of “disease” in any form, are totally unable to check for contraindications. Instead, the vendors praise the abilities of their interaction warnings, which in actual fact amount to almost nothing.
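
Here is a sketch of the check that becomes possible the moment the EHR knows the patient’s diseases as entities. All names are invented, and the contraindication data would have to come from a pharmacological database:

// Return the patient's issues that contraindicate the given drug.
- (NSArray *)contraindicationsForDrug:(IotaDrug *)drug
                              patient:(IotaPatient *)patient {
    NSMutableArray *warnings = [NSMutableArray array];
    for (IotaIssue *issue in patient.issues) {
        // the drug would carry a list of ICD-10 ranges it must not
        // be combined with, taken from the pharmacological database
        if ([drug isContraindicatedForICD10:issue.icd10Range])
            [warnings addObject:issue];
    }
    return warnings;
}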

You can find this page on the iotaMed wiki. You can discuss it on Vard IT Forum, where you can find my blog posts published a day or more ahead of time.

Problem: no connection between prescriptions and diseases

Prescriptions made in classic EHR systems have no context of “why” and “for what”. The only information about the reason for the prescription is an entirely optional field on the physical label informing the patient about the purpose of the medication. That is far from sufficient. The drawbacks due to the lack of a structured identification of the indication for the prescription are:

  • It’s harder to know when a medication is not needed anymore
  • It’s impossible to calculate recommended dosages and length of treatment
  • If recommendations change for certain diseases, the prescriptions cannot be tracked
  • Dangerous omissions due to confidentiality

Hard to know when not needed anymore

If a patient has multiple diseases and some medications may serve for more than one treatment, it’s sometimes very hard to figure out which of these diseases a medication was originally intended for. This makes it harder to know which prescriptions to review when one disease is cured or deemed resolved in some other way. It’s also a great time waster: no doctor knows all the indications of all medications by heart, so the lack of a formal link forces one to either ignore the medication and hope for the best, or waste time looking it up in a database, wading through the indications and checking them against the possible diseases the patient may have, if you know them.

There is also good reason to assume that too many prescriptions are continued beyond their point of usefulness, simply because it’s too difficult to go through the review process. As long as they don’t cause actual harm, it’s easier to just leave them unchanged.

Impossible to calculate dosage and duration

Many medications have different recommended dosages and treatment durations depending on the indication, and often depending on concomitant disease. For instance, antibiotics are often given at higher dosages and for longer durations if it’s a recurrent infection, or depending on locale. Some medications must be given at reduced dosage in the presence of kidney or liver insufficiency. Without a disease concept in the EHR, none of these checks and calculations can be automated, even though they are well documented in pharmacological databases.
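
A trivial sketch of the kind of automated adjustment this would enable; the names, the ICD-10 range, and the reduction factor are all invented for the example:

// If the patient has an issue in the chronic kidney disease range,
// apply a renal dose reduction before presenting the recommendation.
if ([patient hasIssueInICD10Range:@"N18"])   // chronic kidney disease
    recommendedDose *= 0.5;                  // factor made up for the example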

If recommendations change, therapies cannot be tracked

If the indication or recommended dosage for a particular medication changes, there is no way to find the cases where this change should be applied. The only method we have today is looking up every patient that gets the product and manually checking the EHR to see if the change applies. Needless to say, this is very rarely done, leaving far too many patients on treatment modalities that are outdated, and sometimes even detrimental to them.

Absurd and dangerous omissions due to confidentiality

Confidentiality in the medical records is currently applied per caregiver, so that information originating from “hidden” caregivers is not shown to other caregivers without the patient’s permission. Prescriptions also have to be subjected to these rules, causing twisted and dangerous situations to occur. Examples:

  • Medication started by a “confidential” source becomes confidential. One can argue about the safety of this, but at least it’s consistent with the intention of the law.
  • Medication started by a “non-confidential” source is visible to others, but any changes to that medication, such as an increase or decrease in dosage, made by a “confidential” source are invisible, so the “non-confidential” source only sees the unchanged, now erroneous, dosage. This is outright dangerous.

If the patient has a disease that is marked “confidential” but that forms a contraindication for some medications, there is nothing the EHR can do to prevent serious medical errors from occurring due to this confidentiality. The EHR has no “idea” exactly what is held confidential and which consequences this may have for other therapies, so it actively promotes dangerous prescription behavior.

Problem: lack of connection to clinical guidelines

I’m at point 2 in the list of problems we need to solve. You can also find this text, possibly improved, on the iotaMed wiki.

As new discoveries are made in medicine, we need to get them out to “the factory floor”, so they are applied in practice. If there’s a new, more efficient diagnostic method, or a treatment that cures more people with fewer side effects, we want the medical authorities to review this new knowledge as soon as possible, weigh costs against benefits, and then have it applied to clinical practice without delay. This review results in recommendations in a form many prefer to call “clinical guidelines”. These guidelines strive to be a practical implementation of current knowledge, including diagnostic criteria, recommended diagnostic methods, recommended treatments, caveats, differential diagnoses (other possible diagnoses that should be considered), etc. The best clinical guidelines are regionalized and contain telephone numbers of people to consult, links to forms to use, and more. Finally, we have to find a way to distribute these clinical guidelines so that they are effectively used in practice.

A recent review showed, however, that the average time between research expenditure on a new medical discovery and its effective health benefits is 17 years [1]. An important part of that is the delay between publication and the actual application of that knowledge by physicians.

The classic solution to the problem of disseminating new discoveries is training, or CPEs (Continuing Professional Education). But that is highly inefficient for doctors for a number of reasons, including:

  • There are more relevant subjects for a medical doctor to be trained in than there are opportunities to get training, so it’s largely a crap shoot whether you will be trained in something you’ll need often in practice
  • Knowledge fades if not used, so it’s even more of a crap shoot whether you’ve recently been trained in the subject you need today

Now, even if I was trained last week in how to treat heart decompensation, I’m pretty certain to miss one or more steps of the fairly complicated clinical guideline if I got a patient today and didn’t have a copy of the guideline at hand. So there is also the problem of lists of details, the same problem airline pilots solved with checklists. No matter how many times they do landings, they’re bound to forget some little fatal switch sooner or later if they don’t run through pre-landing checklists. (For an excellent treatment of this subject as applied to surgery, do read Atul Gawande’s “The Checklist Manifesto”.)

Of course there are clinical guidelines everywhere, and that’s also a problem. They’re everywhere, except where you need them. When I see a patient with diabetes, I can’t take a trip to the library to read up on the clinical guidelines, there’s no time for that. And it would scare the sh*t out of the patient. I can, though, find the guideline at the local government site, if it’s there, or over there at the other site, maybe, um, nope, that one is old, or here at the… darn… what was that URL again? or… darn it, I know diabetes, I don’t need it. So maybe next time Uncle Bob comes along, I’ll remember to do that retinogram I should have done but forgot about since I didn’t find the guideline and couldn’t remember exactly how often it should be done. It changes, you know.

Let’s assume you do find the right clinical guideline and follow it. In that case, you want to continue to follow that same guideline when the patient shows up the next time, even if he is seen by somebody else. But there is no way to signal in the medical notes which guideline you are following. You could, of course, just paste in the URL right there, which I often do. There are two problems, though:

  • The URL generally isn’t clickable due to our poor EHR implementations, and it wraps, making a mess that the average doctor isn’t likely to want to understand or use
  • Some clinical guideline sites have the bad taste to still use IFRAMEs, so there is no URL available to go to the right guideline

Sometimes I simply copy text from the guideline right into the “planning” field in the notes, but it’s ugly. And it will soon become unread history due to the scrolling nature of the medical record. In other words, nobody will read it anyway.

Finally, I want you to consider how much suffering and needless expense we incur by treating patients using methods that are 17 years out of date, and even then forgetting ever so many steps of the diagnosis or treatment because we do it all from memory. It’s scary, isn’t it? Imagine if airlines worked like this. Terrorists would have nothing to do.

References
  1. Medical Research: What’s it worth? Estimating the economic benefits from medical research in the UK. Wellcome Trust, Medical Research Council, The Academy of Medical Sciences. Briefing November 2008. Short form and long form.