Repeat Due to Pathology

Automated information systems only work if they actually inform and do so clearly. Too often, however, they create confusion. This was not what we had in mind when I and others created some of the earliest automated information systems back in the 1980s, when the personal computer began its rapid and thorough takeover of the workplace.

Back then, I was starry-eyed, convinced that everything imaginable should be automated using computers. Unfortunately, I and my colleagues at the time rarely, if ever, questioned the merits of automation. We were having too much fun replacing old manual processes with new automated systems. We were rock stars! We were convinced that those new systems could only do good. My oh my, were we mistaken. Not everything benefits from automation, and even good candidates become counter-productive when they’re poorly designed. Choosing good candidates for automation and then building systems that do the job well takes time and care—two rare ingredients in a “move fast and break things” IT culture.

The most recent reminder of this problem arrived in the form of an email from my health plan yesterday. The email informed me that a new test result was available through the plan’s web-based information system, called MyChart. I assumed that the test result was related to the colonoscopy that I endured the previous week. To put things in perspective, the first time that I had a colonoscopy, the doctor perforated my colon, which landed me in the hospital facing potentially dire consequences. So, as you might imagine, I dread colonoscopies even more than most people.

When I opened the test result in MyChart, it was indeed related to my recent colonoscopy. Here’s what I found:

Other than the date, which matched the date of the procedure, nothing else in this so-called test result made sense to me.

  • What does “Colonoscopy Impression, External” mean? Nothing about the procedure was external.
  • Who is this person identified as “Historical Provider, MD”? My doctor had a name.
  • This was identified as a “Final result,” but I didn’t know that I was awaiting further results. Before leaving the doctor’s office, I thought I was given a full account of the doctor’s findings both verbally and in writing.
  • Most alarmingly, what does a “Your Value” of “repeat based on pathology” mean? Did I have to go through this again? Why? What was wrong?
  • And, to top it all off, I couldn’t tell how the “repeat based on pathology” value compared to the “Standard Range” (i.e., a normal result), because that field was blank.

In a panic, I clicked on the “About this Test” icon in the upper-right corner, hoping for an explanation, but it produced nothing.

The stupidity of this automated system not only produced a panic; it also led me to contact an actual human to resolve the confusion. In other words, a system that was supposed to reduce the work of humans actually added to it, which happens all too often. The human that I contacted, a friendly woman named Beth, didn’t understand what “repeat based on pathology” meant any more than I did, but she was able to access a letter that was placed in the mail to me yesterday, which provided an answer. As it turns out, because a single polyp was found and removed during the procedure, I’m at greater risk than most people of future polyps that could become malignant, so I should have another colonoscopy in five years. What a relief.

Could the test result that was posted to MyChart have provided clear and useful information? Absolutely, but it didn’t, and this wasn’t the first time. I had a similar experience a few months ago while reviewing the results of a lengthy blood panel posted in MyChart. On that occasion, I had to get my doctor on the phone to interpret several obscure lab results.

Information technologies are not a panacea. They aren’t useful for everything, and when they are useful, they must be well designed. Otherwise, they complicate our lives.

2 Comments on “Repeat Due to Pathology”


By Pepe Vera. November 3rd, 2019 at 7:12 am

Stephen,

I completely understand what you are writing about here. As a long-time student of your precepts and books, I would ask: is this just the output of a badly implemented liability system?

We are all too used to reading these useless reports (unfortunately, in my case, on a daily basis at work), and it occurred to me that the person adding information to the system via forms may simply be “filling the system”.

Whenever I dig into what was going on with the answers I am given via reports, I usually find that, in the end, a person was just filling in a form, apparently as instructed in the past, and that person did not care whether the recipient of the information (in this case, you) would panic, misunderstand, understand, or complain about it.

They just don’t care. It is not their job to care. Their job was to fill the system. It is even possible that this part of the job could itself be automated.

In the end, I see two issues here. One, as you stated in your post, the system must be created with time and care, which is not something we can expect from the person who is usually in charge of allocating resources and time to such a system.

And two, as a consequence of the first point, it is all too easy to end up with a system that does not do what it should. This produces a way of working in which liabilities and responsibilities are neither clear nor seen as needing clarification at all.

Oftentimes, at least in my experience, the system is incomplete in terms of its automation; that is, a person is actually filling in a form on behalf of another person, such as your doctor. Your doctor has the right data for the system but does not enter the data herself; another person acts as the interface. And that person cares much less about you than your doctor does.

I personally call this behavior a defect in the liability system: the person in charge of delivering information to you is neither liable for it nor rewarded or assessed with that task in mind. She is only in charge of filling the system, and thus is never liable for anything that you feel or worry about afterwards. It was not her job!

These kinds of outcomes are, in my opinion, extremely hard to model and address in a system created within, as you call it, a “move fast and break things” IT culture.

I don’t know how these problems are solved. I simply understand your feelings, since this happens to me so often, also when dealing with medical information, which is a shame.

I really hope you receive better reports about your health issues in the future. The way we users are given personal and sensitive information, when it comes from medical sources, should certainly be more regulated.

Regards,

Pepe

By Dale Lehman. November 4th, 2019 at 4:58 pm

I think there are several, potentially interrelated, things illustrated by this example. Systems can be designed poorly due to poor work. I’m not sure this example is a good one for that – I’ve had MyChart records that were complete and accurate (though hard to interpret for a non-physician) and I’ve had ones like the above that provide virtually no information. To some extent it seems to depend on whether the provider has a compatible electronic health record system or not. Or perhaps whether they have the staff necessary to port the relevant information to MyChart. In either case you can say that the system was not well designed for at least some of the users. But it might not be intentional or even careless, depending on the extent to which this happens.

Other systems are intentionally designed poorly. I’m not sure it applies to the example above. A clearer example to me is many “unsubscribe” policies for email notifications. One, from a major financial information provider (Moody’s, not to name names), invites you to unsubscribe, but the system hangs up and does not let you unsubscribe. It might be carelessness, but it’s been that way for months. Nobody apparently has checked (or, worse, did check) because there never was an intention to have the automated system work. The poor incentives at work here are clear.

A third type of automated system design failure is one that works in devious ways – again due to misaligned incentives. Cable TV providers, online streaming services, and other information providers seem to commonly illustrate this. They automate things like seeing what entertainment they provide, and promoting what they wish you to see, but hide details such as exactly how much you have to pay and what happens after their introductory offers end (what I call bait and switch offers).

I think all of these circumstances fit under the umbrella of poorly designed automated systems. And, they would be relatively easy to avoid if things were just tested better – particularly on real users. The ease of avoiding some of these failures makes me believe that many of these failures are, in fact, intentional, and due to the incentives to profit by misleading people. The MyChart example seems unlikely to fit those circumstances, however. What I can’t tell is whether it is a stupid system because the designers did a stupid job in general, or whether this particular example just “fell through the cracks” – as I said, I’ve had some good (and some not so good) experiences with MyChart systems.

Perhaps more to your point, Steve, is that these automated systems promise more than they deliver. They appear to offer “mass customization” but often fail to do so. As long as the system works the way its designers expected, and as long as you have the same understanding as they do, it works fine. If you deviate from this in any way, it is no longer useful. Whether the resources necessary to design a better system are worth it, in terms of less user confusion, less panic, and fewer steps needed to clarify what the system is telling you, is an open question. My fear is that the answer is generally that it is not worth the resources, as long as “worth” is defined by the incentives of the system designer. Until they are held accountable (e.g., when you change medical providers because you don’t like the way their MyChart works), I don’t see them perceiving much cost in a poorly designed system. Yes, you did need to call a real person and occupy their time, but most of that cost was not borne by the provider, and what cost they did incur they weigh against what would be required to make the system work better.
