Repeat Due to Pathology

Automated information systems only work if they actually inform and do so clearly. Too often, however, they create confusion. This was not what we had in mind when others and I created some of the earliest automated information systems back in the 1980s, when the personal computer began its rapid and thorough takeover of the workplace.

Back then, I was starry-eyed, convinced that everything imaginable should be automated using computers. Unfortunately, my colleagues and I rarely, if ever, questioned the merits of automation at the time. We were having too much fun replacing old manual processes with new automated systems. We were rock stars! We were convinced that those new systems could only do good. My oh my, were we mistaken. Not everything benefits from automation, and even good candidates become counter-productive when they’re poorly designed. Choosing good candidates for automation and then building systems that do the job well takes time and care—two rare ingredients in a “move fast and break things” IT culture.

The most recent reminder of this problem arrived in the form of an email from my health plan yesterday. The email informed me that a new test result was available through the plan’s web-based information system, called MyChart. I assumed that the test result was related to the colonoscopy that I endured the previous week. To put things in perspective, the first time that I had a colonoscopy, the doctor perforated my colon, which landed me in the hospital facing potentially dire consequences. So, as you might imagine, I dread colonoscopies even more than most people.

When I opened the test result in MyChart, it was indeed related to my recent colonoscopy. Here’s what I found:

[Screenshot of the MyChart test result]

Other than the date, which matched the date of the procedure, nothing else in this so-called test result made sense to me.

  • What does “Colonoscopy Impression, External” mean? Nothing about the procedure was external.
  • Who is this person identified as “Historical Provider, MD”? My doctor had a name.
  • This was identified as a “Final result,” but I didn’t know that I was awaiting further results. Before leaving the doctor’s office, I thought I was given a full account of the doctor’s findings both verbally and in writing.
  • Most alarmingly, what does a “Your Value” of “repeat based on pathology” mean? Did I have to go through this again? Why? What was wrong?
  • And, to top it all off, I couldn’t tell how the “repeat based on pathology” value compared to the “Standard Range” (i.e., a normal result), because it was blank.

In a panic, I clicked on the “About this Test” icon in the upper-right corner, hoping for an explanation, but it produced nothing.

The stupidity of this automated system not only caused a panic, it also forced me to contact an actual human to resolve the confusion. In other words, a system that was supposed to reduce the work of humans actually added to it, which happens all too often. The human I contacted, a friendly woman named Beth, didn’t understand what “repeat based on pathology” meant any more than I did, but she was able to access a letter that had been mailed to me yesterday, which provided an answer. As it turns out, because a single polyp was found and removed during the procedure, I’m at greater risk than most people of future polyps that could become malignant, so I should have another colonoscopy in five years. What a relief.

Could the test result that was posted to MyChart have provided clear and useful information? Absolutely, but it didn’t, and this wasn’t the first time. I had a similar experience a few months ago while reviewing the results posted in MyChart of a lengthy blood panel. On that occasion, I had to get my doctor on the phone to interpret several obscure lab results.

Information technologies are not a panacea. They aren’t useful for everything, and when they are useful, they must be well designed. Otherwise, they complicate our lives.

6 Comments on “Repeat Due to Pathology”

By Pepe Vera. November 3rd, 2019 at 7:12 am


I completely understand what you are writing about here. As a long-time student of your precepts and books, I would ask: is this just the output of a badly implemented liability system?

We are all too used to reading these useless reports (unfortunately, in my case, on a daily basis at work), and it occurred to me that the person adding information to the system via forms may just be “filling the system.”

Whenever I dig into what was going on behind the answers I am given via reports, I usually find that, in the end, a person was just filling in a form, apparently as instructed at some point in the past, and that person did not care whether the recipient of the information (here, that was you) would panic, misunderstand, understand, or complain about it.

S/he just doesn’t care. It is not her job to care. Her job is to fill the system. It is even possible that her part of the job could itself be automated.

In the end, I see two issues here: one, as you stated in your post, the system must be created with time and care, something we cannot expect from the person who is usually in charge of allocating resources and time to such a system.

And two, as a consequence of the previous point, it is all too easy to end up with a system that does not do what it should. This produces a way of working in which liabilities and responsibilities are neither clear nor ever clarified.

Often, at least in my experience, the system is created with incomplete automation; that is, a person fills in a form on behalf of another person, your doctor, for example. So your doctor has the right data for the system but does not enter it herself; another person acting as an interface does. And that person cares much less about you than your doctor does.

I personally call this behavior a defect in the liability system: the person in charge of delivering information to you is neither liable for it nor rewarded or assessed with that task in mind. Her job is only to fill the system, so she is never liable for anything you feel or worry about afterwards. It was not her job!

These kinds of concerns are, in my opinion, extremely hard to model and implement in a system created under the “move fast and break things” IT trend, as you call it.

I don’t know how these problems are solved. I simply understand your feelings, since this happens to me so often, including when dealing with medical information, which is a shame.

I really hope you receive better reports about your health in the future. When it comes from medical sources, the way we users are given personal and sensitive information should certainly be more regulated.



By Dale Lehman. November 4th, 2019 at 4:58 pm

I think there are several, potentially interrelated, things illustrated by this example. Systems can be designed poorly due to poor work. I’m not sure this example is a good one for that – I’ve had MyChart records that were complete and accurate (though hard to interpret for a non-physician) and I’ve had ones like the above that provide virtually no information. To some extent it seems to depend on whether the provider has a compatible electronic health record system or not. Or perhaps whether they have the staff necessary to port the relevant information to MyChart. In either case you can say that the system was not well designed for at least some of the users. But it might not be intentional or even careless, depending on the extent to which this happens.

Other systems are intentionally designed poorly. I’m not sure this applies to the example above. A clearer example to me is many “unsubscribe” policies for email notifications. One, from a major financial information provider (Moody’s, not to name names), invites you to unsubscribe, but the system hangs and does not let you unsubscribe. It might be carelessness, but it’s been that way for months. Apparently nobody has checked (or, worse, did check), because there never was an intention to have the automated system work. The poor incentives at work here are clear.

A third type of automated system design failure is one that works in devious ways – again due to misaligned incentives. Cable TV providers, online streaming services, and other information providers seem to commonly illustrate this. They automate things like seeing what entertainment they provide and promoting what they wish you to see, but hide details such as exactly how much you have to pay and what happens after their introductory offers end (what I call bait-and-switch offers).

I think all of these circumstances fit under the umbrella of poorly designed automated systems. And, they would be relatively easy to avoid if things were just tested better – particularly on real users. The ease of avoiding some of these failures makes me believe that many of these failures are, in fact, intentional, and due to the incentives to profit by misleading people. The MyChart example seems unlikely to fit those circumstances, however. What I can’t tell is whether it is a stupid system because the designers did a stupid job in general, or whether this particular example just “fell through the cracks” – as I said, I’ve had some good (and some not so good) experiences with MyChart systems.

Perhaps more to your point, Steve, is that these automated systems promise more than they deliver. They appear to offer “mass customization” but often fail to do so. As long as the system works the way its designers expected, and as long as you share the designer’s understanding of it, it works fine. If you deviate from this in any way, it is no longer useful. Whether the resources necessary to design a better system are worth it – in terms of less user confusion, less panic, and fewer steps needed to clarify what the system is telling you – is an open question. My fear is that the answer is generally that it is not worth the resources, as long as “worth” is defined by the incentives of the system designer. Until designers are held accountable (e.g., when you change medical providers because you don’t like the way their MyChart works), I don’t see them perceiving much cost in a poorly designed system. Yes, you did need to call a real person and occupy their time, but most of that cost was not borne by the provider, and what cost they did incur they weigh against what would be required to make the system work better.

By Dale Lehman. December 4th, 2019 at 3:50 pm

Here are two more data points, just encountered in the last day:
1. My valet-checked bag was taken by a different air traveler. The automated lost-luggage system said it could not be located – and this was still the case 24 hours after I knew that passenger had returned it to the airline. The American Airlines automated system still said the bag could not be located, but when I called and spoke to a real person, they said it had been found and was on its way to me. They provided the classic line: “The system doesn’t necessarily update.”
2. I renewed a medicine prescription at Walmart – a totally automated interaction that worked perfectly.

These are two large companies with ample resources to design systems that work – and, seemingly, with a commercial interest in doing so. Yet in one case it worked, and in the other it did not. One possibility (which I can’t dismiss) is that the airline really doesn’t want the system to work well. If valet checking of bags does not work well, perhaps I will pay to check more baggage. The pharmacy has no such conflict of interest. Another possibility is that unionized airline labor intentionally prevents the automated system from working well in order to protect jobs (this seems far-fetched to me, but I can’t totally discount it). The final possibility is that one company designed its system well and the other did not. Since I was trained as an economist, I usually believe that competition (which exists, though it is less robust among airlines than I might wish) would eliminate ineffective systems fairly quickly.

The airline example begs for an explanation. After all, FedEx, UPS, and USPS all provide automated package tracking – which, in my experience, works pretty well. The packages are bar coded and scanned at every step, so the systems show where things are. Why isn’t the air baggage system like that?

By stephenfew. December 4th, 2019 at 4:10 pm


Actually, I believe that the airline baggage tracking system does work much like the package tracking systems of delivery services such as FedEx. Bags are tagged with codes that are scanned at various points and stages during transit. Both baggage and package delivery systems occasionally fail for various reasons. Have you never had a package delivered to you that was meant for a neighbor? I’ve had this experience several times. It appears that in the case of your bag, the data about its location was logged into the system once the bag was recovered, but not all parts of the system were updated. When systems become overly complicated (e.g., with too many places where the same data is stored and too many ways to access that data), they become highly prone to failure. Most of the errors in systems like these, however, could be resolved through better design. It is often the case, though, that the people and organizations responsible for systems are excessively focused on keeping costs down rather than on taking the time that’s required to build good systems. They accept a certain level of error because a better system would cost more. Profitability, rather than good products and services, drives the system.

By Dale Lehman. December 4th, 2019 at 4:45 pm

That doesn’t quite explain it for me. Presumably, Walmart is also profit-driven and their prescription renewal works the way it should. The airline system did not – exceptions will always occur (as you say, they happen with package deliveries), but when I am told that the system does not update and they have a different system that does update, then it seems to be a design flaw. I find it hard to attribute this to the airline being focused on cost reduction – after all, just as in your example above, I end up calling a person and occupying their time to look into their system to retrieve the information that should have been in the system I had access to. So, my explanation possibilities include:
1. intentional poor design to drive me to pay for checked baggage
2. employee intervention intended to prevent system from working
3. poor design, just due to inferior personnel/processes
4. lack of competitive pressure (we only rarely have a real choice of airlines)

The profit motive is not on my list. Even #3 I am reluctant to subscribe to (possibly just my economics training blinding me to reality).

By stephenfew. December 4th, 2019 at 4:53 pm


I have no idea why the baggage system didn’t work in your case. I’m merely suggesting that failures like the one that you described often occur because organizations aren’t willing to invest the time and effort into making the system better because they don’t believe it’s cost-justified from a profit-oriented perspective.
