Embrace Complexity

We live in a complex world. Humans are complex. The natural systems that operate in our world are complex. The systems and technologies that we create are increasingly complex. Despite these essential complexities, we prefer to see things simply. This preference, although understandable, is becoming ever more dangerous.

I promote simplicity, but a version that strives to explain complex matters simply, without oversimplifying. Healthy simplification attempts to express complex matters without compromising truth. This form of simplicity never dumbs information down. Embracing complexity is hard work. It is not the realm of lazy minds. This hard work is necessary, however, for we can do great harm when we make decisions based on overly simplified representations of complex matters.

We long for a simple world, but that world does not exist. In the early days of our species, we understood relatively little about our world, but enough to give us an evolutionary edge. As our ability to understand the world gradually developed over time, our ability to survive and thrive usually increased with it, but there are exceptions. The world has always been complex, but we are making it even more so. We create technologies that we cannot understand and, therefore, cannot control. This is careless, irresponsible, and downright scary. In fact, given the potential power of many modern technologies, this is suicidal. Nevertheless, we can derive hope from the fact that our brains are much more capable of handling complexity than we routinely demonstrate. This capability, however, can only be developed through hard work that will never be done until we are convinced of its importance and commit to the effort. Even if most people remain lazy, it is critical that those who influence the decisions that establish our path forward are committed to this work.

An entire discipline, called “systems thinking,” has emerged to address complexity. It strives to see systems holistically—how the parts relate to one another in complex ways to produce systems with outcomes that cannot be understood by looking at those parts independently (i.e., analytically). Chances are, you’ve never heard of systems thinking. (In case you’re interested, a wonderful book titled Thinking in Systems by Donella H. Meadows provides a great introduction to the field.)

It is also encouraging that a few organizations have emerged to encourage deeper, more complex thinking. Personally, I appreciate and support the work of The Union of Concerned Scientists and The Center for Inquiry, which both work hard to expose failures in overly simplified thinking. There are also courageous and disciplined thinkers—Richard Dawkins and Robert Reich come immediately to mind—who raise their voices to warn against these errors.

These disciplines, organizations, and individuals challenge us to embrace complexity. In so doing, they challenge us to embrace the only path that will lead to humanity’s survival. We humans are a grand experiment. We’ve accomplished so much in our brief time on this planet. It would be a shame to let laziness and carelessness end the experiment prematurely. I strongly recommend the following technology-related guideline for human survival:

Never create systems or technologies that are so complex that we cannot understand them.

Where there is no understanding, there is no control. When we create situations beyond our control, we put ourselves at risk. Make understanding a priority, even when it is difficult.

Take care,

20 Comments on “Embrace Complexity”

By Dale Lehman. January 2nd, 2018 at 8:18 pm

A post I can agree with – until the punchline. It raises questions that I think need to be answered. You say “never create systems or technologies that are so complex that we cannot understand them.” The questions are “who” should never create these and “who” is “we?” Let me elaborate. Are you saying that it is sufficient for the creator to understand them? Are you saying that we (meaning all or most of the public) must understand these technologies/systems?

I see dangers in the creator and the understandor (not a word, but perhaps it should be) not being the same person/entity. A creator who understands but the rest of us do not sounds dangerous. A creator who does not understand also sounds dangerous. On the other hand, requiring that both the creator and the rest of us all equally understand, while admirable, does not strike me as feasible (or even desirable, since I don’t really want to give up all the technologies I don’t really understand – cars, computers, electricity, fire, etc.).

I will venture to guess that you mean that it is necessary (but not sufficient) that the creator have sufficient understanding of systems/technologies that they create. Then, there must be a requirement that some threshold of understanding exist in the populations that will be impacted by these creations. How would you state the required threshold?

On a related tangent, I am reminded of an old book, “Ecology and the Politics of Scarcity,” by William Ophuls (if my memory is correct). He makes the case that environmental stewardship and democracy are incompatible. I found the argument persuasive, and I think there is a similar argument here. Democracy (an ill-defined term, but let’s think of it as majority rule) does not sit well with your guideline. Majorities can easily not understand technologies/systems that are beneficial and can equally believe they understand ones that are not. Belief is not the same as understanding, but that is where democracy and your guideline conflict.

So, can you clarify who must understand, and how much they must understand, in order to justify creation, and by whom such creation may take place?

By John. January 2nd, 2018 at 9:13 pm

Respectfully, I think you are dramatically underestimating the complexity involved. The worldwide economy is a system of unfathomable complexity. Billions of people, each with their own rapidly evolving preferences and behavior, combine to create a system that is impossible to understand completely. Attempts to replace this inherently complex system with one that purports to be “understandable” are doomed to failure and ultimately do far more harm by creating the illusion of control and understanding. Perhaps I am misunderstanding your point, but I see this call for understanding of inherent complexity as an open invitation for politicians to use such rhetoric as a foil for gaining more power.

By stephenfew. January 2nd, 2018 at 10:24 pm

Dale,

I am saying that those who create and promote technologies should understand them. I’m also saying that those who create and promote technologies should be bound to a high standard of ethics.

I do not propose a general threshold of understanding that must be acquired by those who use technologies. However, I do think that it is risky to embrace technologies that are understood by only a few people. Those few people might not be trustworthy. Also, if their understanding is not passed on to others, our ability to control those technologies will be lost when they are no longer around. Having a few experts around to keep one another in check is always useful. Expertise will never be democratic.

By stephenfew. January 2nd, 2018 at 10:39 pm

John,

I am not underestimating the complexity of the technologies and systems that we create. They are only as complex as we make them, either intentionally or unintentionally through lack of care. Natural systems (i.e., systems that we did not create) are a different matter. They are as complex as they are.

I am arguing in direct opposition to the rhetoric of politicians. Politicians almost always oversimplify matters. As such, they mislead us. They usually do so intentionally. Unfortunately, in our current political climate here in the United States, someone who embraced the complexities of issues and attempted to explain them honestly would never get elected. This needs to change. Politicians should either be experts in those matters that they seek to influence or, when they are not, should rely on the services of experts. Expertise has been devalued in our current culture, much to my dismay. This needs to change. The election of Donald Trump is a symptom of the current distrust of expertise and rejection of complexity.

By Dale Lehman. January 3rd, 2018 at 1:55 pm

Genetic engineering, algorithmic decision making, climate modification,….
All examples of what you are talking about. “We” clearly do not understand the systems and technology enough to satisfy your guideline. I hesitate to say they should not be developed, although a case can be made for that. Certainly, a strong case could be made for development to be closely monitored and the pace and scale to be moderated. But there are two problems: (i) a case can be made for not inhibiting further development (which many people will agree with, even if not you and me), and (ii) it is clear that a number of people will further develop these technologies without worrying about the consequences that you raise. This presents the dilemma: humans have not developed institutions or personal ethics to exert the kind of control that is required. In fact, the only institutions we are aware of that exert that kind of control we find reprehensible (dictatorships, etc.).

By stephenfew. January 3rd, 2018 at 4:56 pm

Dale,

You’ve put your finger on one of the great challenges that we face and always have: how do we govern? With consequences as harmful as those that our technologies and systems now potentially introduce, we must find a way to encourage good, responsible behavior and prevent the opposite, but all of our solutions so far come with potential downsides. Unfortunately, I don’t have much to offer on this front, for it’s definitely not my area of expertise.

By Dale Lehman. January 4th, 2018 at 5:44 pm

Then, of course, there is this week’s big news: https://www.nytimes.com/2018/01/03/business/computer-flaws.html?hp&action=click&pgtype=Homepage&clickSource=story-heading&module=first-column-region&region=top-news&WT.nav=top-news&_r=0. You could say that we don’t understand computer technology in the sense that you are using in the guideline you recommend. That may well be true. But does that mean we should not have permitted computer technology to be developed? Deployed? It is even hard to imagine what type of institutions could be developed to ensure that computers would not be (mis)used in the ways these flaws in the ecosystem make possible.

By stephenfew. January 4th, 2018 at 6:18 pm

Dale,

We’ve developed information technologies carelessly. Having worked in IT for many years, I’ve observed how little care is taken in developing these technologies. The goal is always “Get the product to market as fast as you can.” Consequently, products are developed with little forethought and released before they’re ready. We put up with crappy products, which sometimes contain flaws that are not only costly but also dangerous.

I’m not arguing that we should have refrained from developing information technologies, but rather that we should have developed them with greater care. Our willingness to tolerate crap has established a bad precedent. This needs to change. IT companies won’t make this change on their own initiative. If we made them truly responsible for the damage that their poor products create, however, their approach would change.

By Johan Lammens. January 8th, 2018 at 9:01 am

Can you provide some examples of “systems or technologies that are so complex that we cannot understand them”? Presumably their creators do understand them? What exactly does “understanding” a system or technology mean?


By stephenfew. January 8th, 2018 at 6:10 pm

Johan,

Chances are, you’ve directly encountered many problems related to overly complicated, not-entirely-understood technologies. For example, when you experience buggy behavior when using software, including ubiquitous products such as Microsoft Word or Excel, this probably occurred, in part, because no one at Microsoft completely understands these products. The code is too complicated. This isn’t because it must be too complicated, but merely because it was allowed to become so over time. Many of our information technologies are filled with kluges: cobbled together, poorly designed, unnecessarily complicated bits of code. Microsoft Office products are notoriously kludgy.

We think of Apple as being more design and quality oriented than most technology companies, but my iPhone is buggy as hell. When in vibrate mode, it buzzes randomly, even though I allow only phone calls to issue an alert. My iPhone audibly rings at times even when I’ve turned the ringer off with the switch, which is terribly annoying when it happens during the meditative portion of a yoga class. It is unlikely that anyone at Apple could tell me why this is happening.

Let me introduce you to a book that describes this problem thoroughly: Overcomplicated: Technology at the Limits of Comprehension, by complexity scientist Samuel Arbesman. Here are a few lines from the introduction:

On July 8, 2015, as I was in the midst of working on this book, United Airlines suffered a computer problem and grounded its planes. That same day, the New York Stock Exchange halted trading when its system stopped working properly. The Wall Street Journal’s website went down. People went out of their minds. No one knew what was going on. Twitter was bedlam as people speculated about cyberattacks from such sources as China and Anonymous.

But these events do not seem to have been the result of a coordinated cyberattack. The culprit appears more likely to have been a lot of buggy software that no one fully grasped. As one security expert stated in response to the day’s events, “These are incredibly complicated systems. There are lots and lots of failure modes that are not thoroughly understood.” (page 1)

A few pages later, Arbesman writes:

Next time, the results of our failure to understand might not be as trivial as a frustrated Wall Street Journal reader being unable to access an article at the time of her choosing. The glitches could be in the power grid, in banking systems, or even in our medical technologies, and they will not go away on their own. We ignore them at our peril. (page 6)

Arbesman does a good job of describing the complexities of modern systems and technologies, but his solution to the problem is quite different from mine. He believes that we must accept the fact that our systems and technologies are no longer fully understandable. We shouldn’t hold off on creating them until we understand them, but should instead do our best to understand them once they exist, using research methods similar to those of scientists studying complex biological systems. Unlike Arbesman, I believe that we can and should take responsibility for not creating and releasing into the wild systems and technologies that we cannot understand and control.

By John. January 11th, 2018 at 11:51 pm

Stephen, I again think you are underestimating complexity. Modern software development is stunningly, irreducibly complex. I have dedicated 20 years of my professional life to increasing the quality of software. It is my singular obsession. But the idea that there is some way to fully understand the entire complexity of all the interacting services, hardware, software and operating systems is simply… I’m searching for a polite word. Naive? I don’t know when you worked in IT or in what exact role, and I could not agree MORE with your call for increased quality and your condemnation of the cavalier attitude towards quality the industry has taken, but to say the systems must be understood fully will simply result in you being written off as a blowhard who doesn’t understand the industry or technology at all.
Software engineering is as much about communication and human frailties as it is math and logic. There isn’t a single working product of any significance that isn’t dependent on hundreds of other factors (libraries, operating systems, hardware, services, etc), ALL of which would have to be understood completely in order for the project utilizing them to be understood completely. The pace of change and improvement in software would grind nearly to a halt in your “understood 100%” world, at an absolutely staggering economic cost.
I really hope you find better ways of expressing this, and more realistic goals for society and for technology. What you are saying right now is simply going to be completely ignored, and I think that would be a shame.

By stephenfew. January 12th, 2018 at 1:19 am

John,

It would indeed be a shame if people ignored the concern that I expressed in this blog post. Doing so could result in the extinction of our species. I’ll ignore your claim that I’m a naïve blowhard and invite you to make your case. So far, you have expressed your disagreement, but you haven’t provided any actual evidence that my position is naïve.

Let me begin by answering your question regarding my experience. I began working in IT in 1983 and have continued to do so ever since. Like many of those who began their IT careers when I did, when the PC was brand new, my work has spanned many roles. Those roles include software engineering. During my IT career, I have never created a system or technology that I could not understand. That’s intentional.

I understand that many of the systems and technologies that we have created are extremely complex. Where you and I differ is that I believe these systems and technologies could have all been created in a manner that would allow their complexities to be understood. I’m not arguing that in all cases a single individual could understand the complexities of the entire system, but that these systems could be created in a way that could be understood by humans, sometimes involving multiple people who understand different aspects of the system and collaborate to create an understanding of the interactions between those different aspects.

Now, I’ll ask you to explain your position. First, you know who I am and have some sense of my experience and credibility, but I don’t know who you are. What is your name, who do you work for, and what is your professional experience? Second, have you ever personally developed a system that you could not understand? If so, is that system so complex that it could not possibly be understood by others? If so, please describe this system for us. If not, can you identify any specific existing system or technology that could not possibly be understood if it were well designed?

You expressed your concern that “the pace of change and improvement in software would grind nearly to a halt in your ‘understood 100%’ world.” I disagree. The pace would certainly slow down, and it should, but it would not grind to a halt. If it did slow down, our systems and technologies would work much better than they do. They would also pose considerably less risk.

My “goals for society and for technology” are that our species and our world continue to survive and flourish. When you compare the potential benefits of our systems and technologies, which are great, to the potential dangers, which are increasing at a rapid rate, it is difficult to argue, as you seem to be doing, that we should continue at our current pace without the safeguards that can only come with understanding.

By John Long. January 12th, 2018 at 2:19 am

I didn’t mean to say I thought you were a naive blowhard, personally! I was presenting what I suspect others in technology might think of your warnings, without the background of respect that I have for you.

As to my experience, I have worked at Adobe (on Photoshop), Microsoft (mostly on TV software), and currently eBay. In all of those roles I have been working on systems for testing, measuring, and visualizing software quality across its many facets. I have worked with brilliant, disciplined, and conscientious developers, and I have worked with… others.

I see we have actually had a very similar conversation previously, back in October of 2016. But there are some new questions here, so let me try and answer them.

I have indeed created systems that I did not fully understand, as have you. I have used libraries that I didn’t look at and personally verify. I have written code that uses compilers I did not write myself. I have called APIs to services I have nearly zero insight into, other than the external documentation written by the people who provided them. I have run my software on operating systems I did not create and hardware I did not design or manufacture.

One error I see in your assumptions is that it is somehow possible to design perfect interactions between systems. Even if I wrote a piece of software that I did in fact understand completely, I would have to communicate to another person how to utilize my software, how to call my API or use my library. And, being human, I would do so imperfectly. These interaction points use human language, which is inherently imprecise, to explain their purpose and their effect. Even code within a single project has to use human language to express intent. As such, there WILL be misunderstandings and misuses within any significant system. And those mistakes will ripple upwards throughout the overall system.

My current project is a tool that analyzes other programmers’ code, runs tests on it, and reports back whether the code is better or worse than it was before the proposed changes. It uses a combination of static analysis, running tests on the code, and analyzing the usefulness of those tests. It can block code from being checked in if it would make things worse rather than better.
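
To make that concrete, here is a deliberately stripped-down sketch of the kind of gate I mean (hypothetical names and metrics, nothing like the real code):

    from dataclasses import dataclass

    # Metrics gathered for one revision of the code under review.
    @dataclass
    class QualityReport:
        tests_passed: int
        tests_failed: int
        static_warnings: int

    # Allow a proposed change only if no tracked metric gets worse.
    def is_acceptable(before: QualityReport, after: QualityReport) -> bool:
        return (
            after.tests_failed <= before.tests_failed
            and after.static_warnings <= before.static_warnings
            and after.tests_passed >= before.tests_passed
        )

    # Example: a change that introduces a failing test gets blocked.
    baseline = QualityReport(tests_passed=120, tests_failed=0, static_warnings=3)
    proposed = QualityReport(tests_passed=120, tests_failed=1, static_warnings=3)

    if not is_acceptable(baseline, proposed):
        print("Change blocked: quality metrics regressed.")

The real tool obviously tracks far more signals than this, but the principle is the same: compare before and after, and refuse anything that regresses.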

Even setting aside external dependencies, I cannot say I understand my code perfectly. There are still too many interactions and possible inputs for me to confidently say how it will behave in any and every situation. I try to write tests for all the interactions I can think of. I am not so arrogant as to believe that my great mind has searched out all possibilities and covered each of them.

However, I would say the code I write is very well ordered, predictable, and easy to understand. I can describe the system clearly and precisely. I test everything. But my understanding of the complexity of software and the limitations of the human mind force me to admit that I do not understand it as much as I could.

Our disagreement is all a bit frustrating, as we should be enthusiastic allies. I literally spend all of my time at work either directly advocating for higher quality or building systems that will enforce and improve software quality. I am on the forefront of actually DOING something about this problem. But if I were to start saying that we could have bug free, completely understood software, people would stop listening to everything else I was saying. Saying “the level of quality and understanding in the systems we depend on is woefully inadequate and a threat to our very existence” is something I can get behind, even shout from the rooftops myself. Saying “We should not write anything unless we understand it completely” is, from my experience, the same as simply saying “We should not write anything”. The entire industry would need to spend the next ten years doing nothing but rewriting existing systems from the ground up even to get to the level of certainty that I have for the software I’ve written, much less the perfect understanding you seem to insist on. If that isn’t grinding to a halt, if it isn’t an outrageously expensive undertaking, what would be?

By stephenfew. January 12th, 2018 at 4:55 pm

John,

I appreciate your response, but you haven’t actually addressed the topic. Perhaps I haven’t explained my position clearly enough. If I were claiming that software should not be released until it is guaranteed to work perfectly, I would indeed be naïve. While I do believe that software could and should be far less buggy when it is released than is currently the case, I do not believe that it can be perfect. What I am saying is that the systems and technologies that we create should not be so complex that they exceed our ability to understand them. You have not provided any examples of systems or technologies that require inherent complexity that exceeds the limits of our understanding. If you create something that is so complex that it cannot be understood, you potentially create something that cannot be controlled. That is dangerous. Today, given how much we have come to rely on the systems and technologies that we create and the amount of damage that can be caused if they behave unpredictably because we do not understand them, we must take care to limit what we create to what we can understand.

You might find it useful to read the book Overcomplicated, by Samuel Arbesman, which I mentioned previously. Given your experience, I would be interested in hearing your response to Arbesman’s argument that we cannot possibly understand some of the systems and technologies that we’re currently creating and that we must live with this fact, doing our best to study these systems once they’re created, similar to the way that we study complex natural systems today.

By stephenfew. January 12th, 2018 at 5:38 pm

To extend my response to John above, I’m proposing that, when creating a system or a technology, we should always ask the question, “Can we understand this?” In other words, “Does the complexity of this system or technology fall within the limits of human understanding?” If the answer is “No, it does not,” then we should not develop it and release it into the wild unless we can find a way to do so that does fall within the scope of human understanding. If the answer is “Yes, we can potentially understand it, but we haven’t taken the time to do so,” we should take the time.

To summarize my position, if we can understand the complexities of the systems and technologies that we create, then we should. If we cannot understand the complexities of the systems and technologies that we create, then we should not create them. This is a value position. It is based on my belief that the interests of humanity should be protected from the harm that can be caused by the systems and technologies that we create.

As you have perhaps noticed, this position is challenged by the potential of superintelligent AI. Should we develop an AI with intelligence that exceeds our own, if this turns out to be possible? A few of the folks who understand what’s going on in the field of AI research and development are grappling with this question. I applaud them. Those who are concerned with the future of humanity argue that we should only proceed if we find a way to guarantee that the interests of a superintelligent AI will always align with humanity’s interests. Clearly, human understanding is severely limited, and we could potentially benefit from understanding that exceeds our own. The risk, however, that an AI with much greater understanding than our own would not value and always align itself with humanity’s interests is great. It certainly wouldn’t align with our interests unless controls were built into it from the beginning, but are such controls even possible when dealing with a superintelligent AI? We’re in new territory here. This isn’t a trivial matter. I find it hard to imagine how we could possibly guarantee an outcome that aligns with humanity’s interests. I hope that I’m wrong, but if I’m not, I hope that we are smart enough to apply the reins to AI. This would be difficult because, in the short term, whoever creates a superintelligent AI would rule the world, which is a powerful incentive given the selfishness of many humans. Whoever comes out on top, however, probably would not remain there long enough to cash the first check. In no time at all, a superintelligent AI that wasn’t controlled would disregard not just humanity’s interests in general, but its creator’s interests in particular as well.

My concerns about the development of systems and technologies that are so complex that they exceed human understanding are highly influenced by my concerns about AI in particular.

By Daniel Zvinca. January 14th, 2018 at 7:57 pm

I have mixed feelings about this subject. I think that the complexity we embrace has different forms.

John underlined an unavoidable aspect of complexity encountered in the process of building software tools. Such a process usually involves using existing libraries that provide certain functionality. These libraries are used through an interface (API) that hides the complexity of the underlying algorithms, developed and improved over years by specialized teams. As somebody involved in software development for many years, I always need to research comparable libraries that offer the required functionality and choose the right one by balancing several tradeoffs. There are performance criteria like speed, memory usage, and wasted CPU cycles, but I also look at how actively a library is developed and maintained, as well as the reliability it has earned over years of use. I choose technology without being fully aware of how it was built, and in most cases I don’t even exploit it to its full potential. I obviously need to understand how the library behaves for my requirements, but I have limited knowledge of its internals.

Many organizations acquire software packages for specific needs. Even with the help of skilled professionals or well-trained employees, the software is rarely used to its full capacity. Many features are labeled as complex and, therefore, unnecessary, even if they were properly implemented in other environments. Complexity is, in many cases, an audience-related metric. When this is the case, functionality that exceeds the competence of the user should simply be avoided.

I agree that software creators should take full responsibility for bugs and design flaws. I also agree that critical technologies require strict quality control before they are released. Nor can I deny that most technologies become perceptually more complex with each version. But in most cases they offer new features, perform better, and are more stable, and are therefore necessary. If we upgrade on a regular basis and stay familiar with the changes between versions, the learning process is not that difficult. I can’t deny that once in a while I fear we have allowed technologies to control too much of our lives, but I am not exactly afraid of Judgment Day.

By stephenfew. January 14th, 2018 at 10:38 pm

John and Daniel,

The fact that software developers use code libraries that they do not fully understand, because they have not examined the code themselves, isn’t actually relevant to the particular concern about complexity that I’m expressing. These code libraries were developed by people who do understand them, assuming that they did their jobs properly. I don’t expect a software developer to know any more about these libraries than how to interact with them and what to expect given particular inputs. When you employ these libraries, you do so as a user. As such, it isn’t necessary that you have an intimate knowledge of a library’s contents any more than the driver of a car must understand its internal mechanisms.
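
To make the distinction concrete, here is a trivial sketch (hypothetical, not drawn from any particular product): a developer can use a cryptographic hash function from a standard library knowing only its interface and its expected behavior, never its internals.

    import hashlib

    # As a user of this library, I know only the interface and the expected
    # behavior: give it bytes, get back a fixed-length digest. I do not need
    # to know how SHA-256 is implemented internally to use it correctly.
    digest = hashlib.sha256(b"embrace complexity").hexdigest()
    print(digest)  # 64 hexadecimal characters; the same input always yields the same output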

I’m concerned about systems and technologies that are too complex for humans to understand.

P.S. to Daniel: Do you really think that “in most cases” new software releases are “more stable”? I’d like to believe that this is true, but my experience suggests that it often is not.

By Daniel Zvinca. January 15th, 2018 at 3:10 am

Obviously, my profession has made me accustomed to technologies I do not fully understand. Your car example is just one of the many goods we use on a daily basis that have complex internals we are not aware of, but are useful nevertheless. I think that many of us ceased to think about the complexity of things a long time ago, as long as it does not affect us. There are many products we use within the limits of our abilities, and when they become too complex for us we simply avoid them. If products become too complex to be easily built, maintained, or controlled, they will become commercial failures and eventually disappear.

Most technologies have two types of upgrades, usually released in alternating fashion: upgrades that target stability and upgrades that bring novelty. The latter usually introduce new bugs, but it is very likely that those bugs get fixed in upcoming versions. The novelties that fail to prove useful are simply removed or ignored by users. Android and Windows, the two operating systems I use the most, are much better now than in the past. When I do not feel comfortable with new features, apps, or services, I simply ignore them.

By Nate. January 16th, 2018 at 6:59 pm

John Gall’s Systemantics: The Systems Bible is a funny but true book about systems and their failures. It’s full of excellent aphorisms, many of which recognize that we really have much less control over systems than we think we do. It also provides many examples of just how the law of unintended consequences is ignored over and over again.

A recent example of system antics is the false alert of an imminent ballistic missile attack that was broadcast in Hawaii.

By Dale Lehman. January 25th, 2018 at 2:23 pm

I find that this whole exchange about software complexity just underscores Stephen’s original point. Surely the world is more and more complex – not just software, but virtually all technologies, including our economic system (which I view as a form of technology). Just as surely, there are benefits associated with this complexity. But it seems as though those who are intimately involved in these technological developments cannot imagine that there is anything other than progress. More capable, more stable, built upon prior libraries, etc., etc. While all these advances are real, they do come with a dark side, and that dark side is increasingly under-appreciated and denied. It is almost as if we (as a species) cannot imagine that there are any technological “advances” that should or could be stopped (or slowed) due to our failure to fully understand their consequences. It is this failure that I find disturbing and that I believe Stephen (my interpretation – I’ll leave it to Stephen to clarify if he wants) is pointing to. We, as a species, are rapidly losing our ability (more properly, failing to gain an ability) to control our own technologies.
