Framing AI as Life

George Lakoff popularized the concept of “framing,” beginning with Metaphors We Live By, co-authored with Mark Johnson. The terms and metaphors that we use to describe things serve as frames that influence our perceptions and our values. In his book Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark frames future artificial intelligence (AI) as life.

Before proceeding, I should say that I appreciate much of Tegmark’s work. He is one of the few people involved in AI who are approaching the work thoughtfully and carefully. He is striving to safeguard AI development to support the interests of humanity. For this, I am immensely grateful. I believe that framing future AI as life, however, is inappropriate and at odds with the interests of humanity.

The version metaphor (1.0, 2.0, etc.), borrowed from the realm of software development, has been used in recent years to describe new stages in the development of many things. You’re probably familiar with the “Web 2.0” metaphor that Tim O’Reilly introduced several years ago. As the title of his book suggests, Tegmark refers to an imagined future of machines with general intelligence that matches or surpasses our own as “Life 3.0.” How could computers, however intelligent or even sentient, be classified as a new version of life? This is only possible by redefining what we mean by life. Here’s Tegmark’s definition:

Let’s define life very broadly, simply as a process that can retain its complexity and replicate… In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

According to Tegmark, Life 1.0 consisted of all biological organisms prior to humans. It was entirely governed by its DNA. Life 2.0 arose with humans, who gained the ability to alter their own brain function. Our brains can be rewired to adapt and improve, free from the strict confines of genetic determinism. Life 1.0 was completely determined by its “hardware” (biology only), but Life 2.0 introduced the ability to rewrite its “software” (biology plus culture). Life 3.0, as Tegmark imagines it, will go beyond rewriting its software to redesigning its hardware as well, resulting in unlimited adaptability. As he frames it, life proceeds from the biological (1.0) to the cultural (2.0) and eventually to the technological (3.0).

Notice how this frame ignores fundamental differences between organisms and machines, and in so doing alters the definition of life. According to the definition that you’ll find in dictionaries, plants and animals—organic entities—are alive; rocks, steel girders, and machines—inorganic entities—are not alive. A computer program that can spawn a copy of itself and place it on improved hardware might correctly be deemed powerful, but not alive.

Why does Tegmark argue that intelligent machines of the future would constitute life? He gives a hint when he writes,

The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species that we’ve encountered so far, let’s instead define life very broadly…

Indeed, definitions are often controversial when we scrutinize them deeply. This is because concepts—the boundaries that we create to group and separate things in our efforts to make sense of the world—are always somewhat arbitrary, but these concepts make abstract thinking and communication possible. Responding to the complexity of definitions by excessively broadening them undermines their usefulness.

Tegmark seems to be concerned that we would only value and embrace future AI if we classified it as living. Contrary to his concern, maintaining our existing definition of life would not prevent us from discovering new forms in the future. We can imagine and could certainly welcome biological organisms that are quite different from those that are already familiar. It is true, however, that keeping the definition of life firmly tied to biology would appropriately lead us to classify some newly discovered entities as something other than life. This needn’t concern Tegmark, for we value much that isn’t alive and devalue much that is alive. If super-intelligent AIs ever come into existence, we should think of them as similar to us in some ways and different in others, which is sensible, and we should value them to the degree that they are beneficial, not to the degree to which they qualify as life.

You might think that this is much ado about nothing. Why should we care if the definition of life is stretched to include machines? I care for two reasons: 1) in general, we should create and revise definitions more thoughtfully, and 2) specific to AI, we should recognize that machines are different from us in fundamental ways. To the first point, concepts, encapsulated in definitions, shape our perceptions, and how we perceive things largely determines the quality of our lives and the utility of our decisions. To the second point, we dare not forget that the interests of a super-intelligent AI would be very different from our own. Recognizing the ways in which these potential machines of the future would be different from us will serve as a critical reminder that we must approach their development with care.

Tegmark states in the title of his book’s first chapter that AI is “the most important conversation of our time.” This is probably not the most important conversation of our time, but it is certainly important. I’m sharing my concerns as a part of this conversation. If we ever manage to equip computers with intelligence that equals our own, their faster processing speeds and greater storage capacities will enable them to rapidly achieve a level of intelligence that leaves us in the dust. That might justify the designation “Intelligence 3.0,” but not “Life 3.0.” I suggest that we frame super-intelligent AI in this manner instead.

Take care,

2 Comments on “Framing AI as Life”

By Dale Lehman. February 2nd, 2018 at 12:42 pm

As there have been no comments yet on this post, I wonder if you would turn your attention to a different aspect of this book. I did not read the book, but read the free parts that were available on Amazon. While definitions of what it means to be human are important, I was struck by something else in the book. It appears to describe visions of Life 3.0, an era when artificial intelligence comes of age. The author describes three schools of thought – optimists, skeptics, and the beneficial AI movement – and the last of these is clearly where the author’s beliefs lie. And, the author has formed an admirable research institute (FLI – the Future of Life Institute) where many great minds (and a fair amount of funding, including $10 million from Elon Musk) are devoted to making beneficial uses of AI. It is truly an admirable and critical effort.

But I would like you to comment on what seems incredibly naive about this view. There are two problems I see with beneficial AI. First, it is inherently anti-democratic. Since many people (lay persons as well as the majority of AI researchers) do not subscribe to the principles of beneficial AI (the Asilomar Principles, an admirable list that I don’t have complaints about), it would seem that making those principles work would require some power for this movement over the non-believers (or non-subscribers). It appears that the book is silent about how one group of informed and powerful researchers is to predominate over those who don’t sign on to, or practice, these principles. I would maintain that a critical aspect of being human is precisely how humans organize to deal with diversity of opinions and practices. And, AI gives me little reason to be optimistic about this. I suppose having an AI legislature may be an improvement over the way our political bodies currently work, but I’m not ready to throw in the towel just yet.

The second naive aspect of the book, it appears to me, is that these admirable efforts at establishing safe AI practices are doomed to fail. I think the development and use of AI is already beyond what these researchers can contain. Do we really believe that Amazon, or Google, or Alibaba will not use AI if it is profitable to violate the safe AI principles? And, if these 3 tech giants manage to refrain from doing harm, that other firms won’t fill that vacuum? For every researcher who signs on to the principles, I suspect there are 3 or 4 more who will not. And, even when a researcher agrees to the principles, I suspect that the principles will become largely irrelevant in the face of commercially useful AI developments that conflict with them. How many examples do we need of ethical principles that are widely accepted in principle and widely violated in practice? Naive is what I think the book’s view is – although there is value in at least articulating a principled vision.

I’d appreciate it if you would comment on these aspects of the book.

By stephenfew. February 2nd, 2018 at 5:08 pm

I share your concern that safeguards, no matter how thoughtful and thorough, might be impossible to enforce. I don’t know that we have a choice but to make the attempt, however, because AI development is proceeding. It is not a foregone conclusion, though, that general AI will ever come into existence. It is possible that the apocalyptic outcomes from a superintelligent AI that we fear will never have an opportunity to emerge.
