Digital Thoreau

February 13th, 2019

In 1854, Henry David Thoreau’s thoughtful account of his years in the woods at Walden Pond was published. The book, Walden, is filled with insights that could only be acquired through quiet reflection on life’s essentials. One of my favorite quotes from the book, which I’ve used often in my work, is “Simplicity, simplicity, simplicity.” Even back then in the mid-19th century, long before the distractions of our modern digital world, Thoreau recognized the importance of choosing how we spend our time with great care. The value of his cautionary guidance is even greater today, for the digital distractions that vie for our attention are more prolific, insidious, and potentially harmful than those that Thoreau encountered. In the spirit of Thoreau, Cal Newport has written a wonderful new book to help us live with greater intention and less distraction in the modern world, titled Digital Minimalism: Choosing a Focused Life in a Noisy World.

Newport has rapidly become one of my favorite thinkers and writers about technology. Back in 2016, I reviewed one of his previous books, Deep Work, which I dearly love. Newport and I are both digital professionals who approach technologies with careful consideration. Neither of us is anti-technology. Instead, we understand that technologies are not inherently good, and that we should embrace only those that are truly useful, and only in a way that preserves that usefulness without inviting waste or harm.

Several thoughtful writers in recent years have pointed out how digital technologies have been designed to harvest our attention for profit. Tim Wu, who wrote The Attention Merchants, is perhaps foremost among them. Newport addresses this concern by providing a prescription for managing the harmful effects of digital technologies. He calls this prescription digital minimalism, which he defines as:

A philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value, and then happily miss out on everything else.

His prescription goes beyond tips and tricks, such as occasional digital sabbaths. It is more thorough, as the situation demands.

The problem is that small changes are not enough to solve our big issues with new technologies. The underlying behaviors we hope to fix are ingrained in our culture, and…they’re backed by powerful psychological forces that empower our base instincts. To reestablish control, we need to move beyond tweaks and instead rebuild our relationship with technology from scratch, using our deeply held values as a foundation.

Digital technologies are shaping cultures and minds, often in harmful ways. Before the words “Simplicity, simplicity, simplicity” in Walden appears the sentence “Our life is frittered away by detail.” At no time in the past has this been truer than it is today. The constant ding of incoming text messages, barely informative “Likes” on Facebook, and the endless queue of tweets that tether us to our smartphones are frittering our lives away. Digital technologies can add great value to our lives, but not if we embrace them indiscriminately.

Announcing “The Data Loom”

January 14th, 2019

When I wrote my most recent book, Big Data, Big Dupe, in early 2018, I thought it might be my last. As it turns out, I was mistaken. Mid-way through 2018, I became concerned enough about a particular problem to write another book, titled The Data Loom: Weaving Understanding by Thinking Critically and Scientifically with Data.

To give you an idea of its contents, here’s the text that will appear on the book’s back cover:

Contrary to popular myth, we do not yet live in the “Information Age.” At best, we live in the “Data Age,” obsessed with the production, collection, storage, dissemination, and monetization of digital data. But data, in and of itself, isn’t valuable. Data only becomes valuable when we make sense of it.

We rely on “information professionals” to help us understand data, but most fail in their efforts. Why? Not because they lack intelligence or tools, but mostly because they lack the necessary skills. Most information professionals have been trained primarily in the use of data analysis tools (Tableau, PowerBI, Qlik, SAS, Excel, R, etc.), but even the best tools are only useful in the hands of skilled individuals. Anyone can pick up a hammer and pound a nail, but only skilled carpenters can use a hammer to build a reliable structure. Making sense of data is skilled work, and developing those skills requires study and practice.

Weaving data into understanding involves several distinct but complementary thinking skills. Foremost among them are critical thinking and scientific thinking. Until information professionals develop these capabilities, we will remain in the dark ages of data.

This book is for information professionals, especially those who have been thrust into this important work without having a chance to develop these foundational skills. If you’re an information professional and have never been trained to think critically and scientifically with data, this book will get you started. Once on this path, you’ll be able to help usher in an Information Age worthy of the name.

And here’s an outline of the book’s contents:

Chapter 1 – Construct a Loom

Data sensemaking—the ability to weave data into understanding—requires a spectrum of skills. Critical thinking and scientific thinking are foremost among them.

Chapter 2 – Think Critically

When we think critically, we apply logic and avoid cognitive biases.

Chapter 3 – Think Scientifically

When we think scientifically, we apply the scientific method.

Chapter 4 – Question the Data

Thinking critically and scientifically leads us to ask essential questions about data to improve the reliability of our findings.

Chapter 5 – Measure Wisely

Metrics can be powerful, but we often measure the wrong things, measure the right things ineffectively, and use measurements in harmful ways.

Chapter 6 – Develop Good Thinking Habits

In addition to critical thinking and scientific thinking, data sensemaking is also enriched by developing good thinking habits.

Chapter 7 – Develop a Data-Sensemaking Culture

Effective data sensemaking is undermined by most organizational cultures. We must promote the cultural changes that are needed to embrace critical and scientific thinking with data.

Epilogue – Embrace the Opportunity

The Data Loom is scheduled for publication in May by Analytics Press.

The Malady of Lost Connections

August 14th, 2018

I just finished reading the most important book that I’ve encountered in years: “Lost Connections,” by Johann Hari. It succeeds in doing what its subtitle claims: “Uncovering the real causes of depression—and the unexpected solutions.”

As one of the many people who have struggled with depression, I greatly appreciate the insights that Hari shares in this book. My appreciation extends well beyond this, however, for this book isn’t just about depression and anxiety. It is also a thoughtful and thoroughly researched assessment of modern society. Depression and anxiety are symptoms of deep and systemic flaws in our modern, industrialized, consumerized, and technologized world, which has caused us to lose vital connections that are essential to human fulfillment. As it turns out, depression and anxiety are clear signals that something is very much amiss with our world.

If you collect all of the research data regarding anti-depressants—not just what the pharmaceutical industry has made public—and assess it without bias, you will find that depression is not the result of a chemical imbalance. There is no credible evidence that boosting serotonin actually reduces depression beyond the placebo effect. Furthermore, even though research indicates that some genes can predispose us to respond to our circumstances with depression, the causes of depression do not reside in human biology. Rather, they are rooted in human society and, in some cases, in traumatic experiences. Nevertheless, depression and anxiety are almost always treated by the medical community with drugs that are designed to correct a chemical imbalance that isn’t the cause. This approach has failed miserably.

As the book’s title suggests, depression and anxiety are rooted in disconnections:

  • Disconnection from meaningful work
  • Disconnection from other people
  • Disconnection from meaningful values
  • Disconnection from childhood trauma
  • Disconnection from status and respect
  • Disconnection from the natural world
  • Disconnection from a hopeful or secure future

It took Hari several years to track down the data and interview the experts, and the resulting story is incredible. These disconnections are intricately woven into the fabric of modern society. Nevertheless, there are still places where depression and anxiety are rare. In those places, the connections that have been disrupted elsewhere still exist.

There are steps that we can and should take as individuals to reestablish the connections that are vital to our lives, but the full solution lies in societal change. Hari lays out many of the steps that we can take to make this happen. Societal change isn’t easy and it takes time, but it’s the only thorough and lasting solution. The change that’s needed doesn’t require the rejection of useful advances in science and technology, but we must embrace these artifacts of modernity more intelligently and with greater care.

Please read this book. Please contribute to the restoration of connections in society that humankind sorely needs to endure and thrive.

The Perils of Technochauvinism

August 1st, 2018

More and more these days people are waking up to the fact that digital technologies often fail us and sometimes do great harm. The default assumption that digital technologies are always needed and beneficial is now being questioned by an increasing number of thoughtful people who understand these technologies well. One such person is Meredith Broussard, who, in her new book, Artificial Unintelligence, labels this erroneous assumption technochauvinism.

“Technochauvinism” is roughly equivalent to Evgeny Morozov’s term “technological solutionism,” which I’ve been using for years. Better than any other writer so far, Broussard explains the nature of digital technologies—what they are, how they work, what they do well, the ways in which they’re limited, and how they fail—in a manner that’s practical and accessible to anyone who’s interested. An accomplished journalist, she writes clearly and grounds her claims in evidence. An experienced digital technologist, she knows her subject firsthand.

As the title suggests, much of this book focuses on artificial intelligence (AI), which is fitting given the prolific hype and common misunderstandings that obscure AI technologies in particular. Broussard explains what artificial intelligence is and isn’t. While others have described the important distinction between general AI and narrow AI, Broussard explains this difference more clearly and illustrates AI more realistically, using interesting examples. In one chapter, she walks readers through the use of algorithms to make sense of who survived the Titanic disaster, and in so doing reveals both the strengths and weaknesses of machine learning. In another chapter, she takes readers along on a ride in an autonomous vehicle to illustrate the dangers of AI that overreaches. She puts the proper application of AI into perspective.

Apart from AI in particular, she also describes the historical roots of technochauvinism as a byproduct of the worldview that is shared by most of high tech’s power elite. When this limited, self-serving worldview is incorporated into digital technologies, problems result, often promoting injustice.

Computers compute—they do math. As such, they’re better than humans at tasks that are based on mathematical computations. Despite the ubiquitous metaphor, computers are not like human brains. Computers don’t think, and they aren’t sentient, even though terms such as artificial intelligence and machine learning suggest otherwise. Computers excel at computationally based tasks, but we can’t rely on them to perform tasks that require understanding, which they lack, without humans in the loop.

This book will not be well received by those who are so invested in digital technologies that they refuse to think critically about them. I’ve already noticed a few undeserved, negative reviews of this book on Amazon that reflect this closed-minded, self-serving perspective. Writing this book took courage. You don’t write a book like this to gain popularity or make money. You do it because you care deeply about the world. This book speaks the truth—a truth that needs to be heard.

The Danger of Technological Arrogance

June 10th, 2018

Technologists have never been particularly good at considering the ramifications of their work apart from those outcomes that are intended and beneficial. Only the rare technologist is inclined to step back from the work and think deeply about unintended and harmful consequences. Technologists want to feel good about their work, and they want to rock the world with their achievements as quickly as possible. Taking time to consider the larger implications of their work is annoyingly inconvenient. This myopic perspective is dangerous.

This is especially true regarding AI research and development. The potential risk of a superintelligent AI departing from the interests of humanity to pursue its own interests, assuming that general intelligence in a machine can be achieved, is all too real. No one who is informed about AI can responsibly ignore the risks or claim that they don’t exist.

With this in mind, I grew concerned while reading a thoughtful article published on June 9, 2018 in the New York Times titled “Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots” by Cade Metz. At an exclusive conference organized by Jeff Bezos that took place in March of this year, Rodney Brooks, an MIT roboticist, debated the potential dangers of AI with neuroscientist and philosopher Sam Harris. When Harris warned that, because the world was in an arms race toward AI, researchers might not have the time needed to ensure that superintelligence is built in a safe way, Brooks responded: “This is something you have made up,” implying that Harris’ argument was not based on evidence and reason. Spurred on by Brooks, Oren Etzioni, who leads the Allen Institute for Artificial Intelligence, laid into Harris as well. He argued:

Today’s AI systems are so limited, spending so much time worrying about superintelligence just doesn’t make sense. The people who take Mr. Musk’s side are philosophers, social scientists, writers — not the researchers who are working on AI.

Etzioni further claimed that worrying about superintelligence “is very much a fringe argument.” Anyone familiar with this debate, however, knows that concerns about AI do not reside only on the fringes, and they are certainly not “made up.” Etzioni cannot actually believe this without a great deal of self-imposed, head-in-the-sand delusion.

What especially concerned me about Etzioni’s position was the insinuation that only AI researchers were qualified to have informed opinions on the matter. This is the height of technological arrogance. As a longtime information technologist myself, I have always been annoyed by the tendency of my colleagues to see themselves as the smartest guys in the room. The intelligence of technologists is no greater on average than the intelligence of philosophers, social scientists, and writers. In fact, it’s quite possible that a comparison of intelligence, if such a thing were feasible, would relegate technologists to an inferior position. In my experience, philosophers, social scientists, and writers tend to think more broadly and with greater nuance than most technologists. Ethical considerations are more central to their training and therefore to their thinking. They are much more likely to consider the potential threats of technologies than technologists themselves.

We cannot leave decisions about the future of AI solely in the hands of AI researchers and developers any more than we can leave decisions about the protection and use of our personal data in the hands of social media executives and their employees. Recent findings regarding Facebook have underscored this fact. Despite their well-publicized rhetoric about “making the world a better place,” people who run technology companies are motivated by personal interests that bias their perspectives. They want to believe in the inherent good of their creations. The technologies that we create and the ways that we design them must be subject to thoughtful discussion that extends well beyond technologists. Technologies affect us all, and not always for good.

“Technically Wrong” Is Absolutely Right

April 11th, 2018

I’ve worked in high tech for 35 years. Over the years I’ve developed a love-hate relationship with this industry. I love technologies that are needed and work well. I love technology companies that respect their customers and employees. All too often, however, technologies and the companies that make them don’t deserve our love. Sara Wachter-Boettcher echoes this sentiment in her wonderful book Technically Wrong. Sara is not anti-technology, but she firmly believes that we should hold technologies and the companies that create them responsible for their failures, especially when they do harm.

Systemic problems in the ways that tech companies are managed and products are created are surfacing more and more often these days. In the last few days, Facebook is the tech company whose irresponsible behavior has dominated the news. Facebook is not alone. Tech companies can function responsibly and ethically, but those that do are the exceptions, not the norm. Tech companies have created the mystique that they are special, and for this reason we give them a pass. I’ve always been uncomfortable with this mystique, which veils the dysfunction of tech companies. People who work in tech are no more special on average than those who work in other organizations. They are neither smarter nor more talented, despite the fact that they are compensated as if they were.

Most tech companies are dominated by the rather narrow perspective of privileged white men, which contributes to many of their problems. Their lack of diversity and their assumption that they’re smarter than others lead to a myopic view of the world—one that misunderstands the needs of a large portion of their users. They think of a significant portion of their users as “edge cases,” and edge cases aren’t significant enough to consider.

Yes, I’m a privileged white guy myself, but I know that my success has been due in many respects to good fortune—the luck of privileged birth. Perhaps my background in the humanities and social sciences has helped me to see the world more broadly than many of my privileged high-tech brethren.

The book Technically Wrong exposes these problems eloquently and suggests solutions. Here’s the description that appears on the book’s dust cover:

Buying groceries, tracking our health, finding a date: whatever we want to do, odds are that we can now do it online. But few of us ask why all these digital products are designed the way they are. It’s time we change that. Many of the services we rely on are full of oversights, biases, and downright ethical nightmares. Chatbots that harass women. Signup forms that fail anyone who’s not straight. Social media sites that send peppy messages about dead relatives. Algorithms that put more black people behind bars.

Sara Wachter-Boettcher takes an unflinching look at the values, processes, and assumptions that lead to these and other problems. Technically Wrong demystifies the tech industry, leaving those of us on the other side of the screen better prepared to make informed choices about the services we use—and demand more from the companies behind them.

We should have started demanding more of tech companies long ago. If we had, many problems could have been prevented. It’s not too late, however, to turn this around, and turn it around we must.

Know Your Audience — Good Luck with That

March 15th, 2018

I’ve long appreciated the fact that knowing your audience is an important prerequisite for effective communication. Over time, however, I’ve learned that this can rarely be achieved with specificity. The reason is simple: audiences are rarely homogeneous. If your audience is composed of two or more people, it is to some extent diverse. Consequently, it is only possible to finely tailor communication for an audience of one, and even then it’s challenging.

In most scenarios, we should do our best to communicate in ways that work well for people in general rather than for particular individuals. At best, we can assess the interests, abilities, proclivities, and experiences of our audience to determine a range of communication approaches that are suitable and to perhaps discard some approaches that don’t fit. For example, if you were the warm-up act at a Trump political rally, you could safely assume that discourse suitable for a convention of physicists should be avoided. You could also assume that emotionally charged statements would carry more weight for most of your audience than a rational presentation of facts. (To be fair, this is true of most audiences.) You could not, however, narrow your approach to suit people who exhibit a particular intelligence as defined by Howard Gardner’s seven intelligences (visual-spatial, bodily kinesthetic, musical, etc.), although you could certainly cover the same content in multiple ways to broaden its effectiveness. As diversity in audiences increases, our communication approach must increasingly be informed by general rather than specific principles of communication. In the business of communication, knowing what works best for most people is more often useful than knowing what works best for particular people.

In the interest of communicating in the ways that suit people’s interests, abilities, proclivities, and experiences, we often shape our audiences to narrow their diversity. Schools do this by grouping students into grade levels and by offering multiple courses in a particular subject to suit the interests and abilities of particular groups. With unlimited time and resources, we could finely select our audiences to match a tailored communication approach, but this isn’t practical.

One of the best ways to accommodate the diverse needs of an audience is to practice empathy. If we can see them, we can pay attention to them. We can read their reactions. In my data visualization workshops, I’ve always limited the number of participants to 70, in part to make sure that I could see everyone well enough to read their reactions and adapt my teaching accordingly. Obviously, there are limits to what I can discern in facial expressions and physical gestures, but such cues can be quite informative. It is also for this reason that I’ve never taught my courses remotely, but only in face-to-face settings. Web-based courses, though sometimes necessary given the circumstances, are an inferior substitute for face-to-face interaction.

Another way that we can accommodate the diverse needs of an audience is to address the same content in multiple ways. This redundancy is useful and, when handled well, doesn’t annoy the audience. Covering the same content in multiple ways takes more time, so it comes with a cost, but it usually pays off.

“Know your audience” is useful advice, but it can only be applied to communications in limited ways. In the business of communications, it is more useful overall to understand how people process information in general and to base most of our communications on that knowledge.

Take care,

Randomness is Often Not Random

March 12th, 2018

In statistics, what we often identify as randomness in data is not actually random. Bear in mind, I am not talking about randomly generated numbers or random samples. Instead, I am referring to events about which data has been recorded. We learn of these events when we examine the data. We refer to an event as random when it is not associated with a discernible pattern or cause. Random events, however, almost always have causes. We just don’t know them. Ignorance of cause is not the absence of cause.
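The point that apparent randomness can mask a fully deterministic cause is easy to demonstrate in code. As a minimal sketch (my own illustration, not from the original post), Python's standard pseudo-random generator produces sequences that look patternless, yet two generators given the same seed yield identical output—the "randomness" reflects our ignorance of the cause, not its absence:

```python
import random

# Two generators seeded identically. Every value each one produces is
# fully determined by the seed and the generator's algorithm, even though
# the sequences look patternless to an observer who can't see the seed.
gen_a = random.Random(42)
gen_b = random.Random(42)

seq_a = [gen_a.randint(1, 100) for _ in range(5)]
seq_b = [gen_b.randint(1, 100) for _ in range(5)]

# Same cause (seed + algorithm), same "random" outcome.
assert seq_a == seq_b
```

Knowing the cause (the seed) makes the sequence entirely predictable; not knowing it makes the same sequence look random.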

Randomness is sometimes used as an excuse for preventable errors. I was poignantly reminded of this a decade or so ago when I became the victim of a so-called random event that occurred while undergoing one of the most despised medical procedures known to humankind: a colonoscopy. I was in my early fifties at the time, and it was my first encounter with this dreaded procedure. After this initial encounter, which I’ll now describe, I hoped that it would be my last.

While the doctor was removing one of five polyps that he discovered during his spelunking adventure into my dark recesses, he inadvertently punctured my colon. Apparently, however, he didn’t know it at the time, so he sent me home with the encouraging news that I was polyp-free. Having the contents of one’s colon leak out into other parts of the body isn’t healthy. During the next few days, severe abdominal pain developed, and I began to suspect that my 5-star rating was not deserved. Once I was admitted to the emergency room at the same facility where my illness was created, a scan revealed the truth of the colonoscopic transgression. Thus began my one and only overnight stay so far in a hospital.

After sharing a room with a fellow who was drunk out of his mind and wildly expressive, I hope to never repeat the experience. Things were touch and go for a few days as the medical staff pumped me full of antibiotics and hoped that the puncture would seal itself without surgical intervention. Had this not happened, the alternative would have involved removing a section of my colon and being fitted with a stylish bag for collecting solid waste. To make things more frightening than they needed to be, the doctor who provided this prognosis failed to mention that the bag would be temporary, lasting only about two months while my body ridded itself of infection, followed by another surgery to reconnect my plumbing.

In addition to a visit from the doctor whose communication skills and empathy were sorely lacking, I was also visited during my stay by a hospital administrator. She politely explained that punctures during a routine colonoscopy are random events that occur a tiny fraction of the time. According to her, these events should not be confused with medical error, for they are random in nature, without cause, and therefore without fault. Lying there in pain, I remember thinking, but not expressing, “Bullshit!” Despite the administrator’s assertion of randomness, the source of my illness was not a mystery. It was that pointy little device that the doctor snaked up through my plumbing for the purpose of trimming polyps. Departing from its assigned purpose, the trimmer inadvertently forged a path through the wall of my colon. This event definitely had a cause.

Random events are typically rare, but the cause of something rare is not necessarily unknown and certainly not unknowable. The source of the problem in this case was known, but what was not known was the specific action that initiated the puncture. Several possibilities existed. Perhaps the doctor involuntarily flinched in response to an itch. Perhaps he was momentarily distracted by the charms of his medical assistant. Perhaps his snipper tool got snagged on something and then jerked to life when the obstruction was freed. Perhaps the image conveyed from the scope to the computer screen lost resolution for a moment while the computer processed the latest Windows update. In truth, the doctor might have known why the puncture happened, but if he did, he wasn’t sharing. Regardless, when we have reliable knowledge of several potential causes, we should not ignore an event just because we can’t narrow it down to the specific culprit.

The hospital administrator engaged in another bit of creative wordplay during her brief intervention. Apparently, according to the hospital, and perhaps to medical practice in general, something that happens this rarely doesn’t actually qualify as an error. Rare events, however harmful, are designated as unpreventable and therefore, for that reason, are not errors after all. This is a self-serving bit of semantic nonsense. Whether or not rare errors can be easily prevented, they remain errors.

We shouldn’t use randomness as an excuse for ongoing ignorance and negligence. While it makes no sense to assign blame without first understanding the causes of undesirable events, it also makes no sense to dismiss those events as inconsequential and necessarily beyond the realm of understanding. Think of random events as invitations to deepen our understanding. We needn’t necessarily make them a priority for responsive action, for other problems that are already understood might deserve our attention more, but we shouldn’t dismiss them either. Randomness should usually be treated as a temporary label.

Take care,

When Metrics Do Harm

March 6th, 2018

We are obsessed with data. One aspect of this obsession is our fixation on metrics. Quantitative measures—metrics—can be quite useful for monitoring and managing performance, but only when they are skillfully used in the right circumstances for the right purposes. In his wonderful new book, The Tyranny of Metrics, Jerry Muller convincingly argues that the balance has shifted toward counterproductive and often harmful misuses of metrics.

As an historian, Muller brought a high degree of scholarship to his examination of metrics. I’ll let the description that appears on the inside flap of the book’s dust cover give you a sense of its contents.

Today, organizations of all kinds are fueled by the belief that the path to success is quantifying human performance, publicizing the results, and dividing up the rewards based on the numbers. But in our zeal to instill the evaluation process with scientific rigor, we’ve gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions. In this timely and powerful book, Jerry Muller uncovers the damage our obsession with metrics is causing—and shows how we can begin to fix the problem.

Filled with examples from education, medicine, business and finance, government, the police and military, and philanthropy and foreign aid, this brief and accessible book explains why the seemingly irresistible pressure to quantify performance distorts and distracts, whether by encouraging “gaming the stats” or “teaching to the test.” That’s because what can and does get measured is not always worth measuring, may not be what we really want to know, and may draw effort away from the things we care about. Along the way, we learn why paying for measured performance doesn’t work, why surgical scorecards may increase deaths, and much more. But metrics can be good when used as a complement to—rather than a replacement for—judgment based on personal experience, and Muller also gives examples of when metrics have been beneficial.

Complete with a checklist of when and how to use metrics, The Tyranny of Metrics is an essential corrective to a rarely questioned trend that increasingly affects us all.

I appreciate it when thoughtful people courageously challenge popular opinion by questioning what we blindly assume is good. It is the rare individual who struggles to row against the current. It is in this direction that we must set our course, however, when the wellspring of truth is located upstream.

Many skilled professionals who work with metrics already recognize ways in which metrics do harm when they are ill-defined, inappropriately chosen, improperly measured, or misapplied. If you’re one of these professionals, this book will help you make your concerns heard above the din that keeps your organization distracted and confused. This is a welcome voice of sanity in a world that worships data but seldom uses it meaningfully and skillfully.

Tony Stark is Not a Real Dude

March 2nd, 2018

The world that has emerged from the imagination of Stan Lee and his Marvel Comics colleagues is great fun. In recent years, Deadpool has become my new favorite superhero, with Wolverine close on his heels. Today, however, I want to talk about another Marvel superhero—Iron Man—or more specifically about Tony Stark, the man encased in that high-tech armor.

It’s important that, when we consider fictional characters like this wealthy high-tech entrepreneur and inventor, we clearly distinguish fantasy from reality. Tony Stark isn’t real. Furthermore, no one like Tony Stark actually exists. You know this, right? Our best and brightest high-tech moguls—Bill Gates of Microsoft, Elon Musk of Tesla and SpaceX, Larry Page and Sergey Brin of Google, Mark Zuckerberg of Facebook, and even the late Steve Jobs of Apple—don’t come close to the abilities of Tony Stark. No one does and no one can. Even if we combined all of these guys into a single person and threw a top scientist such as Stephen Hawking into the mix, we still wouldn’t have someone who could do what Tony Stark not only does but does with apparent ease.

What am I getting at? There is a tendency today to believe that high-tech entrepreneurs and their inventions are much more advanced than they actually are. In truth, high-tech inventions are buggy as hell, and most high-tech products are poorly designed. Even though good technologies can and do provide wonderful benefits, they are not magical in the ways that Marvel’s universe or high-tech marketers suggest. Technologies cannot always swoop in and save us in the nick of time. Furthermore, technologies are not intrinsically good, as their advocates often suggest.

We should pursue invention, always looking for that next tool that will extend our reach in useful ways, but we should not bet our future on technological solutions. We dare not allow the Doomsday Clock to approach midnight hoping for a last second invention that will turn back time. We must face the future with a more realistic assessment of our abilities and the limitations of technologies.

Earlier today, in his state of the nation address, Vladimir Putin announced the latest in Russian high-tech innovation: nuclear projectiles that cannot be intercepted. Assuming that his claim is true, and it probably is, Putin has just placed himself and Russia at the top of the potential threats list. A bully with the ability to destroy the world brings back frightening memories of my youth when we had to perform duck-and-cover drills, trusting that those tiny metal and wooden desks would shield us from a nuclear assault.

There is no Tony Stark to save us. Iron Man won’t be paying a visit to Putin to put that bully in his place. As I was reading the news story about Putin’s announcement in the Washington Post this morning, an ad appeared in the middle of the text with a photo of Taylor Swift and the caption “Look Inside Taylor Swift’s New $18 Million NYC Townhouse.” A news story that is the stuff of nightmares is paired with celebrity fluff, lulling us into complacency. If the news story makes you nervous, you can easily escape into Taylor’s luxurious abode and pull her 10,000-thread-count satin sheets over your head. Perhaps we have nothing to fear, for our valiant and brave president, Donald Trump, will storm the Kremlin and take care of Putin with his bare hands if necessary (after the hugging is over, of course), just as he would have dispatched that high school shooter in Parkland, Florida.

We have every reason to believe in his altruism and utter superiority in a fight, don’t we? Anything less would be fake news.

Ah, but I digress. I was distracted by the allure of Taylor Swift and her soft sheets. Where was I? Oh yeah, Tony Stark is not a real dude. If we hope to survive, let alone thrive, we’ll need to focus on building character, improving our understanding of the world, and making some inconvenient decisions. Technologies will play a role, but they aren’t the main actors in this real-world drama. We are.