Tony Stark is Not a Real Dude

March 2nd, 2018

The world that has emerged from the imagination of Stan Lee and his Marvel Comics colleagues is great fun. In recent years, Deadpool has become my new favorite superhero, with Wolverine close on his heels. Today, however, I want to talk about another Marvel superhero—Iron Man—or more specifically about Tony Stark, the man encased in that high-tech armor.

It’s important that, when we consider fictional characters like this wealthy high-tech entrepreneur and inventor, we clearly distinguish fantasy from reality. Tony Stark isn’t real. Furthermore, no one like Tony Stark actually exists. You know this, right? Our best and brightest high-tech moguls—Bill Gates of Microsoft, Elon Musk of Tesla and SpaceX, Larry Page and Sergey Brin of Google, Mark Zuckerberg of Facebook, and even the late Steve Jobs of Apple—don’t come close to the abilities of Tony Stark. No one does and no one can. Even if we combined all of these guys together into a single person and threw a top scientist such as Stephen Hawking into the mix, we still wouldn’t have someone who could do what Tony Stark not only does but does with apparent ease.

What am I getting at? There is a tendency today to believe that high-tech entrepreneurs and their inventions are much more advanced than they actually are. High-tech entrepreneurs and their inventions are buggy as hell. Most high-tech products are poorly designed. Even though good technologies can and do provide wonderful benefits, they are not magical in the ways that Marvel’s universe or high-tech marketers suggest. Technologies cannot always swoop in and save us in the nick of time. Furthermore, technologies are not intrinsically good as their advocates often suggest.

We should pursue invention, always looking for that next tool that will extend our reach in useful ways, but we should not bet our future on technological solutions. We dare not allow the Doomsday Clock to approach midnight hoping for a last-second invention that will turn back time. We must face the future with a more realistic assessment of our abilities and the limitations of technologies.

Earlier today, in his state of the nation address, Vladimir Putin announced the latest in Russian high-tech innovation: nuclear projectiles that cannot be intercepted. Assuming that his claim is true, and it probably is, Putin has just placed himself and Russia at the top of the potential threats list. A bully with the ability to destroy the world brings back frightening memories of my youth when we had to perform duck-and-cover drills, trusting that those tiny metal and wooden desks would shield us from a nuclear assault.

There is no Tony Stark to save us. Iron Man won’t be paying a visit to Putin to put that bully in his place. As I was reading the news story about Putin’s announcement in the Washington Post this morning, an ad appeared in the middle of the text with a photo of Taylor Swift and the caption “Look Inside Taylor Swift’s New $18 Million NYC Townhouse.” A news story that is the stuff of nightmares is paired with celebrity fluff, lulling us into complacency. If the news story makes you nervous, you can easily escape into Taylor’s luxurious abode and pull her 10,000 thread count satin sheets over your head. Perhaps we have nothing to fear, for our valiant and brave president, Donald Trump, will storm the Kremlin and take care of Putin with his bare hands if necessary (after the hugging is over, of course), just as he would have dispatched that high school shooter in Parkland, Florida.

We have every reason to believe in his altruism and utter superiority in a fight, don’t we? Anything less would be fake news.

Ah, but I digress. I was distracted by the allure of Taylor Swift and her soft sheets. Where was I? Oh yeah, Tony Stark is not a real dude. If we hope to survive, let alone thrive, we’ll need to focus on building character, improving our understanding of the world, and making some inconvenient decisions. Technologies will play a role, but they aren’t the main actors in this real-world drama. We are.

Big Data, Big Dupe: A Progress Report

February 23rd, 2018

My new book, Big Data, Big Dupe, was published early this month. Since its publication, several readers have expressed their gratitude in emails. As you can imagine, this is both heartwarming and affirming. Big Data, Big Dupe confirms what these seasoned data professionals recognized long ago on their own, and in some cases have been arguing for years. Here are a few excerpts from emails that I’ve received:

I hope your book is wildly successful in a hurry, does its job, and then sinks into obscurity along with its topic.  We can only hope! 

I hope this short book makes it into the hands of decision-makers everywhere just in time for their budget meetings… I can’t imagine the waste of time and money that this buzz word has cost over the past decade.

Like yourself I have been doing business intelligence, data science, data warehousing, etc., for 21 years this year and have never seen such a wool over the eyes sham as Big Data…The more we can do to destroy the ruse, the better!

I’m reading Big Data, Big Dupe and nodding my head through most of it. There is no lack of snake oil in the IT industry.

Having been in the BI world for the past 20 years…I lead a small (6 to 10) cross-functional/cross-team collaboration group with like-minded folks from across the organization. We often gather to pontificate, share, and collaborate on what we are actively working on with data in our various business units, among other topics.  Lately we’ve been discussing the Big Data, Big Dupe ideas and how within [our organization] it has become so true. At times we are like ‘been saying this for years!’…

I believe deeply in the arguments you put forward in support of the scientific method, data sensemaking, and the right things to do despite their lack of sexiness.

As the title suggests, I argue in the book that Big Data is a marketing ruse. It is a term in search of meaning. Big Data is not a specific type of data. It is not a specific volume of data. (If you believe otherwise, please identify the agreed-upon threshold in volume that must be surpassed for data to become Big Data.) It is not a specific method or technique for processing data. It is not a specific technology for making sense of data. If it is none of these, what is it?

The answer, I believe, is that Big Data is an irredeemably ill-defined and therefore meaningless term that has been used to fuel a marketing campaign that began about ten years ago to sell data technologies and services. Existing data products and services at the time were losing their luster in public consciousness, so a new campaign emerged to rejuvenate sales without making substantive changes to those products and services. This campaign has promoted a great deal of nonsense and downright bad practices.

Big Data cannot be redeemed by pointing to an example of something useful that someone has done with data and exclaiming “Three cheers for Big Data,” for that useful thing would have still been done had the term Big Data never been coined. Much of the disinformation that’s associated with Big Data is propagated by good people with good intentions who prolong its nonsense by erroneously attributing beneficial but unrelated uses of data to it. When they equate Big Data with something useful, they make a semantic connection that lacks a connection to anything real. That semantic connection is no more credible than attributing a beneficial use of data to astrology. People do useful things with data all the time. How we interact with and make use of data has been gradually evolving for many years. Nothing that is qualitatively different about data or its use emerged roughly ten years ago to correspond with the emergence of the term Big Data.

Although there is no consensus about the meaning of Big Data, one thing is certain: the term is responsible for a great deal of confusion and waste.

I read an article yesterday titled “Big Data – Useful Tool or Fetish?” that exposes some failures of Big Data. For example, it cites the failed $200,000,000 Big Data initiative of the Obama administration. You might think that I would applaud this article, but I don’t. I certainly appreciate the fact that it recognizes failures associated with Big Data, but its argument is logically flawed. Big Data is a meaningless term. As such, Big Data can neither fail nor succeed. By pointing out the failures of Big Data, this article endorses its existence, and in so doing perpetuates the ruse.

The article correctly assigns blame to the “fetishization of data” that is promoted by the Big Data marketing campaign. While Big Data now languishes with an “increasingly negative perception,” the gradually growing ranks of skilled professionals and useful technologies continue to make good use of data, as they always have.

Take care,

P.S. On March 6th, Stacey Barr interviewed me about Big Data, Big Dupe. You can find an audio recording of the interview on Stacey’s website.

Different Tools for Different Tasks

February 19th, 2018

I am often asked a version of the following question: “What data visualization product do you recommend?” My response is always the same: “That depends on what you do with data.” Tools differ significantly in their intentions, strengths, and weaknesses. No one tool does everything well. Truth be told, most tools do relatively little well.

I’m always taken by surprise when the folks who ask me for a recommendation fail to understand that I can’t recommend a tool without first understanding what they do with data. A fellow emailed this week to request a tool recommendation, and when I asked him to describe what he does with data, he responded by describing the general nature of the data that he works with (medical device quality data) and the amount of data that he typically accesses (“around 10k entries…across multiple product lines”). He didn’t actually answer my question, did he? I think this was, in part, because he and many others like him don’t think of what they do with data as consisting of different types of tasks. This is a fundamental oversight.

The nature of your data (marketing, sales, healthcare, education, etc.) has little bearing on the tool that’s needed. Even the quantity of data has relatively little effect on my tool recommendations unless you’re dealing with excessively large data sets. What you do with the data—the tasks that you perform and the purposes for which you perform them—is what matters most.

Your work might involve tasks that are somewhat unique to you, which should be taken into account when selecting a tool, but you also perform general categories of tasks that should be considered. Here are a few of those general categories:

  • Exploratory data analysis (Exploring data in a free-form manner, getting to know it in general, from multiple perspectives, and asking many questions to understand it)
  • Rapid performance monitoring (Maintaining awareness of what’s currently going on as reflected in a specific set of data to fulfill a particular role)
  • A routine set of specific analytical tasks (Analyzing the data in the same specific ways again and again)
  • Production report development (Preparing reports that will be used by others to look up data that’s needed to do their jobs)
  • Dashboard development (Developing displays that others can use to rapidly monitor performance)
  • Presentation preparation (Preparing displays of data that will be presented in meetings or in custom reports)
  • Customized analytical application development (Developing applications that others will use to analyze data in the same specific ways again and again)

Tools that do a good job of supporting exploratory data analysis usually do a poor job of supporting the development of production reports and dashboards, which require fine control over the positioning and sizing of objects. Tools that provide the most flexibility and control often do so by using a programming interface, which cannot support the fluid interaction with data that is required for exploratory data analysis. Every tool specializes in what it can do well, assuming it can do anything well.

In addition to the types of tasks that we perform, we must also consider the level of sophistication to which we perform them. For example, if you engage in exploratory data analysis, the tool that I recommend would vary significantly depending on the depth of your data analysis skills. I wouldn’t recommend a complex statistical analysis product such as SAS JMP if you’re untrained in statistics, just as I wouldn’t recommend a general-purpose tool such as Tableau Software if you’re well trained in statistics, except for performing statistically lightweight tasks.

Apart from the tasks that we perform and the level of skill with which we perform them, we must also consider the size of our wallet. Some products require a significant investment to get started, while others can be purchased for an individual user at little cost or even downloaded for free.

So, what tool do I recommend? It depends. Finding the right tool begins with a clear understanding of what you need to do with data and of your ability to do it.

Take care,

Framing AI as Life

January 31st, 2018

George Lakoff introduced the concept of “framing” in his book Metaphors We Live By. The terms and metaphors that we use to describe things serve as frames that influence our perceptions and our values. In his book, Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark frames future artificial intelligence (AI) as life.

Before proceeding, I should say that I appreciate much of Tegmark’s work. He is one of the few people involved in AI who are approaching the work thoughtfully and carefully. He is striving to safeguard AI development to support the interests of humanity. For this, I am immensely grateful. I believe that framing future AI as life, however, is inappropriate and at odds with the interests of humanity.

The version metaphor (1.0, 2.0, etc.), borrowed from the realm of software development, has been used in recent years to describe new stages in the development of many things. You’re probably familiar with the “Web 2.0” metaphor that Tim O’Reilly introduced several years ago. As the title of his book suggests, Tegmark refers to an imagined future of machines with general intelligence that matches or surpasses our own as “Life 3.0.” How could computers, however intelligent or even sentient, be classified as a new version of life? This is only possible by redefining what we mean by life. Here’s Tegmark’s definition:

Let’s define life very broadly, simply as a process that can retain its complexity and replicate…In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

According to Tegmark, Life 1.0 consisted of all biological organisms prior to humans. It was entirely governed by its DNA. Life 2.0 arose in humans with the ability to alter brain function. Our brains can be rewired to adapt and improve, free from the strict confines of genetic determinism. Life 1.0 was completely determined by its “hardware” (biology only), but Life 2.0 introduced the ability to rewrite its “software” (biology plus culture). Life 3.0, as Tegmark imagines it, will introduce the ability to go beyond rewriting its software to redesigning its hardware as well, resulting in unlimited adaptability. As he frames it, life proceeds from the biological (1.0) to the cultural (2.0) and eventually to the technological (3.0).

Notice how this frame ignores fundamental differences between organisms and machines, and in so doing alters the definition of life. According to the definition that you’ll find in dictionaries, plants and animals—organic entities—are alive; rocks, steel girders, and machines—inorganic entities—are not alive. A computer program that can spawn a copy of itself and place it on improved hardware might correctly be deemed powerful, but not alive.

Why does Tegmark argue that intelligent machines of the future would constitute life? He gives a hint when he writes,

The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species that we’ve encountered so far, let’s instead define life very broadly…

Indeed, definitions are often controversial when we scrutinize them deeply. This is because concepts—the boundaries that we create to group and separate things in our efforts to make sense of the world—are always somewhat arbitrary, but these concepts make abstract thinking and communication possible. Responding to the complexity of definitions by excessively broadening them undermines their usefulness.

Tegmark seems to be concerned that we would only value and embrace future AI if we classified it as living. Contrary to his concern, maintaining our existing definition of life would not prevent us from discovering new forms in the future. We can imagine and could certainly welcome biological organisms that are quite different from those that are already familiar. It is true, however, that keeping the definition of life firmly tied to biology would certainly and appropriately lead us to classify some newly discovered entities as something other than life. This needn’t concern Tegmark, for we value much that isn’t alive and devalue much that is alive. If super-intelligent AIs ever come into existence, we should think of them as similar to us in some ways and different in others, which is sensible, and we should value them to the degree that they are beneficial, not to the degree to which they qualify as life.

You might think that this is much ado about nothing. Why should we care if the definition of life is stretched to include machines? I care for two reasons: 1) in general, we should create and revise definitions more thoughtfully, and 2) specific to AI, we should recognize that machines are different from us in fundamental ways. To the first point, concepts, encapsulated in definitions, form our perceptions, and how we perceive things largely determines the quality of our lives and the utility of our decisions. To the second point, we dare not forget that the interests of a super-intelligent AI would be very different from our own. Recognizing the ways in which these potential machines of the future would be different from us will serve as a critical reminder that we must approach their development with care.

Tegmark states in the title of his book’s first chapter that AI is “the most important conversation of our time.” This is probably not the most important conversation of our time, but it is certainly important. I’m sharing my concerns as a part of this conversation. If we ever manage to equip computers with intelligence that equals our own, their faster processing speeds and greater storage capacities will enable them to rapidly achieve a level of intelligence that leaves us in the dust. That might justify the designation “Intelligence 3.0,” but not “Life 3.0.” I suggest that we frame super-intelligent AI in this manner instead.

Take care,

Only a Summary

January 30th, 2018

While listening to NPR today, I heard a Republican congressman say that we shouldn’t be concerned about the release of sensitive intelligence in the so-called “Nunes Memo,” which alleges abuses by the FBI and Justice Department in their investigation of Russian interference in the presidential election. Why should we not be concerned? Because the Nunes Memo is “just a summary.” When I heard this I let out an involuntary exclamation of exasperation. This congressman is either naïve or intentionally deceitful in this assessment—probably both.

Anyone who works with data knows that summaries are especially subject to bias and manipulation. Even raw data is biased to some degree, but summaries are much more so, for they are highly subjective and interpretive. This congressman argued that no harm could possibly be done by releasing this summary and allowing members of the general public to assess its merits for themselves. It isn’t possible, of course, to evaluate the merits of the summary without examining the source data on which it is based. The source data, however, is being withheld from the public.
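The point about summaries deserves a concrete illustration. In this hypothetical Python sketch (the data is invented for illustration), two honest summary statistics are computed from the same numbers, yet each leaves a very different impression; whoever chooses which summary to report shapes the story.

```python
# Hypothetical data: mostly small values plus one extreme outlier.
data = [1, 1, 2, 2, 3, 3, 100]

mean = sum(data) / len(data)           # pulled far upward by the outlier
median = sorted(data)[len(data) // 2]  # unaffected by the outlier

print(f"mean: {mean:.1f}")   # 16.0 -- suggests "values are high"
print(f"median: {median}")   # 2    -- suggests "values are low"
```

Both numbers are accurate, yet neither can be evaluated without examining the raw data behind it, which is precisely the problem when a summary is released while its source data is withheld.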

It’s ironic that alleged biases exhibited by some members of the intelligence committee are being countered by an obviously biased, politically motivated summary of those biases. The fact that Republicans are refusing to make public an alternative summary of the data that was prepared by Democrats reveals the obvious political motivation behind the action. Republicans in Congress believe that the public is incredibly stupid. I hope they’re wrong, but ignorance about data and the lack of skills that are needed to make sense of it are indeed rife. What a joke that we live in the so-called “information age.” It is probably more accurate to say that we live in the “misinformation age.”

Take care,

Scholarly Peer Reviews Must Involve Experts

January 23rd, 2018

The manner in which scholarly peer reviews are being performed in some settings today is not serving as an effective gatekeeper. Peer review is supposed to filter out invalid or otherwise inadequate work and to encourage good work. Most of the poor work that is being produced could be blocked from publication if the peer review system functioned as intended.

Although some historical instances of peer review can be traced back to the 18th century, it didn’t become a routine and formal part of the scholarly publication process until after World War II. With dramatic increases in the production of scholarly content by the mid-20th century, a means of filtering out inadequate work became imperative.

According to Wikipedia,

Scholarly peer review (also known as refereeing) is the process of subjecting an author’s scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal, conference proceedings or as a book. The peer review helps the publisher…decide whether the work should be accepted, considered acceptable with revisions, or rejected.

Peer review requires a community of experts in a given (and often narrowly defined) field, who are qualified and able to perform reasonably impartial review.

Scholarly publications, such as academic journals, are only useful if the claims within them are credible. As such, the peer review process performs a vital role. When the process was first established, it was called peer review based on the assumption that those who produced scholarly work were experts in the relevant field. An expert’s peers are other experts. A “community of experts” is essential to peer review.

Over time, in some fields of study, the production of scholarly work has increasingly involved students who are still fairly early in the process of developing expertise. Corresponding with this transition, peer reviewers also increasingly lack expertise. During their advanced studies, it is absolutely useful for students to be involved in research and the production of scholarly work, but this work should not be published based solely on reviews by their peers. Reviews from anyone who’s interested in the subject matter can potentially provide useful feedback to an author, but only reviews by experts can support the objectives of the peer review process.

Characterizing this problem strictly as one that stems from the involvement of students is not entirely accurate. Scholarly work that is submitted for publication is rarely authored by students alone. Almost always, a professor’s name is attached to the work as well. Unfortunately, even if we assume that a professor is an expert in something, we cannot assume expertise in the domain addressed by the work that’s submitted for publication. In my own field of data visualization, many professors who teach courses and do research in data visualization lack expertise in fundamental aspects of the field. For example, it is not uncommon for professors to focus solely on the development of data visualization software with little or no knowledge of the scientific foundations of data visualization theory or actual experience in the practice of data visualization. One of the first times that I became aware of this, much to my surprise, was when the professor who introduced me when I gave a keynote presentation at the Vis Week Conference a decade ago admitted to me privately that he had little knowledge of data visualization best practices.

Do you know how the expertise of peer reviewers is often determined? Those who apply to participate in the process rate themselves. On every occasion when I participated in the process, I completed a questionnaire that asked me to rate my own level of expertise in various domains. There are perhaps exceptions to this self-rating approach—I certainly hope so—but this appears to be typical in the domains of data visualization, human-computer interaction, and even statistics.

Something is amiss in the peer review process. As long as people who lack expertise are deciding which scholarly works to accept or reject for publication, the quality of published work will continue to be unreliable. We dare not forget the importance of expertise.

Take care,

Embrace Complexity

January 2nd, 2018

We live in a complex world. Humans are complex. The natural systems that operate in our world are complex. The systems and technologies that we create are increasingly complex. Despite essential complexities, we prefer to see things simply. This preference, although understandable, is becoming ever more dangerous.

I promote simplicity, but a version that strives to explain complex matters simply, without oversimplifying. Healthy simplification attempts to express complex matters without compromising truth. This form of simplicity never dumbs information down. Embracing complexity is hard work. It is not the realm of lazy minds. This hard work is necessary, however, for we can do great harm when we make decisions based on overly simplified representations of complex matters.

We long for a simple world, but that world does not exist. In the early days of our species, we understood relatively little about our world, but enough to give us an evolutionary edge. As our ability to understand the world gradually developed over time, our ability to survive and thrive usually increased with it, but there are exceptions. The world has always been complex, but we are making it even more so. We create technologies that we cannot understand and, therefore, cannot control. This is careless, irresponsible, and downright scary. In fact, given the potential power of many modern technologies, this is suicidal. Nevertheless, we can derive hope from the fact that our brains are much more capable of handling complexity than we routinely demonstrate. This capability, however, can only be developed through hard work that will never be done until we are convinced of its importance and commit to the effort. Even if most people remain lazy, it is critical that those who influence the decisions that establish our path forward are committed to this work.

An entire discipline, called “systems thinking,” has emerged to address complexity. It strives to see systems holistically—how the parts relate to one another in complex ways to produce systems with outcomes that cannot be understood by looking at those parts independently (i.e., analytically). Chances are, you’ve never heard of systems thinking. (In case you’re interested, a wonderful book titled Thinking in Systems by Donella H. Meadows provides a great introduction to the field.)

It is also encouraging that a few organizations have emerged to encourage deeper, more complex thinking. Personally, I appreciate and support the work of The Union of Concerned Scientists and The Center for Inquiry, which both work hard to expose failures in overly simplified thinking. There are also courageous and disciplined thinkers—Richard Dawkins and Robert Reich come immediately to mind—who raise their voices to warn against these errors.

These disciplines, organizations, and individuals challenge us to embrace complexity. In so doing, they challenge us to embrace the only path that will lead to humanity’s survival. We humans are a grand experiment. We’ve accomplished so much in our brief time on this planet. It would be a shame to let laziness and carelessness end the experiment prematurely. I strongly recommend the following technology-related guideline for human survival:

Never create systems or technologies that are so complex that we cannot understand them.

Where there is no understanding, there is no control. When we create situations beyond our control, we put ourselves at risk. Make understanding a priority, even when it is difficult.

Take care,

Beware Incredible Technology-Enabled Futures

December 27th, 2017

Throughout my life, the future has been envisioned and marketed by many as a technology-enabled utopia. As a child growing up in the greater Los Angeles area, my siblings and I were treated to an annual pilgrimage to Disneyland. Originally created for the 1964-65 World’s Fair, Disneyland’s “Carousel of Progress” presented Walt Disney’s awe-inspiring vision of a future enabled through new technologies.

Although I preferred the speedy thrill of a ride on the bobsleds, I found the Carousel of Progress fascinating. It, and one of my favorite cartoons, The Jetsons, inspired me to believe that technological advances might create a utopia in my lifetime.

Many of the technologies featured in these imaginative futures now exist, but our world is hardly a utopia. In fact, new technologies have created many new nightmares.

As the TED (Technology, Entertainment, and Design) Conference has grown from a single annual event in Monterey, California to a worldwide franchise of TEDx conferences, the ideas of a few well-curated speakers have grown into a huge and ever-expanding collection of talks ranging from brilliant to downright nonsensical. While thoughtful ideas are still presented in some TED talks, the speakers are no longer vetted with care.

The point of this article is not to critique TED in general, but to expose the absurdities of a new TED talk titled “Three Steps to Surviving the Robot Revolution,” by Charles Radclyffe.


[Radclyffe, on the left, is pictured with the actor Brent Spiner, who played the robot named Data in Star Trek: The Next Generation]

I don’t browse TED talks, so I only become aware of them when someone brings them to my attention. In this case, I received an email from Charles Radclyffe himself, which opened with the sentence, “I’m emailing you with my TEDx talk details as we’ve previously exchanged emails.” As far as I know, Radclyffe and I have never previously exchanged emails. Nevertheless, I was grateful to hear from him, for the TED talk that he brought to my attention caused me great concern.

In his talk, Radclyffe describes how robotics, automation, and artificial intelligence will together usher in a marvelous world if humans are willing to relinquish their jobs. He acknowledges how much people value employment and feel threatened by the “robot revolution,” but argues that, if we were no longer shackled to jobs, we could spend our time in activities that were vastly more meaningful, useful, and fulfilling.

He makes a distinction between work (“anything that you do with intention”) and a job (“work that you do with the intention of being paid”), and argues that the former is intrinsically valuable but the latter is not. This distinction, however, does not actually eliminate the value of employment, in part because most of us actually do need to earn a living. Radclyffe resolves this dilemma by promoting an incredibly naïve vision of the future. He argues that, if we would only step aside and let machines perform all of the labor that does not absolutely require “human touch” (a term that he doesn’t clarify but suggests is a rather short list), the products and services produced by machines would be free. That’s right—free! Here’s a direct quote:

If you eliminated human labour from the equation of any particular product or service, it would become free. In the past, we were faced with scarcity, but imagine an abundance economy. What little cost that remains would be eliminated by the market and by competition if we could make those industries that were essential for our survival ones with a minimum of human labour…If we encourage and not resist the pace of change, particularly in essential industries, the very goods and services that we all need to survive could be provided for free.

When I heard these words, my jaw hit the floor. “Are you nuts?” I thought. Radclyffe’s talk is marketing, not a realistic vision of the future. To buy into this, you must accept two premises: 1) products and services can be produced by machines without cost, and 2) the providers of these products and services will provide them for free. Neither premise is believable. Even if technologies eliminated all costs, which is not the case, the corporations that owned them would never give them away for free.

Radclyffe goes on to argue that, not only could we live more fulfilling lives if the need for human labor were eliminated, but the products and services created by machines, unlike our products and services today, would function ideally. He illustrates this claim with the example of food production. If humans ceased to be involved in food production, he claims, we could easily feed the entire world using a system of agriculture that was virtually flawless. Given our experience with agribusiness today, can you imagine the huge corporate owners of robotically automated food production, untethered from human labor, using sustainable practices that protected the environment to produce food that provided optimal nutrition for humans? I cannot.

I’ve spent the last 35 years helping people derive value from data using information technologies. I’m responsible for several technological innovations in the field of data visualization. As an experienced technologist, my expectations of technologies are realistic, not pie-in-the-sky visions of bliss. I have a passionate love/hate relationship with technologies. I love them when they’re needed and work well, but I hate them when they’re used to do what we should do ourselves or when they work poorly. Many technologies now exist that we would be better off without, and many technologies, especially information technologies, work abysmally. For some strange reason we have learned to give information technologies a pass, tolerating poor quality in these devices that we would never tolerate elsewhere.

Technologies, including robotics, automation, and artificial intelligence, will have an important role to play in our future. They will only play that role well, however, if we approach them thoughtfully and hold them to high standards of ethics and performance. If we ever do create a world that borders on utopia, technologies will no doubt assist, but they will not be the cause. This will come to pass primarily because we’ve progressed as human beings. To progress, we need to look inward and work hard to become our best selves. Technologies will not save us. To survive and flourish, we will need to be our own saviors.

Take care,

There’s Nothing Mere About Semantics

December 13th, 2017

Disagreements and confusion are often characterized as mere matters of semantics. There is nothing “mere” about semantics, however. Differences that are based in semantics can be insidious, for we can differ semantically without even realizing it. It is our shared understanding of word meanings that enables us to communicate. Unfortunately, our failure to define our terms clearly lies at the root of countless misunderstandings and a world of confusion.

Language requires definitions. Definitions, and how they vary depending on context, are central to semantics. We cannot communicate effectively unless those to whom we speak understand how we define our terms. Even in particular fields of study and practice, such as my field of data visualization, practitioners often fail to establish shared definitions of the field’s core terms. This leads to failed discussions, a great deal of confusion, and harm to the field.

The term “dashboard” has been one of the most confusing in data visualization since it came into common use about 15 years ago. If you’re familiar with my work, you know that I’ve lamented this problem and worked diligently to resolve it. In 2004, I wrote an article titled “Dashboard Confusion” that offered a working definition of the term. Here’s the definition that appeared in that article:

A dashboard is a visual display of the most important information needed to achieve one or more objectives that has been consolidated on a single computer screen so it can be monitored at a glance.

Over the years, I refined my original definition in various ways to create greater clarity and specificity. In my Dashboard Design course, in addition to the definition above, I eventually began to share the following revised definition as well:

A dashboard is a predominantly visual information display that people use to rapidly monitor current conditions that require a timely response to fulfill a specific role.

Primarily, I revised my original definition to emphasize that the information most in need of a dashboard—a rapid-monitoring display—is that which requires a timely response. Knowing what to display on a dashboard, rather than in other forms of information display, such as monthly reports, is one of the fundamental challenges of dashboard design.

Despite my steadfast efforts to promote clear guidelines for dashboard design, confusion persists because of the diverse and conflicting ways in which people define the term, some of which are downright nonsensical.

When Tableau Software first added the ability to combine multiple charts on a single screen in their product, I encouraged them to call it something other than a dashboard, knowing that calling it a dashboard would contribute to the confusion. The folks at Tableau couldn’t resist, however, because the term “dashboard” was popular and therefore useful for marketing and sales. Unfortunately, if you call any display that combines multiple charts, for whatever reason, a dashboard, you can say relatively little about effective design practices. This is because designs, to be effective, must vary significantly based on how and for what purpose the information is used. For example, how we should design a display that’s used for rapid monitoring—what I call a dashboard—is different in many ways from how we should design a display that’s used for exploratory data analysis.

To illustrate the ongoing prevalence of this problem, we don’t need to look any further than the most recent book of significance that’s been written about dashboards: The Big Book of Dashboards, by Steve Wexler, Jeffrey Shaffer, and Andy Cotgreave. The fact that all three authors are avid users and advocates of Tableau Software is reflected in their definition of a dashboard and in the examples of so-called dashboards that appear in the book. These examples share nothing in common other than the fact that they include multiple charts.

When one of the authors told me about his plans for the book as he and his co-authors were just beginning to collect examples, I strongly advised that they define the term “dashboard” clearly and only include examples that fit that definition. They did include a definition in the book, but what they came up with did not address my concern. They apparently wanted their definition to describe something in particular—monitoring—but the free-ranging scope of their examples prevented them from doing so exclusively. Given this challenge, they wrote the following definition:

A dashboard is a visual display of data used to monitor conditions and/or facilitate understanding.

Do you see the problem? Stating that a dashboard is used for monitoring conditions is specific. So far, so good. Had they completed the sentence with “and facilitate understanding,” the definition would have remained specific, but they didn’t. The problem is their inclusion of the hybrid conjunction: “and/or.” Because of the “and/or,” according to their definition a dashboard is any visual display whatsoever, so long as it supports monitoring or facilitates understanding. In other words, any display that 1) supports monitoring but doesn’t facilitate understanding, 2) facilitates understanding but doesn’t support monitoring, or 3) both supports monitoring and facilitates understanding, is a dashboard. Monitoring displays, analytical displays, simple lookup reports, even infographics, are all dashboards, as long as they either support monitoring or facilitate understanding. As such, the definition is all-inclusive to the point of uselessness.
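For readers who think in code, the logical flaw can be sketched as a simple truth table. The sketch below compares the book’s inclusive “and/or” definition with a stricter reading that requires both properties; the function names are my own, not the authors’:

```python
# A sketch of the logical difference between the book's "and/or" definition
# and a stricter definition requiring both properties. Names are illustrative.
from itertools import product

def qualifies_and_or(monitors, understands):
    # "monitor conditions and/or facilitate understanding" = inclusive or
    return monitors or understands

def qualifies_and(monitors, understands):
    # The stricter reading: a display must do both
    return monitors and understands

# Enumerate all four combinations of the two properties
for monitors, understands in product([True, False], repeat=2):
    print(f"monitors={monitors!s:5}  understands={understands!s:5}  "
          f"and/or={qualifies_and_or(monitors, understands)!s:5}  "
          f"and={qualifies_and(monitors, understands)}")
```

Under “and/or,” three of the four combinations qualify as dashboards; only a display that neither supports monitoring nor facilitates understanding is excluded, which is why the definition excludes almost nothing.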

Only 2 of the 28 examples of displays that appear in the book qualify as rapid-monitoring displays. The other 26 might be useful for facilitating understanding, but by including displays that share nothing in common except that they are all visual and include multiple charts, the authors undermined their own ability to teach anything that is specific to dashboard design. They provided useful bits of advice in the book, but they also added to the confusion that exists about dashboards and dashboard design.

In all disciplines and all aspects of life, as well, we need clarity in communication. As such, we need clearly defined terms. Using terms loosely creates confusion. It’s not just a matter of semantics. Semantics matter.

Take care,

New Book: Big Data, Big Dupe

December 6th, 2017

I’ve written a new book, titled Big Data, Big Dupe, which will be published on February 1, 2018.

As the title suggests, it is an exposé on Big Data—one that is long overdue. To give you an idea of the content, here’s the text that will appear on the book’s back cover:

Big Data, Big Dupe is a little book about a big bunch of nonsense. The story of David and Goliath inspires us to hope that something little, when armed with truth, can topple something big that is a lie. This is the author’s hope. While others have written about the dangers of Big Data, Stephen Few reveals the deceit that belies its illusory nature. If “data is the new oil,” Big Data is the new snake oil. It isn’t real. It’s a marketing campaign that has distracted us for years from the real and important work of deriving value from data.

Here’s the table of contents:

As you can see, unlike my four other books, this is not about data visualization, but it is definitely relevant to all of us who are involved in data sensemaking. If the nonsense of Big Data is making your work difficult and hurting your organization, this is a book that you might want to leave on the desks of your CEO and CIO. It’s short enough that they might actually read it.

Big Data, Big Dupe is now available for pre-order.

Take care,