Framing AI as Life

January 31st, 2018

George Lakoff introduced the concept of “framing” in Metaphors We Live By, a book that he co-authored with Mark Johnson. The terms and metaphors that we use to describe things serve as frames that influence our perceptions and our values. In his book, Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark frames future artificial intelligence (AI) as life.

Before proceeding, I should say that I appreciate much of Tegmark’s work. He is one of the few people involved in AI who are approaching the work thoughtfully and carefully. He is striving to safeguard AI development to support the interests of humanity. For this, I am immensely grateful. I believe that framing future AI as life, however, is inappropriate and at odds with the interests of humanity.

The version metaphor (1.0, 2.0, etc.), borrowed from the realm of software development, has been used in recent years to describe new stages in the development of many things. You’re probably familiar with the “Web 2.0” metaphor that Tim O’Reilly introduced several years ago. As the title of his book suggests, Tegmark refers to an imagined future of machines with general intelligence that matches or surpasses our own as “Life 3.0.” How could computers, however intelligent or even sentient, be classified as a new version of life? This is only possible by redefining what we mean by life. Here’s Tegmark’s definition:

Let’s define life very broadly, simply as a process that can retain its complexity and replicate…In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

According to Tegmark, Life 1.0 consisted of all biological organisms prior to humans. It was entirely governed by its DNA. Life 2.0 arose in humans as the ability to alter brain function. Our brains can be rewired to adapt and improve, free from the strict confines of genetic determinism. Life 1.0 was completely determined by its “hardware” (biology only), but Life 2.0 introduced the ability to rewrite its “software” (biology plus culture). Life 3.0, as Tegmark imagines it, will introduce the ability to go beyond rewriting its software to redesigning its hardware as well, resulting in unlimited adaptability. As he frames it, life proceeds from the biological (1.0) to the cultural (2.0) and eventually to the technological (3.0).

Notice how this frame ignores fundamental differences between organisms and machines, and in so doing alters the definition of life. According to the definition that you’ll find in dictionaries, plants and animals—organic entities—are alive; rocks, steel girders, and machines—inorganic entities—are not alive. A computer program that can spawn a copy of itself and place it on improved hardware might correctly be deemed powerful, but not alive.

Why does Tegmark argue that intelligent machines of the future would constitute life? He gives a hint when he writes,

The question of how to define life is notoriously controversial. Competing definitions abound, some of which include highly specific requirements such as being composed of cells, which might disqualify both future intelligent machines and extraterrestrial civilizations. Since we don’t want to limit our thinking about the future of life to the species that we’ve encountered so far, let’s instead define life very broadly…

Indeed, definitions are often controversial when we scrutinize them deeply. This is because concepts—the boundaries that we create to group and separate things in our efforts to make sense of the world—are always somewhat arbitrary, but these concepts make abstract thinking and communication possible. Responding to the complexity of definitions by excessively broadening them undermines their usefulness.

Tegmark seems to be concerned that we would only value and embrace future AI if we classified it as living. Contrary to his concern, maintaining our existing definition of life would not prevent us from discovering new forms in the future. We can imagine and could certainly welcome biological organisms that are quite different from those that are already familiar. It is true, however, that keeping the definition of life firmly tied to biology would certainly and appropriately lead us to classify some newly discovered entities as something other than life. This needn’t concern Tegmark, for we value much that isn’t alive and devalue much that is alive. If super-intelligent AIs ever come into existence, we should think of them as similar to us in some ways and different in others, which is sensible, and we should value them to the degree that they are beneficial, not to the degree to which they qualify as life.

You might think that this is much ado about nothing. Why should we care if the definition of life is stretched to include machines? I care for two reasons: 1) in general, we should create and revise definitions more thoughtfully, and 2) specific to AI, we should recognize that machines are different from us in fundamental ways. To the first point, concepts, encapsulated in definitions, form our perceptions, and how we perceive things largely determines the quality of our lives and the utility of our decisions. To the second point, we dare not forget that the interests of a super-intelligent AI would be very different from our own. Recognizing the ways in which these potential machines of the future would be different from us will serve as a critical reminder that we must approach their development with care.

Tegmark states in the title of his book’s first chapter that AI is “the most important conversation of our time.” This is probably not the most important conversation of our time, but it is certainly important. I’m sharing my concerns as a part of this conversation. If we ever manage to equip computers with intelligence that equals our own, their faster processing speeds and greater storage capacities will enable them to rapidly achieve a level of intelligence that leaves us in the dust. That might justify the designation “Intelligence 3.0,” but not “Life 3.0.” I suggest that we frame super-intelligent AI in this manner instead.

Take care,

Only a Summary

January 30th, 2018

While listening to NPR today, I heard a Republican congressman say that we shouldn’t be concerned about the release of sensitive intelligence in the so-called “Nunes Memo,” which alleges abuses by the FBI and Justice Department in their investigation of Russian interference in the presidential election. Why should we not be concerned? Because the Nunes Memo is “just a summary.” When I heard this I let out an involuntary exclamation of exasperation. This congressman is either naïve or intentionally deceitful in this assessment—probably both.

Anyone who works with data knows that summaries are especially subject to bias and manipulation. Even raw data is biased to some degree, but summaries are much more so, for they are highly subjective and interpretive. This congressman argued that no harm could possibly be done by releasing this summary and allowing members of the general public to assess its merits for themselves. It isn’t possible, of course, to evaluate the merits of the summary without examining the source data on which it is based. The source data, however, is being withheld from the public.

It’s ironic that alleged biases exhibited by some members of the intelligence community are being countered by an obviously biased, politically motivated summary of those biases. The fact that Republicans are refusing to make public an alternative summary of the data that was prepared by Democrats reveals the obvious political motivation behind the action. Republicans in Congress believe that the public is incredibly stupid. I hope they’re wrong, but ignorance about data and the lack of skills that are needed to make sense of it are indeed rife. What a joke that we live in the so-called “information age.” It is probably more accurate to say that we live in the “misinformation age.”

Take care,

Scholarly Peer Reviews Must Involve Experts

January 23rd, 2018

The manner in which scholarly peer reviews are being performed in some settings today is not serving as an effective gatekeeper. Peer review is supposed to filter out invalid or otherwise inadequate work and to encourage good work. Most of the poor work that is being produced could be blocked from publication if the peer review system functioned as intended.

Although some historical instances of peer review can be traced back to the 18th century, it didn’t become a routine and formal part of the scholarly publication process until after World War II. With dramatic increases in the production of scholarly content by the mid-20th century, a means of filtering out inadequate work became imperative.

According to Wikipedia,

Scholarly peer review (also known as refereeing) is the process of subjecting an author’s scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal, conference proceedings or as a book. The peer review helps the publisher…decide whether the work should be accepted, considered acceptable with revisions, or rejected.

Peer review requires a community of experts in a given (and often narrowly defined) field, who are qualified and able to perform reasonably impartial review.

Scholarly publications, such as academic journals, are only useful if the claims within them are credible. As such, the peer review process performs a vital role. When the process was first established, it was called peer review based on the assumption that those who produced scholarly work were experts in the relevant field. An expert’s peers are other experts. A “community of experts” is essential to peer review.

Over time, in some fields of study, the production of scholarly work has increasingly involved students who are still fairly early in the process of developing expertise. Corresponding with this transition, peer reviewers also increasingly lack expertise. During their advanced studies, it is absolutely useful for students to be involved in research and the production of scholarly work, but this work should not be published based solely on reviews by their peers. Reviews from anyone who’s interested in the subject matter can potentially provide useful feedback to an author, but only reviews by experts can support the objectives of the peer review process.

Characterizing this problem strictly as one that stems from the involvement of students is not entirely accurate. Scholarly work that is submitted for publication is rarely authored by students alone. Almost always, a professor’s name is attached to the work as well. Unfortunately, even if we assume that a professor is an expert in something, we cannot assume expertise in the domain addressed by the work that’s submitted for publication. In my own field of data visualization, many professors who teach courses and do research in data visualization lack expertise in fundamental aspects of the field. For example, it is not uncommon for professors to focus solely on the development of data visualization software with little or no knowledge of the scientific foundations of data visualization theory or actual experience in the practice of data visualization. One of the first times that I became aware of this, much to my surprise, was a decade ago, when the professor who introduced my keynote presentation at the Vis Week Conference admitted to me privately that he had little knowledge of data visualization best practices.

Do you know how the expertise of peer reviewers is often determined? Those who apply to participate in the process rate themselves. On every occasion when I participated in the process, I completed a questionnaire that asked me to rate my own level of expertise in various domains. There are perhaps exceptions to this self-rating approach—I certainly hope so—but this appears to be typical in the domains of data visualization, human-computer interaction, and even statistics.

Something is amiss in the peer review process. As long as people who lack expertise are deciding which scholarly works to accept or reject for publication, the quality of published work will continue to be unreliable. We dare not forget the importance of expertise.

Take care,

Embrace Complexity

January 2nd, 2018

We live in a complex world. Humans are complex. The natural systems that operate in our world are complex. The systems and technologies that we create are increasingly complex. Despite essential complexities, we prefer to see things simply. This preference, although understandable, is becoming ever more dangerous.

I promote simplicity, but a version that strives to explain complex matters simply, without oversimplifying. Healthy simplification attempts to express complex matters without compromising truth. This form of simplicity never dumbs information down. Embracing complexity is hard work. It is not the realm of lazy minds. This hard work is necessary, however, for we can do great harm when we make decisions based on overly simplified representations of complex matters.

We long for a simple world, but that world does not exist. In the early days of our species, we understood relatively little about our world, but enough to give us an evolutionary edge. As our ability to understand the world gradually developed over time, our ability to survive and thrive usually increased with it, but there are exceptions. The world has always been complex, but we are making it even more so. We create technologies that we cannot understand and, therefore, cannot control. This is careless, irresponsible, and downright scary. In fact, given the potential power of many modern technologies, this is suicidal. Nevertheless, we can derive hope from the fact that our brains are much more capable of handling complexity than we routinely demonstrate. This capability, however, can only be developed through hard work that will never be done until we are convinced of its importance and commit to the effort. Even if most people remain lazy, it is critical that those who influence the decisions that establish our path forward are committed to this work.

An entire discipline, called “systems thinking,” has emerged to address complexity. It strives to see systems holistically—how the parts relate to one another in complex ways to produce systems with outcomes that cannot be understood by looking at those parts independently (i.e., analytically). Chances are, you’ve never heard of systems thinking. (In case you’re interested, a wonderful book titled Thinking in Systems by Donella H. Meadows provides a great introduction to the field.)

It is also encouraging that a few organizations have emerged to encourage deeper, more complex thinking. Personally, I appreciate and support the work of The Union of Concerned Scientists and The Center for Inquiry, which both work hard to expose failures in overly simplified thinking. There are also courageous and disciplined thinkers—Richard Dawkins and Robert Reich come immediately to mind—who raise their voices to warn against these errors.

These disciplines, organizations, and individuals challenge us to embrace complexity. In so doing, they challenge us to embrace the only path that will lead to humanity’s survival. We humans are a grand experiment. We’ve accomplished so much in our brief time on this planet. It would be a shame to let laziness and carelessness end the experiment prematurely. I strongly recommend the following technology-related guideline for human survival:

Never create systems or technologies that are so complex that we cannot understand them.

Where there is no understanding, there is no control. When we create situations beyond our control, we put ourselves at risk. Make understanding a priority, even when it is difficult.

Take care,

Beware Incredible Technology-Enabled Futures

December 27th, 2017

Throughout my life, the future has been envisioned and marketed by many as a technology-enabled utopia. As a child growing up in the greater Los Angeles area, my siblings and I were treated to an annual pilgrimage to Disneyland. Originally created for the 1964-65 World’s Fair, Disneyland’s “Carousel of Progress” presented Walt Disney’s awe-inspiring vision of a future enabled through new technologies.

Although I preferred the speedy thrill of a ride on the bobsleds, I found the Carousel of Progress fascinating. It, along with one of my favorite cartoons, The Jetsons, inspired me to believe that technological advances might create a utopia in my lifetime.

Many of the technologies featured in these imaginative futures now exist, but our world is hardly a utopia. In fact, new technologies have created many new nightmares.

As the TED (Technology, Entertainment, and Design) Conference has grown from a single annual event in Monterey, California to a worldwide franchise of TEDx conferences, the ideas of a few well-curated speakers have grown into a huge and ever-expanding collection of talks ranging from the brilliant to the downright nonsensical. While thoughtful ideas are still presented in some TED talks, the speakers are no longer vetted with care.

The point of this article is not to critique TED in general, but to expose the absurdities of a new TED talk titled “Three Steps to Surviving the Robot Revolution,” by Charles Radclyffe.


[Radclyffe, on the left, is pictured with the actor Brent Spiner, who played the robot named Data in Star Trek: The Next Generation]

I don’t browse TED talks, so I only become aware of them when someone brings them to my attention. In this case, I received an email from Charles Radclyffe himself, which opened with the sentence, “I’m emailing you with my TEDx talk details as we’ve previously exchanged emails.” As far as I know, Radclyffe and I have never previously exchanged emails. Nevertheless, I was grateful to hear from him because the TED talk that I found caused me great concern.

In his talk, Radclyffe describes how robotics, automation, and artificial intelligence will together usher in a marvelous world if humans are willing to relinquish their jobs. He acknowledges how much people value employment and feel threatened by the “robot revolution,” but argues that, if we were no longer shackled to jobs, we could spend our time in activities that were vastly more meaningful, useful, and fulfilling.

He makes a distinction between work (“anything that you do with intention”) and a job (“work that you do with the intention of being paid”), and argues that the former is intrinsically valuable but the latter is not. This distinction, however, does not eliminate the value of employment, in part because most of us actually do need to earn a living. Radclyffe resolves this dilemma by promoting an incredibly naïve vision of the future. He argues that, if we would only step aside and let machines perform all of the labor that does not absolutely require “human touch” (a term that he doesn’t clarify but suggests covers a rather short list of tasks), the products and services produced by machines would be free. That’s right—free! Here’s a direct quote:

If you eliminated human labour from the equation of any particular product or service, it would become free. In the past, we were faced with scarcity, but imagine an abundance economy. What little cost that remains would be eliminated by the market and by competition if we could make those industries that were essential for our survival ones with a minimum of human labour…If we encourage and not resist the pace of change, particularly in essential industries, the very goods and services that we all need to survive could be provided for free.

When I heard these words, my jaw hit the floor. “Are you nuts?” I thought. Radclyffe’s talk is marketing, not a realistic vision of the future. To buy into this, you must accept two premises: 1) products and services can be produced by machines without cost, and 2) the providers of these products and services will provide them for free. Neither premise is believable. Even if technologies eliminated all costs, which is not the case, the corporations that owned them would never give them away for free.

Radclyffe goes on to argue that, not only could we live more fulfilling lives if the need for human labor were eliminated, but the products and services created by machines, unlike our products and services today, would function ideally. He illustrates this claim with the example of food production. If humans ceased to be involved in food production, he suggests, we could easily feed the entire world using a system of agriculture that was virtually flawless. Given our experience with agribusiness today, can you imagine the huge corporate owners of robotically automated food production, untethered from human labor, using sustainable practices that protected the environment to produce food that provided optimal nutrition for humans? I cannot.

I’ve spent the last 35 years helping people derive value from data using information technologies. I’m responsible for several technological innovations in the field of data visualization. As an experienced technologist, I hold realistic expectations of technologies, not pie-in-the-sky visions of bliss. I have a passionate love/hate relationship with technologies. I love them when they’re needed and work well, but I hate them when they’re used to do what we should do ourselves or when they work poorly. Many technologies now exist that we would be better off without, and many technologies, especially information technologies, work abysmally. For some strange reason, we have learned to give information technologies a pass, tolerating poor quality in these devices that we would never tolerate elsewhere.

Technologies, including robotics, automation, and artificial intelligence, will have an important role to play in our future. They will only play that role well, however, if we approach them thoughtfully and hold them to high standards of ethics and performance. If we ever do create a world that borders on utopia, technologies will no doubt assist, but they will not be the cause. This will come to pass primarily because we’ve progressed as human beings. To progress, we need to look inward and work hard to become our best selves. Technologies will not save us. To survive and flourish, we will need to be our own saviors.

Take care,

There’s Nothing Mere About Semantics

December 13th, 2017

Disagreements and confusion are often characterized as mere matters of semantics. There is nothing “mere” about semantics, however. Differences that are based in semantics can be insidious, for we can differ semantically without even realizing it. It is our shared understanding of word meanings that enables us to communicate. Unfortunately, our failure to define our terms clearly lies at the root of countless misunderstandings and a world of confusion.

Language requires definitions. Definitions and how they vary depending on context are central to semantics. We cannot communicate effectively unless those to whom we speak understand how we define our terms. Even in particular fields of study and practice, such as my field of data visualization, practitioners often fail to define even the field’s core terms in ways that are shared. This leads to failed discussions, a great deal of confusion, and harm to the field.

The term “dashboard” has been one of the most confusing in data visualization since it came into common use about 15 years ago. If you’re familiar with my work, you know that I’ve lamented this problem and worked diligently to resolve it. In 2004, I wrote an article titled “Dashboard Confusion” that offered a working definition of the term. Here’s the definition that appeared in that article:

A dashboard is a visual display of the most important information needed to achieve one or more objectives that has been consolidated on a single computer screen so it can be monitored at a glance.

Over the years, I refined my original definition in various ways to create greater clarity and specificity. In my Dashboard Design course, in addition to the definition above, I eventually began to share the following revised definition as well:

A dashboard is a predominantly visual information display that people use to rapidly monitor current conditions that require a timely response to fulfill a specific role.

Primarily, I revised my original definition to emphasize that the information most in need of a dashboard—a rapid-monitoring display—is that which requires a timely response. Knowing what to display on a dashboard, rather than in other forms of information display, such as monthly reports, is one of the fundamental challenges of dashboard design.

Despite my steadfast efforts to promote clear guidelines for dashboard design, confusion persists because of the diverse and conflicting ways in which people define the term, some of which are downright nonsensical.

When Tableau Software first added the ability to combine multiple charts on a single screen in their product, I encouraged them to call it something other than a dashboard, knowing that applying that term so loosely would contribute to the confusion. The folks at Tableau couldn’t resist, however, because the term “dashboard” was popular and therefore useful for marketing and sales. Unfortunately, if you call any display that combines multiple charts for whatever reason a dashboard, you can say relatively little about effective design practices. This is because designs, to be effective, must vary significantly based on how and for what purpose the information is used. For example, how we should design a display that’s used for rapid monitoring—what I call a dashboard—is different in many ways from how we should design a display that’s used for exploratory data analysis.

To illustrate the ongoing prevalence of this problem, we don’t need to look any further than the most recent book of significance that’s been written about dashboards: The Big Book of Dashboards, by Steve Wexler, Jeffrey Shaffer, and Andy Cotgreave. The fact that all three authors are avid users and advocates of Tableau Software is reflected in their definition of a dashboard and in the examples of so-called dashboards that appear in the book. These examples share nothing in common other than the fact that they include multiple charts.

When one of the authors told me about his plans for the book as he and his co-authors were just beginning to collect examples, I strongly advised that they define the term dashboard clearly and only include examples that fit that definition. They did include a definition in the book, but what they came up with did not address my concern. They apparently wanted their definition to describe something in particular—monitoring—but the free-ranging scope of their examples prevented them from doing so exclusively. Given this challenge, they wrote the following definition:

A dashboard is a visual display of data used to monitor conditions and/or facilitate understanding.

Do you see the problem? Stating that a dashboard is used for monitoring conditions is specific. So far, so good. Had they completed the sentence with “and facilitate understanding,” the definition would have remained specific, but they didn’t. The problem is their inclusion of the hybrid conjunction: “and/or.” Because of the “and/or,” according to their definition a dashboard is any visual display whatsoever, so long as it supports monitoring or facilitates understanding. In other words, any display that 1) supports monitoring but doesn’t facilitate understanding, 2) facilitates understanding but doesn’t support monitoring, or 3) both supports monitoring and facilitates understanding, is a dashboard. Monitoring displays, analytical displays, simple lookup reports, even infographics, are all dashboards, as long as they either support monitoring or facilitate understanding. As such, the definition is all-inclusive to the point of uselessness.
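
To make the logical consequence of that “and/or” concrete, here is a small illustrative sketch in Python—my own, purely hypothetical, and not anything from the book—that treats each version of the definition as a simple test of two properties:

    # Hypothetical illustration of the two definitions as predicates.
    def is_dashboard_per_book(supports_monitoring, facilitates_understanding):
        # The book's wording: "monitor conditions and/or facilitate understanding"
        return supports_monitoring or facilitates_understanding

    def is_dashboard_strict(supports_monitoring, facilitates_understanding):
        # The narrower alternative: a display must do both
        return supports_monitoring and facilitates_understanding

    # An infographic that facilitates understanding but supports no monitoring:
    print(is_dashboard_per_book(False, True))  # True  -- counted as a dashboard
    print(is_dashboard_strict(False, True))    # False -- excluded

Because virtually every visual display of data either supports monitoring or facilitates understanding, the first test returns True for nearly everything—which is precisely the problem.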

Only 2 of the 28 examples of displays that appear in the book qualify as rapid-monitoring displays. The other 26 might be useful for facilitating understanding, but by including displays that share nothing in common except that they are all visual and include multiple charts, the authors undermined their own ability to teach anything that is specific to dashboard design. They provided useful bits of advice in the book, but they also added to the confusion that exists about dashboards and dashboard design.

In all disciplines and all aspects of life, as well, we need clarity in communication. As such, we need clearly defined terms. Using terms loosely creates confusion. It’s not just a matter of semantics. Semantics matter.

Take care,

New Book: Big Data, Big Dupe

December 6th, 2017

I’ve written a new book, titled Big Data, Big Dupe, which will be published on February 1, 2018.

As the title suggests, it is an exposé on Big Data—one that is long overdue. To give you an idea of the content, here’s the text that will appear on the book’s back cover:

Big Data, Big Dupe is a little book about a big bunch of nonsense. The story of David and Goliath inspires us to hope that something little, when armed with truth, can topple something big that is a lie. This is the author’s hope. While others have written about the dangers of Big Data, Stephen Few reveals the deceit that belies its illusory nature. If “data is the new oil,” Big Data is the new snake oil. It isn’t real. It’s a marketing campaign that has distracted us for years from the real and important work of deriving value from data.

Here’s the table of contents:

As you can see, unlike my four other books, this is not about data visualization, but it is definitely relevant to all of us who are involved in data sensemaking. If the nonsense of Big Data is making your work difficult and hurting your organization, this is a book that you might want to leave on the desks of your CEO and CIO. It’s short enough that they might actually read it.

Big Data, Big Dupe is now available for pre-order.

Take care,

Researchers — Share Your Data!

November 13th, 2017

One of the most popular shows in the early years of television, hosted by Art Linkletter, included a segment called “Kids say the darndest things.” Linkletter would have conversations with young children who could be counted on to say things that adults found entertaining. In recent years, I’ve experienced my own version of this, which could be described as “Researchers say the darndest things.” My conversations with the authors of data visualization research studies have often featured shocking statements that would be amusing if they weren’t so potentially harmful.

The most recent example occurred in email correspondence with the lead author of a study titled “Evaluating the Impact of Binning 2D Scalar Fields.” I’m currently working on a newsletter article about binned versus continuous color scales in data visualization, so this paper interested me. After reading the paper, however, I had a few questions, so I contacted the author. One of my requests was, “I would like to see the full data set that you collected during the experiment.” Here’s the response that I received from the paper’s author: “In psychology, we do not share data sets but the full analyses are available in the supplementary materials.” You can imagine my shock and dismay. Researchers say the darndest things!

Withholding the data that was collected in a research study—the data on which the published findings and claims were based—subverts the essential nature and goals of science. Published research studies should be accompanied by the data sets on which their findings were based—always. The data should be made readily available to anyone who is interested, just as “supplemental materials” are often made available.

Only good can result from sharing our research data. If we share our data, our results can be confirmed. If we share our data, errors in our work can be identified and corrected. If we share our data, science can progress.

Empirical research is based on data. We make observations, usually in the form of measurements, which serve as the data sets on which our findings are based. Only by reviewing our data can the validity of empirical research be confirmed or denied by the research community. Only by sharing our data can questions about our findings be pursued by those who are interested. Refusing to share our data is the antithesis of science.

The author’s claim that “In psychology, we do not share our data” is false. Psychology researchers do not have a “Do not share your data” policy. I’m astounded that the author thought that I’d buy this absurd claim. What is true, however, is that, even though there is no policy that research data should not be shared, it usually isn’t. On many occasions this is not a deliberate act of concealment, but a mere act of laziness. The data files that researchers use are often messy, and they don’t want the bother of structuring and labeling those files in a manner that would make them useful if shared. On more than one occasion I have requested data files only to be told that it would take too much time to put them into a form that could be shared. This response always makes me wonder if the messiness of those files might have caused the researchers themselves to make errors during their analysis of the data. When I told a respected psychology researcher friend of mine about the “In psychology, we don’t share our data” response that I received from the study’s author, he told me, “In my experience, extreme protectiveness about data tends to correlate with work that is not stellar in quality.” I suspect that this is true.

If you can’t make your research data available, either on some public medium (e.g., accessible as a download from a web page) or upon request, you’d better have a really good excuse. You could try the old standby “My dog ate it,” but it probably won’t work any better than it did when you were in elementary school. If your excuse is, “After doing my analysis and writing my paper, I somehow misplaced the data,” the powers that be (e.g., your university or the publication that made your study public) should respond by saying, “Do it over.”

If I could set the standards for research, I would require that the data be examined during the peer review process. It isn’t necessary that every reviewer examine the data, but at least one who is qualified to detect errors should. Among other potential problems, calculations performed on the data should be checked and it should be determined if statistics have been properly used. Checking the data should be fundamental to the peer review process. If this were done, some of the poor research that wastes our time each year with shoddy work and false claims would remain unpublished. I realize that this would complicate the process. Well, guess what, good research takes time and effort. Doing it well is hard work.
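
To be clear about what I mean by checking the data, here is a minimal sketch of the kind of re-check a reviewer could perform if the data were shared. The file name, column name, and reported value below are hypothetical placeholders, not drawn from any actual study:

    # A reviewer's re-check of a single reported statistic (all names hypothetical).
    import csv
    import statistics

    REPORTED_MEAN_MS = 412.0  # value copied from the paper under review (hypothetical)

    with open("study_data.csv", newline="") as f:
        times = [float(row["response_time_ms"]) for row in csv.DictReader(f)]

    recomputed_mean = statistics.mean(times)
    print(f"Reported: {REPORTED_MEAN_MS}, recomputed: {recomputed_mean:.1f}")
    if abs(recomputed_mean - REPORTED_MEAN_MS) > 0.5:
        print("Discrepancy found -- ask the authors to explain or correct it.")

Even a simple check like this would catch transcription errors and miscalculations before they were published.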

If you want to keep your data private, then do the world a favor and keep your research private as well. It isn’t valid research unless your findings are subject to review, and your findings cannot be fully reviewed without the data.

Take care,

Design with a Purpose in Mind

October 24th, 2017

The merits of something’s design cannot be determined without first understanding the purpose for which it is used and the nature of those who will use it, including their abilities. When looking at the photo below, you no doubt see two poorly designed chairs. The seats are far too low and the backs are far too tall for comfort. Imagine sitting in one of these ill-proportioned chairs.

If these are chairs, they are poorly designed for all but humans of extremely odd proportions, but they are not chairs. Rather, they are kneelers, used for prayer. Here’s a more ornate example:

And here’s one that looks more like those that are typically found in churches:

Not only are we unable to evaluate the merits of something’s design without first understanding its use and users, we also cannot design something ourselves without first understanding these things. This is definitely true of data visualizations. We must always begin the design process with questions such as these:

  • For whom is this data visualization being designed?
  • What is the audience’s experience/expertise in viewing data visualizations?
  • What knowledge should the audience acquire when viewing this data visualization?

The point that I’m making should be obvious to anyone who’s involved in data visualization. Sadly, it is not.

Data visualizations should not be designed on a whim. Based on the knowledge derived so far from the science of data visualization, if you understand your purpose and audience completely, you can determine the ideal way to design a data visualization. You can only determine this ideal design, however, to the extent that you know the science of data visualization and have developed the skills necessary to apply it. Our knowledge of data visualization best practices will change and improve as the science advances, and when it does our designs will change as well. In the meantime, we should understand the science and apply the practices that it informs with skill. None of us do this perfectly—we make mistakes—but we should strive to do it better with each new attempt. Data visualization is a craft informed by science, not an art driven by creative whim.

Take care,

Eye-Tracking Nonsense from Tableau

October 9th, 2017

Don’t trust everything you read. Surely you know this already. What you might not know is that you should be especially wary when people call what they’ve written a “research study.” I was prompted to issue this warning by a June 29, 2017 entry in Tableau’s blog titled “Eye-tracking study: 5 key learnings for data designers everywhere”. The “study” was done at Tableau Conference 2016 by the Tableau Research and Design team in “real-time…with conference attendees.” If Tableau wishes to call this research, then I must qualify it as bad research. It produced no reliable or useful findings. Rather than a research study, it would be more appropriate to call this “someone having fun with an eye tracker.” (Note: I’m basing my critique of this study solely on the article that appears in Tableau’s blog. I could not find any other information about it and no contact information was provided in the article. I requested contact information by posting a comment in Tableau’s blog in response to the article, but my request was ignored.)

Research studies have a goal in mind—or at least they should. They attempt to learn something useful. According to the article, the goal of this study was to answer the question, “Can we predict where people look when exposed to a dashboard they’ve never seen before?” Furthermore, “Translated into a customer’s voice: how do I, as a data analyst, design visually compelling dashboards?” What is the point of tracking where people look when viewing so-called dashboards (i.e., in Tableau’s terms, any screen that exhibits multiple charts) that they haven’t seen before and have no actual interest in using? None. This is evidenced by the fact that none of the “5 key learnings” are reliable or useful for designing actual dashboards, unless you define a dashboard as an information display that people who have no obvious interest in the data look at once, for no particular purpose. Only attempts to visually compel people to examine and interact with information in ways that lead to useful understanding—that is, in ways that actually inform—are relevant to information designers. What were participants asked to do with the dashboard? According to the article,

We didn’t give participants a task for this Tableau Labs activity, but that doesn’t mean our participants were not goal-directed. Humans are “meaning-making” animals; we can’t stop ourselves from finding a purpose. Every person looking at one of these dashboards had a task, we just didn’t know what it was. Perhaps it was “look at all the crazy stuff people create with Tableau?!”

Despite the speculations above, we actually have a fairly good idea of the task that participants performed, which was to quickly get familiar with an unknown, never-seen-before display. Where someone’s eyes look when seeing a screen of information for the first time is not where their eyes will look when they are looking at that screen to ingest and understand information. This is not how research studies are conducted. I shouldn’t have to say this. This is pseudo-science.

When participants at the conference were asked to look at so-called dashboards for the first time, which were not relevant to them, and to do so for an unknown purpose (or lack thereof), what did eye-tracking discover? Here’s a list of the “5 key learnings”:

  1. “(BIG) Numbers matter”
  2. “Repetition fatigue”
  3. “Humans like humans”
  4. “Guide by contrast”
  5. “Form is part of function”

(BIG) Numbers matter

The observation behind the claim that “BIG Numbers matter” was that people tend to look at huge numbers that stand alone on the screen. Actually, people tend to look at anything that is extraordinarily big, not just numbers. In a sea of small numbers, big numbers stand out. What this tells us is actually expressed separately as key learning number 4: “Guide by contrast.” In other words, things that look different from the norm catch our attention. This is not a key learning. This is well known. Here’s the example that appears in the article:

[Video: Big Numbers]

Each “key learning” was illustrated in the article by a video. In all of the videos, the sections of the screen that appear light and therefore visible, as opposed to the darkened sections, are the sections that received attention—the lighter the section, the more attention it received. The big numbers in this example appear at the top, but even though they occupy the most visually prominent portion of the screen, they apparently did not garner more attention than the bar graphs that appear in the section below them. If the attention-grabbing character of big numbers was revealed in this study, this particular screen does not provide clear evidence to illustrate the finding.

In response to this claim, we should be asking the question, “Is it useful to draw people’s attention to big numbers on a dashboard?” Typically, it is not, because a number by itself without context provides little information, certainly not enough to fulfill any actual tasks that people might use dashboards to perform. Nevertheless, the research team advises, “If you have an important number, make it big.” I would advise, if you have an important piece of information, express it in a way that not only catches your audience’s attention but does so in a way that is informative.

Repetition fatigue

Apparently, when people look at a dashboard that they’ve never seen before that isn’t relevant to them, and do so for no particular purpose, if the same type of chart appears multiple times, they get bored after they’ve examined the first chart. If you’re not actually trying to understand and use the information on the dashboard but merely scanning it for visual appeal, then yes, you probably won’t bother examining multiple charts that look the same. This isn’t how actual dashboards function, however. People look at dashboards, no matter how you define the term, to learn something, not just for visual entertainment. When you have a goal in mind when examining a dashboard, the kind of “repetition fatigue” that the researchers warn against probably does not come into play.

We should always select the form of display that best suits the data and its use. We should never arbitrarily switch to a different type of chart out of concern that people won’t look at more than one chart of a particular type. Doing so would render the dashboard less effective.

Here’s the example that appears in the article to feature this claim:

[Video: Repetition Fatigue]

Even if I cared about this dashboard, I might not bother looking at more than one of these particular charts because, from what I can tell, none of them appear to be informative.

Humans like humans

Yes, people are attracted to people. Faces, in particular, grab our attention. According to the article, “if a human or human-like figure is present, it’ll get attention.” And what is the point of this key learning? Unless the human figure itself communicates data in an effective way, placing one on a dashboard adds no value. Also, if the point is to get someone to look at data when it needs attention, you cannot suddenly place human figures on the dashboard to achieve this effect.

Here’s the example that appears in the article:

[Video: People Like Humans]

This study did not actually demonstrate this claim. It doesn’t indicate that people’s attention is necessarily drawn to human figures in particular. We know that people’s attention is drawn to faces, but this study might not have indicated anything more than the fact that an illustration of any recognizable physical form in the midst of an information display—something that looks quite different from the rest of the dashboard—catches people’s attention.

I’ve seen this particular screen of information before. It presents workers’ compensation information. The human figure functions as a heatmap to show where on the body injuries were occurring. The human figure wasn’t there to attract attention; it was there to convey information. It certainly wasn’t there because “humans like humans.”

Guide by contrast

We’ve known for ages that contrast—either the difference between the background and the foreground or making something appear differently than it usually does—grabs attention, if not overdone. This is not a key learning. Here’s how the finding is described in the article:

Areas of high visual contrast acted as guideposts throughout a dashboard. During the early viewing sequence, the eyes tended to jump from one high contrast element to the next. Almost like a kid’s dot-to-dot drawing, you can use high contrast elements to move visual attention around your dashboard. That being said, it’s notable that high contrast must be used judiciously. If used sparingly, high contrast elements will construct a logical path. Used abundantly, high contrast elements could create a messy and visually overwhelming dashboard.

And here’s the example that was shown to illustrate this:

[Video: Scanning Sequence]

Although it wasn’t explained, I assume that this video displays the sequence of focal points that a single participant exhibited. It certainly does not show the particular sequence of glances that was exhibited by all participants. Even if the researchers explained how to interpret this video, it wouldn’t tell us how to use contrast to lead viewers’ eyes through a dashboard in a particular sequence. Discovering how to do this using contrast would indeed be a key learning.

Form is part of function

The researchers complete their list of learnings with a final bit of information that is well known. Yes, the form that we give an information display contributes to its functionality. Here are the insights that the researchers share with us:

All dashboards have a form (triangular, grid, columnar) and the eyes follow this form. This result was both surprising and not surprising at all. Humans are information seekers: when we look at something for the first time, we want to get information from it. So, we look directly at the information (and don’t look at areas with no information). What’s important to note is the design freedom this gives an author. You don’t need to conform to rules like “put anything important in the upper left hand corner.” Instead, you should be aware of the physical form of your dashboard and use your space accordingly.

Knowing that “form is a part of function” actually tells us the opposite of what the researchers claim. It does not grant us “design freedom” and encourage us to ignore well-known principles and practices of design. Quite the opposite. Understanding how form contributes to function directs us to design information displays in particular ways that are most effective. In other words, it constrains our design choices to those that work. Contrary to the researchers’ statement, placing something that’s always important in the upper-left corner of the dashboard, all else being equal, is a good practice, for this is where people tend to look first. Ironically, if you review the first four eye-tracking videos that appear in the article, they seem to confirm this. Only the video pictured below is an exception, but this is because nothing whatsoever appears to the left of the centered title.

The example that was provided to illustrate this learning does not clarify it in the least.

[Video: Form as Function]

The researchers were trying to be provocative, suggesting that we should ignore well-established findings of prior research. After all, how could research done in the past by dedicated scientists compete with this amazing eye-tracking study that was done at a Tableau conference?

The true key learning that we should take from this so-called study is what I led off with: “Don’t trust everything you read.” I know some talented researchers who work for Tableau. This study was not done by them. My guess is that it was done by the marketing department.

Take care,
