Embrace Complexity

January 2nd, 2018

We live in a complex world. Humans are complex. The natural systems that operate in our world are complex. The systems and technologies that we create are increasingly complex. Despite these essential complexities, we prefer to see things simply. This preference, although understandable, is becoming ever more dangerous.

I promote simplicity, but a version that strives to explain complex matters simply, without oversimplifying. Healthy simplification attempts to express complex matters without compromising truth. This form of simplicity never dumbs information down. Embracing complexity is hard work. It is not the realm of lazy minds. This hard work is necessary, however, for we can do great harm when we make decisions based on overly simplified representations of complex matters.

We long for a simple world, but that world does not exist. In the early days of our species, we understood relatively little about our world, but enough to give us an evolutionary edge. As our ability to understand the world gradually developed over time, our ability to survive and thrive usually increased with it, but there are exceptions. The world has always been complex, but we are making it even more so. We create technologies that we cannot understand and, therefore, cannot control. This is careless, irresponsible, and downright scary. In fact, given the potential power of many modern technologies, this is suicidal. Nevertheless, we can derive hope from the fact that our brains are much more capable of handling complexity than we routinely demonstrate. This capability, however, can only be developed through hard work that will never be done until we are convinced of its importance and commit to the effort. Even if most people remain lazy, it is critical that those who influence the decisions that establish our path forward are committed to this work.

An entire discipline, called “systems thinking,” has emerged to address complexity. It strives to see systems holistically—how the parts relate to one another in complex ways to produce systems with outcomes that cannot be understood by looking at those parts independently (i.e., analytically). Chances are, you’ve never heard of systems thinking. (In case you’re interested, a wonderful book titled Thinking in Systems by Donella H. Meadows provides a great introduction to the field.)

It is also encouraging that a few organizations have emerged to encourage deeper, more complex thinking. Personally, I appreciate and support the work of The Union of Concerned Scientists and The Center for Inquiry, which both work hard to expose failures in overly simplified thinking. There are also courageous and disciplined thinkers—Richard Dawkins and Robert Reich come immediately to mind—who raise their voices to warn against these errors.

These disciplines, organizations, and individuals challenge us to embrace complexity. In so doing, they challenge us to embrace the only path that will lead to humanity’s survival. We humans are a grand experiment. We’ve accomplished so much in our brief time on this planet. It would be a shame to let laziness and carelessness end the experiment prematurely. I strongly recommend the following technology-related guideline for human survival:

Never create systems or technologies that are so complex that we cannot understand them.

Where there is no understanding, there is no control. When we create situations beyond our control, we put ourselves at risk. Make understanding a priority, even when it is difficult.

Take care,

Beware Incredible Technology-Enabled Futures

December 27th, 2017

Throughout my life, the future has been envisioned and marketed by many as a technology-enabled utopia. As a child growing up in the greater Los Angeles area, my siblings and I were treated to an annual pilgrimage to Disneyland. Disneyland’s “Carousel of Progress,” originally created for the 1964-65 New York World’s Fair, presented Walt Disney’s awe-inspiring vision of a future enabled through new technologies.

Although I preferred the speedy thrill of a ride on the bobsleds, I found the Carousel of Progress fascinating. It, along with one of my favorite cartoons, The Jetsons, inspired me to believe that technological advances might create a utopia in my lifetime.

Many of the technologies featured in these imaginative futures now exist, but our world is hardly a utopia. In fact, new technologies have created many new nightmares.

As the TED (Technology, Entertainment, and Design) Conference has grown from a single annual event in Monterey, California, to a worldwide franchise of TEDx conferences, the ideas of a few well-curated speakers have grown into a huge and ever-expanding collection of talks ranging from the brilliant to the downright nonsensical. While thoughtful ideas are still presented in some TED talks, the speakers are no longer vetted with care.

The point of this article is not to critique TED in general, but to expose the absurdities of a new TED talk titled “Three Steps to Surviving the Robot Revolution,” by Charles Radclyffe.


[Radclyffe, on the left, is pictured with the actor Brent Spiner, who played the robot named Data in Star Trek: The Next Generation]

I don’t browse TED talks, so I only become aware of them when someone brings them to my attention. In this case, I received an email from Charles Radclyffe himself, which opened with the sentence, “I’m emailing you with my TEDx talk details as we’ve previously exchanged emails.” As far as I know, Radclyffe and I have never previously exchanged emails. Nevertheless, I was grateful to hear from him, for the TED talk that his email led me to caused me great concern.

In his talk, Radclyffe describes how robotics, automation, and artificial intelligence will together usher in a marvelous world if humans are willing to relinquish their jobs. He acknowledges how much people value employment and feel threatened by the “robot revolution,” but argues that, if we were no longer shackled to jobs, we could spend our time in activities that were vastly more meaningful, useful, and fulfilling.

He makes a distinction between work (“anything that you do with intention”) and a job (“work that you do with the intention of being paid”), and argues that the former is intrinsically valuable but the latter is not. This distinction, however, does not actually eliminate the value of employment, in part because most of us actually do need to earn a living. Radclyffe resolves this dilemma by promoting an incredibly naïve vision of the future. He argues that, if we would only step aside and let machines perform all of the labor that does not absolutely require “human touch” (a term that he doesn’t clarify but suggests covers a rather short list), the products and services produced by machines would be free. That’s right—free! Here’s a direct quote:

If you eliminated human labour from the equation of any particular product or service, it would become free. In the past, we were faced with scarcity, but imagine an abundance economy. What little cost that remains would be eliminated by the market and by competition if we could make those industries that were essential for our survival ones with a minimum of human labour…If we encourage and not resist the pace of change, particularly in essential industries, the very goods and services that we all need to survive could be provided for free.

When I heard these words, my jaw hit the floor. “Are you nuts?” I thought. Radclyffe’s talk is marketing, not a realistic vision of the future. To buy into this, you must accept two premises: 1) products and services can be produced by machines without cost, and 2) the providers of these products and services will provide them for free. Neither premise is believable. Even if technologies eliminated all costs, which they cannot, the corporations that owned them would never give their products and services away for free.

Radclyffe goes on to argue that, not only could we live more fulfilling lives if the need for human labor were eliminated, but the products and services created by machines, unlike our products and services today, would function ideally. He illustrates this claim with the example of food production. If humans ceased to be involved in food production, he suggests, we could easily feed the entire world using a system of agriculture that was virtually flawless. Given our experience with agribusiness today, can you imagine the huge corporate owners of robotically automated food production, untethered from human labor, using sustainable practices that protected the environment to produce food that provided optimal nutrition for humans? I cannot.

I’ve spent the last 35 years helping people derive value from data using information technologies, and I’m responsible for several technological innovations in the field of data visualization. As an experienced technologist, I hold realistic expectations of technologies, not pie-in-the-sky visions of bliss. I have a passionate love/hate relationship with technologies. I love them when they’re needed and work well, but I hate them when they’re used to do what we should do ourselves or when they work poorly. Many technologies now exist that we would be better off without, and many technologies, especially information technologies, work abysmally. For some strange reason we have learned to give information technologies a pass, tolerating poor quality in these devices that we would never tolerate elsewhere.

Technologies, including robotics, automation, and artificial intelligence, will have an important role to play in our future. They will only play that role well, however, if we approach them thoughtfully and hold them to high standards of ethics and performance. If we ever do create a world that borders on utopia, technologies will no doubt assist, but they will not be the cause. This will come to pass primarily because we’ve progressed as human beings. To progress, we need to look inward and work hard to become our best selves. Technologies will not save us. To survive and flourish, we will need to be our own saviors.

Take care,

There’s Nothing Mere About Semantics

December 13th, 2017

Disagreements and confusion are often characterized as mere matters of semantics. There is nothing “mere” about semantics, however. Differences that are based in semantics can be insidious, for we can differ semantically without even realizing it. It is our shared understanding of word meanings that enables us to communicate. Unfortunately, our failure to define our terms clearly lies at the root of countless misunderstandings and a world of confusion.

Language requires definitions. Definitions, and how they vary depending on context, are central to semantics. We cannot communicate effectively unless those to whom we speak understand how we define our terms. Even in particular fields of study and practice, such as my field of data visualization, practitioners often fail to define even the field’s core terms in ways that are shared. This leads to failed discussions, a great deal of confusion, and harm to the field.

“Dashboard” has been one of the most confusing terms in data visualization since it came into common use about 15 years ago. If you’re familiar with my work, you know that I’ve lamented this problem and worked diligently to resolve it. In 2004, I wrote an article titled “Dashboard Confusion” that offered a working definition of the term. Here’s the definition that appeared in that article:

A dashboard is a visual display of the most important information needed to achieve one or more objectives that has been consolidated on a single computer screen so it can be monitored at a glance.

Over the years, I refined my original definition in various ways to create greater clarity and specificity. In my Dashboard Design course, in addition to the definition above, I eventually began to share the following revised definition as well:

A dashboard is a predominantly visual information display that people use to rapidly monitor current conditions that require a timely response to fulfill a specific role.

Primarily, I revised my original definition to emphasize that the information most in need of a dashboard—a rapid-monitoring display—is that which requires a timely response. Knowing what to display on a dashboard, rather than in other forms of information display, such as monthly reports, is one of the fundamental challenges of dashboard design.

Despite my steadfast efforts to promote clear guidelines for dashboard design, confusion persists because of the diverse and conflicting ways in which people define the term, some of which are downright nonsensical.

When Tableau Software first added the ability to combine multiple charts on a single screen in their product, I encouraged them to call it something other than a dashboard, knowing that calling it a dashboard would contribute to the confusion. The folks at Tableau couldn’t resist, however, because the term “dashboard” was popular and therefore useful for marketing and sales. Unfortunately, if you call any display that combines multiple charts for whatever reason a dashboard, you can say relatively little about effective design practices. This is because designs, to be effective, must vary significantly based on how and for what purpose the information is used. For example, how we should design a display that’s used for rapid monitoring—what I call a dashboard—is different in many ways from how we should design a display that’s used for exploratory data analysis.

To illustrate the ongoing prevalence of this problem, we don’t need to look any further than the most recent book of significance that’s been written about dashboards: The Big Book of Dashboards, by Steve Wexler, Jeffrey Shaffer, and Andy Cotgreave. The fact that all three authors are avid users and advocates of Tableau Software is reflected in their definition of a dashboard and in the examples of so-called dashboards that appear in the book. These examples share nothing in common other than the fact that they include multiple charts.

When one of the authors told me about his plans for the book as he and his co-authors were just beginning to collect examples, I strongly advised that they define the term dashboard clearly and only include examples that fit that definition. They did include a definition in the book, but what they came up with did not address my concern. They apparently wanted their definition to describe something in particular—monitoring—but the free-ranging scope of their examples prevented them from doing so exclusively. Given this challenge, they wrote the following definition:

A dashboard is a visual display of data used to monitor conditions and/or facilitate understanding.

Do you see the problem? Stating that a dashboard is used for monitoring conditions is specific. So far, so good. Had they completed the sentence with “and facilitate understanding,” the definition would have remained specific, but they didn’t. The problem is their inclusion of the hybrid conjunction: “and/or.” Because of the “and/or,” according to their definition a dashboard is any visual display whatsoever, so long as it supports monitoring or facilitates understanding. In other words, any display that 1) supports monitoring but doesn’t facilitate understanding, 2) facilitates understanding but doesn’t support monitoring, or 3) both supports monitoring and facilitates understanding, is a dashboard. Monitoring displays, analytical displays, simple lookup reports, even infographics, are all dashboards, as long as they either support monitoring or facilitate understanding. As such, the definition is all-inclusive to the point of uselessness.
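The hollowness of the “and/or” can be made concrete with a small sketch. The display names and their properties below are my own hypothetical examples, not taken from the book:

```python
# Hypothetical displays, each tagged with
# (supports_monitoring, facilitates_understanding).
# Only the first is a rapid-monitoring display.
displays = {
    "live operations monitor":   (True,  True),
    "exploratory analysis view": (False, True),
    "simple lookup report":      (False, True),
    "infographic":               (False, True),
}

def is_dashboard_and_or(monitoring, understanding):
    # The book's definition: monitoring and/or understanding,
    # which is logically just "or".
    return monitoring or understanding

def is_dashboard_monitoring(monitoring, understanding):
    # A stricter reading: a dashboard must support monitoring.
    return monitoring

and_or_count = sum(is_dashboard_and_or(*v) for v in displays.values())
strict_count = sum(is_dashboard_monitoring(*v) for v in displays.values())
# The "and/or" reading admits all four displays; the stricter one, only one.
```

Under the “and/or” reading, every display that does anything informative qualifies as a dashboard, which is precisely why the definition says so little.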

Only 2 of the 28 examples of displays that appear in the book qualify as rapid-monitoring displays. The other 26 might be useful for facilitating understanding, but by including displays that share nothing in common except that they are all visual and include multiple charts, the authors undermined their own ability to teach anything that is specific to dashboard design. They provided useful bits of advice in the book, but they also added to the confusion that exists about dashboards and dashboard design.

In all disciplines, and indeed in all aspects of life, we need clarity in communication. As such, we need clearly defined terms. Using terms loosely creates confusion. It’s not just a matter of semantics. Semantics matter.

Take care,

New Book: Big Data, Big Dupe

December 6th, 2017

I’ve written a new book, titled Big Data, Big Dupe, which will be published on February 1, 2018.

As the title suggests, it is an exposé on Big Data—one that is long overdue. To give you an idea of the content, here’s the text that will appear on the book’s back cover:

Big Data, Big Dupe is a little book about a big bunch of nonsense. The story of David and Goliath inspires us to hope that something little, when armed with truth, can topple something big that is a lie. This is the author’s hope. While others have written about the dangers of Big Data, Stephen Few reveals the deceit that underlies its illusory nature. If “data is the new oil,” Big Data is the new snake oil. It isn’t real. It’s a marketing campaign that has distracted us for years from the real and important work of deriving value from data.

Here’s the table of contents:

As you can see, unlike my four other books, this is not about data visualization, but it is definitely relevant to all of us who are involved in data sensemaking. If the nonsense of Big Data is making your work difficult and hurting your organization, this is a book that you might want to leave on the desks of your CEO and CIO. It’s short enough that they might actually read it.

Big Data, Big Dupe is now available for pre-order.

Take care,

Researchers — Share Your Data!

November 13th, 2017

One of the most popular shows in the early years of television, hosted by Art Linkletter, included a segment called “Kids Say the Darndest Things.” Linkletter would have conversations with young children who could be counted on to say things that adults found entertaining. I’ve experienced my own version of this in recent years that could be described as “Researchers say the darndest things.” My conversations with the authors of data visualization research studies have often featured shocking statements that would be amusing if they weren’t so potentially harmful.

The most recent example occurred in email correspondence with the lead author of a study titled “Evaluating the Impact of Binning 2D Scalar Fields.” I’m currently working on a newsletter article about binned versus continuous color scales in data visualization, so this paper interested me. After reading the paper, however, I had a few questions, so I contacted the author. One of my requests was, “I would like to see the full data set that you collected during the experiment.” Here’s the response that I received from the paper’s author: “In psychology, we do not share data sets but the full analyses are available in the supplementary materials.” You can imagine my shock and dismay. Researchers say the darndest things!
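For readers unfamiliar with the distinction at issue in that study: a binned color scale maps each continuous value to one of a few discrete classes, each with its own color, rather than to a smoothly varying color. The binning step can be sketched in a few lines; this is my own illustration using Python’s standard library, not the study’s actual method:

```python
from bisect import bisect_left

# Cut points that divide the continuous range [0, 1] into four bins.
edges = [0.25, 0.5, 0.75]

# Sample values from a continuous scalar field.
values = [0.05, 0.23, 0.48, 0.51, 0.97]

# Each value receives the index of the bin it falls into; a binned
# color scale would then assign one color per bin index, whereas a
# continuous scale would interpolate a color for every distinct value.
bins = [bisect_left(edges, v) for v in values]
```

Here `bins` comes out as `[0, 0, 1, 2, 3]`: five distinct values collapse into four color classes, which is the trade-off (less precision, easier reading of discrete levels) that such studies attempt to evaluate.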

Withholding the data that was collected in a research study—the data on which the published findings and claims were based—subverts the essential nature and goals of science. Published research studies should be accompanied by the data sets on which their findings were based—always. The data should be made readily available to anyone who is interested, just as “supplementary materials” are often made available.

Only good can result from sharing our research data. If we share our data, our results can be confirmed. If we share our data, errors in our work can be identified and corrected. If we share our data, science can progress.

Empirical research is based on data. We make observations, usually in the form of measurements, which serve as the data sets on which our findings are based. Only by reviewing our data can the validity of empirical research be confirmed or denied by the research community. Only by sharing our data can questions about our findings be pursued by those who are interested. Refusing to share our data is the antithesis of science.

The author’s claim that, “In psychology, we do not share our data” is false. Psychology researchers do not have a “do not share your data” policy. I’m astounded that the author thought that I’d buy this absurd claim. What is true, however, is that, even though there is no policy against sharing research data, it usually isn’t shared. On many occasions this is not an overt act of suppression but a simple act of laziness. The data files that researchers use are often messy, and they don’t want the bother of structuring and labeling those files in a manner that would make them useful if shared. On more than one occasion I have requested data files only to be told that it would take too much time to put them into a form that could be shared. This response always makes me wonder whether the messiness of those files might have caused the researchers themselves to make errors during their analysis of the data. When I told a respected psychology researcher friend of mine about the “In psychology, we don’t share our data” response that I received from the study’s author, he told me, “In my experience, extreme protectiveness about data tends to correlate with work that is not stellar in quality.” I suspect that this is true.
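Structuring data for sharing need not be onerous. A tidy layout (one row per observation, one labeled column per variable) takes only a few lines to produce. Here is a minimal sketch, with hypothetical file and column names of my own invention:

```python
import csv

# Hypothetical per-trial records from an experiment (the names and
# values here are invented for illustration, not from any real study).
trials = [
    {"participant": 1, "condition": "binned",     "response_ms": 812, "correct": 1},
    {"participant": 1, "condition": "continuous", "response_ms": 944, "correct": 0},
    {"participant": 2, "condition": "binned",     "response_ms": 780, "correct": 1},
]

# Write a tidy CSV: one labeled column per variable, one row per trial.
with open("study_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(trials[0]))
    writer.writeheader()
    writer.writerows(trials)
```

A file like this, accompanied by a short description of each column, is all that most reviewers or readers would need to check calculations and rerun analyses.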

If you can’t make your research data available, either on some public medium (e.g., accessible as a download from a web page) or upon request, you’d better have a really good excuse. You could try the old standby “My dog ate it,” but it probably won’t work any better than it did when you were in elementary school. If your excuse is, “After doing my analysis and writing my paper, I somehow misplaced the data,” the powers that be (e.g., your university or the publication that made your study public) should respond by saying, “Do it over.”

If I could set the standards for research, I would require that the data be examined during the peer review process. It isn’t necessary that every reviewer examine the data, but at least one who is qualified to detect errors should. Among other potential problems, calculations performed on the data should be checked, and it should be determined whether statistics have been properly used. Checking the data should be fundamental to the peer review process. If this were done, some of the shoddy research that wastes our time each year with false claims would remain unpublished. I realize that this would complicate the process. Well, guess what: good research takes time and effort. Doing it well is hard work.

If you want to keep your data private, then do the world a favor and keep your research private as well. It isn’t valid research unless your findings are subject to review, and your findings cannot be fully reviewed without the data.

Take care,

Design with a Purpose in Mind

October 24th, 2017

The merits of something’s design cannot be determined without first understanding the purpose for which it is used and the nature of those who will use it, including their abilities. When looking at the photo below, you no doubt see two poorly designed chairs. The seats are far too low and the backs are far too tall for comfort. Imagine sitting in one of these ill-proportioned chairs.

If these are chairs, they are poorly designed for all but humans of extremely odd proportions, but they are not chairs. Rather, they are kneelers, used for prayer. Here’s a more ornate example:

And here’s one that looks more like those that are typically found in churches:

Not only are we not able to evaluate the merits of something’s design without first understanding its use and users, we cannot design something ourselves without first understanding these things. This is definitely true of data visualizations. We must always begin the design process with questions such as these:

  • For whom is this data visualization being designed?
  • What is the audience’s experience/expertise in viewing data visualizations?
  • What knowledge should the audience acquire when viewing this data visualization?

The point that I’m making should be obvious to anyone who’s involved in data visualization. Sadly, it is not.

Data visualizations should not be designed on a whim. Based on the knowledge derived so far from the science of data visualization, if you understand your purpose and audience completely, you can determine the ideal way to design a data visualization. You can only determine this ideal design, however, to the extent that you know the science of data visualization and have developed the skills necessary to apply it. Our knowledge of data visualization best practices will change and improve as the science advances, and when it does our designs will change as well. In the meantime, we should understand the science and apply the practices that it informs with skill. None of us do this perfectly—we make mistakes—but we should strive to do it better with each new attempt. Data visualization is a craft informed by science, not an art driven by creative whim.

Take care,

Eye-Tracking Nonsense from Tableau

October 9th, 2017

Don’t trust everything you read. Surely you know this already. What you might not know is that you should be especially wary when people call what they’ve written a “research study.” I was prompted to issue this warning by a June 29, 2017 entry in Tableau’s blog titled “Eye-tracking study: 5 key learnings for data designers everywhere”. The “study” was done at Tableau Conference 2016 by the Tableau Research and Design team in “real-time…with conference attendees.” If Tableau wishes to call this research, then I must qualify it as bad research. It produced no reliable or useful findings. Rather than a research study, it would be more appropriate to call this “someone having fun with an eye tracker.” (Note: I’m basing my critique of this study solely on the article that appears in Tableau’s blog. I could not find any other information about it and no contact information was provided in the article. I requested contact information by posting a comment in Tableau’s blog in response to the article, but my request was ignored.)

Research studies have a goal in mind—or at least they should. They attempt to learn something useful. According to the article, the goal of this study was to answer the question, “Can we predict where people look when exposed to a dashboard they’ve never seen before?” Furthermore, “Translated into a customer’s voice: how do I, as a data analyst, design visually compelling dashboards?” What is the point of tracking where people look when looking at so-called dashboards (i.e., in Tableau’s terms, any screen that exhibits multiple charts) that they haven’t seen before and have no actual interest in using? None. This is evidenced by the fact that none of the “5 key learnings” are reliable or useful for designing actual dashboards, unless you define a dashboard as an information display that people who have no obvious interest in the data look at once, for no particular purpose. Only attempts to visually compel people to examine and interact with information in ways that lead to useful understanding—that is, in ways that actually inform—are relevant to information designers. What were participants asked to do with the dashboard? According to the article,

We didn’t give participants a task for this Tableau Labs activity, but that doesn’t mean our participants were not goal-directed. Humans are “meaning-making” animals; we can’t stop ourselves from finding a purpose. Every person looking at one of these dashboards had a task, we just didn’t know what it was. Perhaps it was “look at all the crazy stuff people create with Tableau?!”

Despite the speculations above, we actually have a fairly good idea of the task that participants performed, which was to quickly get familiar with an unknown, never-before-seen display. Where someone’s eyes look when seeing a screen of information for the first time is not where their eyes will look when they are using that screen to ingest and understand information. This is not how research studies are conducted. I shouldn’t have to say this. This is pseudo-science.

When participants at the conference were asked to look at so-called dashboards for the first time, which were not relevant to them, and to do so for an unknown purpose (or lack thereof), what did eye-tracking discover? Here’s a list of the “5 key learnings”:

  1. “(BIG) Numbers matter”
  2. “Repetition fatigue”
  3. “Humans like humans”
  4. “Guide by contrast”
  5. “Form is part of function”

(BIG) Numbers matter

The observation behind the claim that “BIG Numbers matter” was that people tend to look at huge numbers that stand alone on the screen. Actually, people tend to look at anything that is extraordinarily big, not just numbers. In a sea of small numbers, big numbers stand out. What this tells us is actually expressed separately as key learning number 4: “Guide by contrast.” In other words, things that look different from the norm catch our attention. This is not a key learning. This is well known. Here’s the example that appears in the article:

Big Numbers

Each “key learning” was illustrated in the article by a video. In all of the videos, sections of the screen that appear light and therefore visible, as opposed to darkened sections, were sections that received the attention—the lighter the section the more attention it received. The big numbers in this example appear at the top, but even though they appear in the most visually prominent portion of the screen, apparently, they did not garner more attention than the bar graphs that appear in the section below the numbers. If the attention-grabbing character of big numbers was revealed in this study, this particular screen does not provide clear evidence to illustrate this finding.

In response to this claim, we should be asking the question, “Is it useful to draw people’s attention to big numbers on a dashboard?” Typically, it is not, because a number by itself without context provides little information, certainly not enough to fulfill any actual tasks that people might use dashboards to perform. Nevertheless, the research team advises, “If you have an important number, make it big.” I would advise, if you have an important piece of information, express it in a way that not only catches your audience’s attention but does so in a way that is informative.

Repetition fatigue

Apparently, when people look at a dashboard that they’ve never seen before that isn’t relevant to them, and do so for no particular purpose, if the same type of chart appears multiple times, they get bored after they’ve examined the first chart. If you’re not actually trying to understand and use the information on the dashboard but merely scanning it for visual appeal, then yes, you probably won’t bother examining multiple charts that look the same. This isn’t how actual dashboards function, however. People look at dashboards, no matter how you define the term, to learn something, not just for visual entertainment. When you have a goal in mind when examining a dashboard, the kind of “repetition fatigue” that the researchers warn against probably does not come into play.

We should always select the form of display that best suits the data and its use. We should never arbitrarily switch to a different type of chart out of concern that people won’t look at more than one chart of a particular type. Doing so would render the dashboard less effective.

Here’s the example that appears in the article to feature this claim:

Repetition Fatigue

Even if I cared about this dashboard, I might not bother looking at more than one of these particular charts because, from what I can tell, none of them appear to be informative.

Humans like humans

Yes, people are attracted to people. Faces, in particular, grab our attention. According to the article, “if a human or human-like figure is present, it’ll get attention.” And what is the point of this key learning? Unless the human figure itself communicates data in an effective way, placing one on a dashboard adds no value. Also, if the point is to get someone to look at data when it needs attention, you cannot suddenly place human figures on the dashboard to achieve this effect.

Here’s the example that appears in the article:

People Like Humans

This study did not actually demonstrate this claim. It doesn’t indicate that people’s attention is necessarily drawn to human figures in particular. We know that people’s attention is drawn to faces, but this study might not have indicated anything more than the fact that an illustration of any recognizable physical form in the midst of an information display—something that looks quite different from the rest of the dashboard—catches people’s attention.

I’ve seen this particular screen of information before. It presents workers’ compensation information. The human figure functions as a heatmap to show where on the body injuries were occurring. The human figure wasn’t there to attract attention; it was there to convey information. It certainly wasn’t there because “humans like humans.”

Guide by contrast

We’ve known for ages that contrast—either the difference between the background and the foreground or making something appear different from how it usually does—grabs attention, if not overdone. This is not a key learning. Here’s how the finding is described in the article:

Areas of high visual contrast acted as guideposts throughout a dashboard. During the early viewing sequence, the eyes tended to jump from one high contrast element to the next. Almost like a kid’s dot-to-dot drawing, you can use high contrast elements to move visual attention around your dashboard. That being said, it’s notable that high contrast must be used judiciously. If used sparingly, high contrast elements will construct a logical path. Used abundantly, high contrast elements could create a messy and visually overwhelming dashboard.

And here’s the example that was shown to illustrate this:

Scanning Sequence

Although it wasn’t explained, I assume that this video displays the sequence of focal points that a single participant exhibited. It certainly does not show the particular sequence of glances that was exhibited by all participants. Even if the researchers explained how to interpret this video, it wouldn’t tell us how to use contrast to lead viewers’ eyes through a dashboard in a particular sequence. Discovering how to do this using contrast would indeed be a key learning.

Form is part of function

The researchers complete their list of learnings with a final bit of information that is well known. Yes, the form that we give an information display contributes to its functionality. Here are the insights that the researchers share with us:

All dashboards have a form (triangular, grid, columnar) and the eyes follow this form. This result was both surprising and not surprising at all. Humans are information seekers: when we look at something for the first time, we want to get information from it. So, we look directly at the information (and don’t look at areas with no information). What’s important to note is the design freedom this gives an author. You don’t need to conform to rules like “put anything important in the upper left hand corner.” Instead, you should be aware of the physical form of your dashboard and use your space accordingly.

Knowing that “form is a part of function” actually tells us the opposite of what the researchers claim. It does not grant us “design freedom” and encourage us to ignore well-known principles and practices of design. Quite the opposite. Understanding how form contributes to function directs us to design information displays in particular ways that are most effective. In other words, it constrains our design choices to those that work. Contrary to the researchers’ statement, placing something that’s always important in the upper-left corner of the dashboard, all else being equal, is a good practice, for this is where people tend to look first. Ironically, if you review the first four eye-tracking videos that appear in the article, they seem to confirm this. Only the video pictured below is an exception, but this is because nothing whatsoever appears to the left of the centered title.

The example that was provided to illustrate this learning does not clarify it in the least.

Form as Function

The researchers were trying to be provocative, suggesting that we should ignore well-established findings of prior research. After all, how could research done in the past by dedicated scientists compete with this amazing eye-tracking study that was done at a Tableau conference?

The true key learning that we should take from this so-called study is what I led off with: “Don’t trust everything you read.” I know some talented researchers who work for Tableau. This study was not done by them. My guess is that it was done by the marketing department.

Take care,

Signature

Data Communicators – People Who Aren’t Interested and Don’t Care Are Not Your Audience

October 3rd, 2017

This week, I am enjoying the pleasure of my friend Alberto Cairo’s company. Alberto traveled to Portland, Oregon, to speak for two events, and I’m serving as innkeeper and chauffeur while he’s here. Last night an interesting topic arose over dinner. Several interesting topics, actually, but I’d like to share one in particular. Alberto and I both found ourselves bemoaning the assumption of too many data communicators that their audience isn’t interested in the data. This assumption leads to a great deal of poorly designed data displays.

The particular example that prompted our discussion was the assumption that people are unwilling to read brief instructions that explain how to interpret a chart. This assumption leads many data communicators to present data in ways that aren’t particularly informative out of concern that the better form of display would require a bit of instruction. What a travesty!

When we prepare data communications, we should almost always design them for people who are interested in the data. Dumbing the information down or adding entertaining effects that make the data difficult to interpret or comprehend is never justified.

Over the years I have had many debates with people who defend severe compromises in design effectiveness because they believe that their audience must, above and before all, be entertained. There is a place for entertainment. I incorporate a great deal of humor in my classes and lectures. I do so, however, in ways that don’t detract from the learning experience by compromising the content. Humor, used skillfully, can enhance the learning experience. Similarly, data can be displayed in visually engaging ways that enhance the degree to which the data informs, but this requires skill. Merely dressing up the data or adding meaningless and distracting visual effects requires no skill whatsoever, and it results in harm.

Personally, I have never assumed that my audience wasn’t interested in the data that I was presenting to them. I wouldn’t bother presenting data to people who weren’t interested and didn’t care. What would be the point? I match the content of my communications to the needs and interests of the audience. I don’t speak to audiences who lack needs and interests that I’m well-suited to address.

When we present information to people who are interested in it, we can focus on communicating as clearly, accurately, and fully as possible. If you have something to communicate that people care about, you are responsible for doing it well. If your audience isn’t interested in data that you’re communicating, perhaps you have the wrong audience.

Take care,

Signature

Data Is Not Beautiful

August 16th, 2017

Despite the rhetoric of recent years, data is neither beautiful nor ugly. Data is data; it merely describes what is and has no aesthetic dimension. The world that’s revealed in data can be breathtakingly beautiful or soul-crushingly ugly, but data itself is neither.

We can respond to data in ways that create beauty, justice, and wellbeing. We can do this, in part, through both data visualization and data art. Though data visualization and data art are constructed from the same raw materials (i.e., data), their methods differ. What does not differ, however, is their ultimate purpose to present or evoke meaning. When I visualize data, I do it to bring specific meanings to light or to make it possible for others to do that on their own. Similarly, when skilled data artists express data, they do it to evoke a meaningful experience. Even if the data artist’s meaning is less specific than mine as a data visualizer, the artist intends for the viewer to experience meaning and often emotion as well.

I appreciate good data art just as I appreciate good art of all types. What I cannot stomach is meaningless visual drivel that calls itself data art or, even worse, calls itself data visualization. I stridently object to the work of lazy, unskilled creators of meaningless, difficult-to-read, or misleading data displays. I’m referring to visualizations that fail to display data in ways that promote clear and true understanding. Many data visualizations that are labeled “beautiful” are anything but. Instead, they pander to the base interests of those who seek superficial, effortless pleasure rather than understanding, which always involves effort. There might be occasions when meaningless pleasure is useful, but not when data is being displayed. Data can potentially inform. We should never squander this potential.

Take care,

Signature

Something Going Up Is Not Always Good

August 7th, 2017

Even though our unique ability to deal with complexity propelled humans to the top of the evolutionary heap, we still crave simplistic (i.e., overly simple) explanations. I promote the value of simplicity in my work, but never simplicity that sacrifices truth. Simple things can and should be explained simply. Complex things can and should be explained as simply as possible, but never in a way that disregards or misrepresents their complexity.

When people hold simplistic assumptions about data, we should educate them, not accommodate their ignorance. One such assumption is that, in a time series, values going up are always good and values going down are always bad. I find it odd that people tend to interpret data in this manner, because no one interprets life in this manner. While we consider it good when our incomes go up or our health improves, we have no trouble recognizing that the cost of food going up or increases in suffering are bad. Why would we interpret data in this naive manner?

How do you deal with the commonplace exceptions to the “going up is good” assumption, such as the variance between actual and budgeted expenses? When considering expenses, being over budget is usually considered bad. Through my years of teaching data visualization courses, participants have often suggested that this assumption should be accommodated by reversing the quantitative scale, placing the negative values (i.e., under budget) above and the positive values (i.e., over budget) below. Is this an appropriate solution? Representing negative values as going up creates a new source of confusion, and does so unnecessarily.
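To make the point concrete, here is a minimal sketch of the conventional arrangement, using matplotlib and entirely hypothetical figures. Positive variance (over budget) extends upward on a standard scale; a clear title and color, rather than a reversed axis, make it plain that up is bad in this case.

```python
# A minimal sketch (hypothetical data) of an expense-variance chart
# that keeps the conventional scale: positive (over budget) points up,
# negative (under budget) points down, with color signaling good vs. bad.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
budget = [100, 100, 120, 120]
actual = [95, 110, 118, 130]

# Variance: actual minus budget, so being over budget is positive.
variance = [a - b for a, b in zip(actual, budget)]

fig, ax = plt.subplots(figsize=(5, 3))
# Red marks over budget (bad); gray marks under budget.
colors = ["firebrick" if v > 0 else "gray" for v in variance]
ax.bar(months, variance, color=colors)
ax.axhline(0, color="black", linewidth=0.8)  # zero reference line
ax.set_ylabel("Expense variance (actual minus budget)")
ax.set_title("Expenses vs. budget: above the line is over budget")
fig.tight_layout()
fig.savefig("variance.png")
```

A brief title or annotation like this is all the explanation most viewers need, which is exactly the point: a moment of instruction removes the confusion without distorting the scale.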

Rather than accommodating ignorance by twisting data into awkward arrangements, why not correct the error instead? It is easy to explain that things going up aren’t always good in a way that everyone can understand. When specific cases of ignorance can be banished so quickly, easily, and permanently, why perpetuate it?

Data sensemaking and communication fundamentally seek to replace ignorance with understanding. Everything that we do in this venture should be done with this in mind. When we accommodate ignorance, we condone and encourage it. Doing so undermines the integrity of our work and the outcomes that we should be working hard to achieve.

Take care,

Signature