Information Design Watch
December 14, 2011, 2:06 pm
By Henry Woodbury
(via Ann Althouse)
September 3, 2011, 10:25 pm
By Henry Woodbury
1. Industrial designer Dieter Rams’s work for Braun is highlighted in a portfolio that purports to describe 10 principles of modern design. It is an honest appraisal. It includes the idiotic geared mixer.
2. Blogger Ann Althouse reduces the reductive aesthetic:
Oddly, I came away feeling that the 10 principles were all the same, and if that principle was simple functionality, then to make that one thing into 10 is a violation of the principle itself. But then Rams wasn’t purporting to dictate the principles of website content, so there really is no paradox.
3. Could you have one principle with ten examples and still get the page-views? Lists are so addictive.
July 14, 2011, 10:23 pm
By Henry Woodbury
Scale is a kind of beauty. Here Kai Krause maps out the scale of the continent of Africa in comparison to a selection of the usual suspects:
Click through for full-size map, more data, and editorial content (whose thesis I find entirely unconvincing).
I’m more intrigued by the effectiveness of the visualization as an informational device. The juxtaposition is what matters, not the “true size”. If you mapped the true size of Canada, the United States, Mexico, and Central America against the continent of North America, the result would be entirely pointless.
What makes Krause’s map intriguing is the contrast between large countries and a continent composed mostly of small ones. To make a North American map of equivalent interest I would replace the large land masses of Canada, the United States, and Mexico with numerous small countries (to reverse the conceit, we could replace Central America with Madagascar, swapping a cluster of small countries for one large one). Thus, we learn about the size of the selected countries as well as the size of the continent.
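The arithmetic behind the juxtaposition is easy to check. The figures below are commonly cited approximate land areas; the selection of countries is my own illustration, not a transcription of Krause’s map:

```python
# Approximate land areas in millions of km^2 (commonly cited, rounded
# figures; the selection is illustrative, not taken from Krause's map).
AFRICA = 30.4
countries = {
    "United States": 9.8,
    "China": 9.6,
    "India": 3.3,
    "France": 0.6,
    "Germany": 0.4,
    "Spain": 0.5,
    "Japan": 0.4,
    "United Kingdom": 0.2,
    "Italy": 0.3,
}

total = sum(countries.values())
print(f"Selected countries: {total:.1f}M km^2; Africa: {AFRICA}M km^2")
print("They all fit:", total < AFRICA)
```

The sum alone makes the point numerically, but it is the drawn outlines that make it visible at a glance; that is the map's real trick.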
February 6, 2011, 8:34 pm
By Henry Woodbury
Given multiple documents, readers will make more judgments based on typography as they find it harder to make judgments based on substance.
On one level this is pretty reductive. A situation where all other considerations are equal except typography (or design, for that matter) never exists. But just because a reader starts reading an article or brief doesn’t mean the reader will finish it. Butterick writes:
I believe that most readers are looking for reasons to stop reading. Not because they’re malicious or aloof. They’re just being rational. If readers have other demands on their time, why should they pay any more attention than they absolutely must? Readers are always looking for the exit.
It’s an information design problem: How do you move a reader along in the flow?
Next question: Is legal size really necessary?
June 3, 2010, 11:08 am
By Henry Woodbury
Last week I blogged about a Harvard Business Review article on the inherent biases in visualization. Visual information makes people overconfident of outcomes.
Today the New York Times offers a perfect example. In the debate around U.S. health care overhaul, the president’s budget director Peter Orszag argued that savings could be found by reforming the current system:
Mr. Orszag displayed maps produced by Dartmouth researchers that appeared to show where the waste in the system could be found. Beige meant hospitals and regions that offered good, efficient care; chocolate meant bad and inefficient.
The maps made reform seem relatively easy to many in Congress, some of whom demanded the administration simply trim the money Medicare pays to hospitals and doctors in the brown zones. The administration promised to seriously consider doing just that. [my emphasis]
Unfortunately, the maps don’t show what they seem to show. While they show cost of care (a very specific kind of care it should be noted), they don’t show quality of care. Nor do the maps show anything about the demographics of the patients being cared for.
The Times compares the Dartmouth map (on the left) to Medicare’s own analysis of hospital quality (on the right) to show the disconnect. However, the Medicare map raises questions of its own. To start with, it shows a suspicious correspondence to U.S. population density.
Perhaps quality of care tracks population density: higher density creates demand for more specialists, which leads to better diagnoses. I’m sure I’m not the first person to think of this. Before anyone draws another map, let’s work on better analysis.
May 27, 2010, 11:15 am
By Henry Woodbury
From the Harvard Business Review comes a cautionary tale of bias and visualization. Visual information can make people overly confident in predicting outcomes. In the study described in the article, viewers who watched a computer animation of driver error “were more likely to say they could see a serious accident coming than those who actually saw it occur and then were asked if they had seen it coming.”
The way human brains process the sight of movement appears to be one reason for this outcome. The visceral reading of trajectory events — such as an animation of moving cars — creates an anticipatory judgment that is highly persuasive to higher brain functions.
Also important is the fact that every visualization incorporates a point of view, one that is all the more convincing for its visual immediacy:
The information can be conveyed with certain emphases, shown from certain angles, slowed down, or enlarged. (In a sense, all this is true of text as well, but with subtler effects.) Animations can whitewash the guesswork and assumptions that go into interpreting reconstructions. By creating a picture of one possibility, they make others seem less likely, even if they’re not. (my emphasis)
In essence, this is what we do on purpose. Whether for marketing, analysis, or scientific reportage, we quite explicitly present the story of the strongest possibility (which may well be that there are multiple possibilities). We do it ethically; we rely upon validated data to tell a story and honor the integrity of that data as we work. The Harvard study cautions us not to let our visual tools — especially our analytical tools — persuade us too easily of what the real story is.
February 2, 2010, 1:21 pm
By Henry Woodbury
An interesting article on “cognitive fluency” offers this great (ironic) infographic:
Reporter Drake Bennett leads with the fact that “shares in companies with easy-to-pronounce names do indeed significantly outperform those with hard-to-pronounce names.” He continues:
Other studies have shown that when presenting people with a factual statement, manipulations that make the statement easier to mentally process – even totally nonsubstantive changes like writing it in a cleaner font or making it rhyme or simply repeating it – can alter people’s judgment of the truth of the statement, along with their evaluation of the intelligence of the statement’s author (my emphasis).
However, the flip side of “easy equals true” (or “an instinctive preference for the familiar,” as Bennett defines the concept) is that to generate reflection or curiosity, you may need to make things less familiar. It’s a good thing we know how to do both.
October 9, 2009, 2:12 pm
By Lisa Agustin
TED Blog just posted a followup interview with neuroscientist/artist Beau Lotto, whose specialty is studying the relationship between your brain and what you see. According to Lotto, “The light that falls onto your eyes is meaningless.” In other words, light falling on a surface by itself does not convey meaning. Rather, what we see is a product of history, environment, and observation. Lotto’s 2009 TED Talk, “Optical Illusions Show How We See” demonstrates that optical illusions are not visual tricks so much as a means for making sense of the world based on our accumulated knowledge:
Illusion is more a state of the world than it is a state of mind. What’s being presented to you is an unusual situation. What you see is what would have been useful, given that situation in the past…The far more interesting question is not that “context matters” — not that we see illusions — but why we see them. When you see illusions, you’re entertaining two realities at the same time. You’re seeing one reality (two gray squares look different) but you also know another reality (that the gray squares are, in fact, physically the same).
Lotto’s comments provide good food for thought from an information design perspective. Information (visual or otherwise) has no inherent meaning until we view it through a lens that accounts for what the intended audience cares most about: their needs and goals, themselves a by-product of their experience, expectations, and environment.
To find out more, see Beau Lotto’s web site: http://www.lottolab.org/index.asp.
August 24, 2009, 8:52 pm
By Henry Woodbury
This is not a post about a beer gauge. It is a post about cognitive bias. To quote from the item itself: “[Jean] Piaget studied the tendency to focus attention on only one characteristic. In our case: beer height not volume!”
The gauge is the invention of engineer and physicist Chris Holloway. The Wall Street Journal Numbers Guy, Carl Bialik, explains:
Holloway has noticed that the typical pour in a pint glass is less than a pint. And since the widest part of the glass is at the top — nearly twice as wide as the bottom — leaving just the top half-inch of the glass unfilled costs the customer nearly 15% of the pint he’s paying for. So what may look trivial to bartenders and to drinkers, thanks to our tendency to focus on height rather than width when taking the measure of liquids, is a serious tavern injustice.
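Bialik’s “nearly 15%” figure is easy to sanity-check by modeling the glass as a conical frustum. The dimensions below are my assumptions about a typical American pint glass, not measurements taken from Holloway:

```python
import math

def frustum_volume(r_bottom, r_top, height):
    """Volume of a conical frustum (cubic inches if inputs are inches)."""
    return math.pi * height / 3 * (r_top**2 + r_top * r_bottom + r_bottom**2)

# Assumed pint-glass dimensions in inches: the top is nearly twice
# as wide as the bottom, per Bialik's description.
R_TOP, R_BOTTOM, HEIGHT = 1.625, 1.0625, 5.875

def radius_at(h):
    """Inner radius at height h, interpolated linearly from bottom to top."""
    return R_BOTTOM + (R_TOP - R_BOTTOM) * h / HEIGHT

# Volume held by the top half inch of the glass.
top_slice = frustum_volume(radius_at(HEIGHT - 0.5), R_TOP, 0.5)

PINT_CUBIC_INCHES = 28.875  # 16 US fluid ounces
print(f"Top half inch holds {top_slice:.1f} cu in "
      f"= {top_slice / PINT_CUBIC_INCHES:.0%} of a pint")  # about 14%
```

Because the glass is widest at the top, that last half inch holds roughly one seventh of a full pint, which is consistent with the “nearly 15%” claim.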
Readers of Edward R. Tufte will remember that one grave mistake in visualizing data is to show one-dimensional data with two-dimensional graphics:
There are considerable ambiguities in how people perceive a two-dimensional surface and then convert that perception into a one-dimensional number. Changes in physical area on the surface of a graphic do not reliably produce appropriately proportional changes in perceived areas. The problem is all the worse when the areas are tricked up into three dimensions. (The Visual Display of Quantitative Information, 2001, p. 71.)
Holloway had the reverse challenge — the problem people have perceiving differences in volume. The gauge turns three-dimensional data into a one-dimensional series. It is a portable liquid measure.
I will add that despite the elegance of Holloway’s concept, the gauge as is could use some design improvement. Holloway has elected to emphasize even numbers of ounces. I’m not sure this helps readability. There’s also no reason to have the 6 oz. label offset from the 6 oz. line. Just make the card a quarter inch longer or so. Nor is there reason for the key or the “Glass edge” and “Beer surface” labels. Instead, replace all of this extra text with a 16 oz. line aligned to the top of the glass that incorporates some minimal description. For example:
16 oz beer / 0% missing
15 oz / 7%
14 oz / 13%
April 22, 2009, 8:27 am
By Henry Woodbury
Seth Godin at Gel 2006 explains how “This is broken.” What is broken? Almost everything.
Including Napoleon’s March to Moscow.
Starting at 17:53, Godin buries Edward Tufte in order to praise him. Note that Godin doesn’t really bother with the graph itself, but rather Tufte’s promotion of it as “the best graph ever made.” Godin responds:
I think he’s completely out of his gourd and totally wrong!
If you need to spend 15 minutes studying a graph you might as well read the text underneath. Godin then backs off. Tufte’s promotion of Napoleon’s March, he says, is an example of something “broken on purpose”:
For the kind of person you want to reach — they want to read a complicated difficult to understand graph and get the satisfaction of figuring it out, because then they get it…. Sometimes the best thing to do is break it for the people you don’t care about and just make it work for the people you do.
Watch the rest of the talk as well. It’s a very funny, pointed critique of bad information and product design.
March 26, 2009, 2:46 pm
By Kim Looney
Our Creative Director noticed something peculiar about the photo of Tenoumer Crater in Mauritania taken January 24, 2008, found on boston.com (NASA, Jesse Allen, NASA/GSFC/METI/ERSDAC/JAROS, U.S./Japan ASTER Science Team). The crater didn’t look crater-like; it looked like a circle-shaped valley or a cookie-cutter impression in some dough. After some experiments he discovered that if the crater is rotated 180 degrees, it looks like a crater should. Is it the lighting? Do we presume that light by default comes from the top of a picture? When he placed a second crater on the screen that could be rotated 360 degrees, the interactions between the two began to get very interesting. The rotatable crater began to influence our perception of the stable crater. So we made an interactive movie to let you try for yourself. Sometimes you’ll need to look away from the screen to “flip” the image(s) after rotation. See what you experience!
November 12, 2007, 9:17 am
By Henry Woodbury
The Laboratory of Dale Purves MD at Duke University has a page of optical illusions and perceptual challenges. Interactive controls allow you to test the “illusion” part of each example while links to the empirical explanations describe why your brain interprets what it sees the way it does.
The website for San Francisco’s Exploratorium Museum of Science has a small gallery of similar illusions, with shorter explanations.
October 4, 2007, 1:08 pm
By Henry Woodbury
In an interview in Inside Higher Ed, economist Robert Frank discusses the problem of teaching the fundamental concepts of his discipline. Researchers found that students coming out of an introductory economics class scored worse on an applicable exam than those who had never taken any economics courses whatsoever. So Frank, with co-author Ben Bernanke, wrote a new standard text.
While economics is the pivot for the interview, Frank offers many insights about how people gather and use information:
The narrative theory of learning now tells us that information gets into the brain a lot more easily in some forms than others. You can get information into the student’s brain in the form of equations and graphs, yes, but it’s a lot of work to do that. If you can wrap the same ideas around stories, around narratives, they seem to slide into the brain without any effort at all. After all, we evolved as storytellers; that’s what we’re good at. That’s how we always exchanged ideas and information. And if a narrative has an actor, a plot, if it makes sense, then the brain stores it quite easily; you can pull it up for further processing without any effort; you can repeat the story to others. Those seem to be the steps that really make for active learning in the brain.
Then there’s this pithy definition of behavioral economics:
One of its founders, Amos Tversky, was a psychologist at Stanford. He liked to say his colleagues study artificial intelligence; he prefers to study natural stupidity — the cognitive errors people are prone to make. It’s not that we’re stupid, but we use heuristics, we use rules of thumb, and the heuristics work well enough on average across a broad range of circumstances, but unless you really understand the logic of weighing costs and benefits, it’s very easy to be fooled into making the wrong decision.
Sounds like usability research, no?
Frank is also author of The Economic Naturalist: In Search of Explanations for Everyday Enigmas and periodic essayist for the New York Times.
April 20, 2007, 12:32 pm
By Henry Woodbury
On a Mercator grid, artist Benjamin Edwards presents a Walmart projection: a world map that sizes nations by the number of goods they sell in Walmart.
The data was compiled in 2001 using a simple methodology:
Go to the nearest Wal-Mart from your present location. Inside each store, count as many objects as possible while noting their countries of origin.
To represent this data, Edwards roughly scales each country by percentage of the total product count, removes countries with zero results and places those remaining in approximate orientation. The result is crude but graphically effective.
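The scaling step follows a standard cartogram principle, one that echoes Tufte’s warning about area perception: to make a country’s drawn area proportional to its count, its linear dimensions must scale with the square root of its share. A sketch with invented counts (these numbers are hypothetical, not Edwards’s 2001 data):

```python
import math

# Hypothetical product counts by country of origin, invented for
# illustration only (not Edwards's 2001 survey data).
counts = {"China": 520, "USA": 310, "Mexico": 45, "Italy": 12}
total = sum(counts.values())

# Area should be proportional to share of the count, so each country's
# outline is scaled linearly by sqrt(share) before placement on the map.
for country, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    share = n / total
    print(f"{country}: {share:.1%} of items, linear scale {math.sqrt(share):.2f}")
```

Scaling linear dimensions by the share itself, rather than its square root, would square the intended effect and exaggerate the large countries; that is precisely the area-perception trap such maps need to avoid.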
But if you approach the map neutrally (elide the word “Walmart” from your brain), what does it mean? Compare the Walmart map to WorldMapper’s Total Population map.
Now my question is not “why is China so big?” but “why is India so small?” (And, “why Italy instead of France, Germany or Spain?”)
April 16, 2007, 2:15 pm
By Mac McBurney
In February, the Dynamic Diagrams staff made a field trip (some might say pilgrimage) to Edward Tufte’s day-long seminar, “Presenting Data and Information.” If you’ve ever heard of Edward Tufte, you have probably seen Napoleon’s March to Moscow, Charles Joseph Minard’s visual explanation of Napoleon’s disastrous attempt to conquer Russia in 1812.
Tufte says, “it may well be the best statistical graphic ever drawn.” The graphic appears repeatedly in Tufte’s books, posters and brochures. At the recent seminar, I realized that the image has become a de facto corporate logo for Tufte and Graphics Press. There, the graphic appeared on a sign directing participants from the hotel lobby to the upstairs lecture hall. It worked: Napoleon’s March quickly caught my eye and confirmed I was headed in the right direction.
Conventional wisdom v. six-variable masterpiece of information design
Because Napoleon’s March is so innovative, so lauded, so pervasive in Edwardtufteland and so emblematic of Tufte’s teachings, it was (I’m chagrined to admit) not easy for me to see that it undermines, rather than supports, the conventional view of the historical events. (Thanks to Piotr, creative director at d/D, for leading the way.)
Minard created his map to show the horrors of war. Tufte uses it to explain grand principles of data display. Both are successful, but Tufte misses an opportunity to emphasize just how powerful Minard’s graphic is. Tufte repeats the popular belief that “General Winter” defeated Napoleon’s army. I haven’t studied the history since high school, but this fits the image that sticks in my head: soldiers freezing to death.
In fact, according to Minard’s map, nearly three times as many French soldiers were lost (never mind the Russians) before the retreat and before the coldest weather. 90,000 died on the retreat, horrible to be sure, but 250,000 were lost before that. Only because the map follows Tufte’s grand principle number one, show the data, are we able to really question the conventional wisdom, ask useful questions and formulate alternate narratives. Now that I’m re-thinking my own understanding, I wonder why Tufte even mentions General Winter as the moral of the story.
In addition to temperature itself and the impending threat of winter, I suspect another factor strengthens the prevailing interpretation: recency bias. Only ten thousand French soldiers lived to tell the tale. They had just endured three months of immense suffering and witnessed the deaths of 90,000 comrades (90% of the retreating force). It’s hard to imagine their state of mind, but the previous summer was probably a distant memory.
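A quick tally of the figures as this post reads them off Minard’s map (round numbers, not precise transcriptions of the graphic):

```python
# Round figures as read off Minard's map in this post.
lost_before_retreat = 250_000
retreating_force = 100_000  # army size at the start of the retreat
survivors = 10_000          # soldiers who lived to tell the tale

lost_on_retreat = retreating_force - survivors  # 90,000
print("Lost on the retreat:", lost_on_retreat)
print("Lost before, as a multiple:",
      round(lost_before_retreat / lost_on_retreat, 1))  # "nearly three times"
print("Share of retreating force lost:",
      lost_on_retreat / retreating_force)  # 90%
```

The multiple of about 2.8 is what the post means by “nearly three times,” and the 90% figure is why the survivors’ memories of the retreat would so thoroughly eclipse the summer’s losses.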