Information Design Watch
March 30, 2011, 10:38 am
By Henry Woodbury
This is not the most complicated chart in the world of information design. But I like it for a very specific reason. I like it because it has a zero. The gray bar is the $1.5 trillion federal budget deficit. The red, blue, and pink bars are proposed spending cuts. I’ve posted a thumbnail next to the full-size chart to allow comparison in one glance.
March 9, 2011, 11:44 am
By Lisa Agustin
The Guggenheim Museum recently launched an interactive timeline to accompany its new exhibition, The Great Upheaval: Modern Art from the Guggenheim Collection, 1910-1918. This colorful interactive map and timeline highlights the era’s artists, artist groups, exhibitions, performing arts, publications, artworks, historic events, and cultural movements. Select one of these categories, then scroll across to choose a particular year. Corresponding dots appear on the map above, and clicking on a dot displays a lightbox overlay with more information (see detail above). Overall, the timeline works from a linear, drill-down perspective: choose a cultural activity, a year, and a sample activity within that year. Navigating the “Selected Artworks” category gives users the most detail (as expected), with an image of the artwork and links to the artist’s biography and to an essay about the artwork, both housed in the pre-existing online collection on guggenheim.org, a nice way to leverage and highlight what’s already available. Discovering these individual nuggets is a little like going on a treasure hunt: the user seeks and finds individual gems scattered throughout.
At the same time, though, this interactive is weak at providing an integrated picture of the era overall. Part of what makes studying an artistic era so exciting is the chance to discover connections: between artistic disciplines, or between the arts and historic events. The timeline misses this opportunity by forcing users to choose only a single category (the checkbox-like bullet next to each category is misleading). Additionally, once you’ve selected a dot on the map, dots of other colors at the bottom of the lightbox (see above) are indicators of simultaneous activities, but these are only visual cues and not links. Investigating them further means selecting a different category for that year and clicking through individual dots to eventually make the connection yourself. Allowing for multiple category selection and including crosslinks to other categories at the lightbox level are straightforward ways to make the pieces of the timeline more tightly integrated, showing that the whole is greater than the sum of its parts.
The Great Upheaval is on display through June 1, 2011.
March 3, 2011, 3:38 pm
By Lisa Agustin
I love a good grid, with its precise measurements both horizontal and vertical. We’ve blogged about how grids and scales can serve as guideposts for discussing visual design, a subjective and therefore squishy topic. Now Smashing Magazine offers another take on this, suggesting that mapping clichés to the extremes of a scale can help guide discussions toward an original solution. The article goes on to explore four visual design problems faced by well-known designers, and the process each used to move away from tired, obvious approaches to fresh solutions. The article concludes with some tips for avoiding clichés, which include, ironically, embracing them:
Start by drawing every association you come up with for the subject matter. Draw it quickly, and don’t be critical. At this stage, it’s not about making pretty pictures, and it’s not about evaluating your ideas (in fact, the ability to turn the critical part of your brain on and off is one of the most helpful tricks you can develop). Don’t try to avoid clichés — let them happen. Trying not to think of clichés is like the old joke where someone says ‘Don’t think of a pink elephant.’ It’s best to get them down on paper and get them out of your system. Once you’ve jotted down every association you can think of, take a break, come back and jot down a few more. Then, take a longer break…
While this advice is targeted at designers, it’s also good advice for anyone looking to develop a good idea, since it’s often the bad ideas that yield the good ones.
March 3, 2011, 1:27 pm
By Lisa Agustin
So you’ve just relaunched your redesigned web site or web application. You’ve addressed known user experience problems, met business requirements, and made sure the architecture is one that will accommodate future features, both known and unknown. Now here’s the tricky question: How will you know you’ve improved your user experience?
The broader question of how to measure success is one that we raise with our own clients at the beginning of every project, as this helps us figure out the organization’s priorities and focus. Definitions of success range from trackable statistics (“more users will see the catalog”) to anecdotal assessment (“employees will complain less about using it”).
There is no one-size-fits-all approach to measuring success. Moreover, with the exception of online survey tools like Zoomerang or SurveyMonkey, which can be used to assess usability and satisfaction, most tools today are designed to measure success from a business or technical staff’s perspective, rather than the users’. Google’s researchers recognized this problem in assessing their own applications and developed the HEART metrics framework, a method of measuring user experience on a large scale.
The HEART framework is meant to complement what Google calls the “PULSE metrics” framework, where PULSE stands for: Page views, Uptime, Latency, Seven-day active users (i.e., the number of unique users who used the product at least once in the last week), and Earnings, clearly all stakeholder and/or IT concerns. While these statistics are somewhat related to the user’s experience (which pages get looked at, which items get purchased), they can be problematic in evaluating user interface changes:
[PULSE metrics] may have ambiguous interpretation–for example, a rise in page views for a particular feature may occur because the feature is genuinely popular, or because a confusing interface leads users to get lost in it, clicking around to figure out how to escape. A count of unique users over a given time period, such as seven-day active users, is commonly used as a metric of user experience. It measures overall volume of the user base, but gives no insight into the users’ level of commitment to a product, such as how frequently each of them visited during the seven days.
The HEART metrics framework offers a way to more precisely measure both user attitude and behavior, while providing actionable data for making changes to a product’s user interface. These include the following, which I’ve described very briefly here:
- Happiness. This metric is concerned with measuring the user’s attitude toward the product, including satisfaction, visual appeal, and the likelihood that the user will recommend the product to others. A detailed survey, administered first as a benchmark and again as changes are implemented, covers this.
- Engagement. This measures a user’s level of involvement, which will depend on the nature of the product. For example, involvement for a web site may be as simple as visiting it, while involvement for a photo-sharing web application might be the number of photos uploaded within a given period. From a metrics standpoint, involvement can be assessed by looking at frequency of visits or depth of interaction.
- Adoption and Retention. These metrics explore behavior of unique users more in detail, going a step beyond the seven-day active users metric. Adoption metrics track new users starting within a given period (e.g., number of new accounts opened this month), while retention looks at how many of the unique users from the initial period are using the product at a later period.
- Task Success. Successful completion of key tasks is a well-known behavioral metric that relates to efficiency (time to complete a task) and effectiveness (percent of tasks completed). This is commonly tracked on a small scale through one-on-one usability tests, but can be expanded to web applications by seeing how closely users follow an optimal path to completion (assuming one exists), or by using A/B split or multivariate testing.
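The Adoption and Retention metrics above reduce to simple set arithmetic on per-period user lists. This sketch uses hypothetical user sets of my own invention to show the calculation; it is not code from Google's paper:

```python
def adoption(period_users, previous_users):
    """New users: active this period but never seen in any earlier period."""
    return period_users - previous_users

def retention(cohort, later_period_users):
    """Fraction of an earlier period's users still active in a later period."""
    if not cohort:
        return 0.0
    return len(cohort & later_period_users) / len(cohort)

january = {"ann", "bob", "cy"}
february = {"bob", "cy", "dee", "eli"}

print(adoption(february, january))   # -> {'dee', 'eli'}: two new accounts
print(retention(january, february))  # -> 0.666...: 2 of January's 3 users returned
```

Note that a plain seven-day active count for February (four users) would hide both facts: that half of February's users are brand new, and that one January user churned.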
But these metrics are not helpful on their own. They must be developed in the context of the Goals of the product or feature, and the related Signals that will indicate when a goal has been met. The authors admit that this is perhaps the hardest part of defining success, since different stakeholders may disagree about project goals, requiring a consensus-building exercise.
From my perspective, there is also the additional challenge of clients having both the forethought and the resources to track these metrics in the first place. In many cases, measuring success requires a benchmark or baseline for comparison. Without one in place, the new design itself must serve as the benchmark for any future changes.
March 2, 2011, 5:03 pm
By Henry Woodbury
Self-described entrepreneur and gamer Brad Hargreaves has created a nicely multivariate chart on wealth creation. It is more philosophical than empirical, a way to frame a question rather than a survey. It is also self-explanatory, so I’ve only shown a portion of it below:
I found this at the Sippican Cottage blog, along with Sippican’s typically incisive summation:
1. Make money while you’re awake.
2. Make money while you’re asleep, too.
3. Make money even after you’re dead.
From the information design perspective I’m impressed by the labeling. The examples are well chosen and the repeated two-word “Own” phrases manage to indicate fairly clear distinctions despite their inherent subjectivity. That kind of parallelism is hard to carry off, especially seven times in a row. Why does Brett Favre fall under “Own Entities” instead of “Own Yourself”? I don’t know, but it works well enough to make the point.