September 9, 2010, 2:38 pm
How to Make a Lot of Data Look Like a Lot of Data
By Henry Woodbury
Stockmapper made Time Magazine’s list of the top 50 web sites for 2010. Stockmapper has a lot of data, but is it useful? I can’t tell. Which answers the question, at least for me.
The Stockmapper heat map, whether organized by ticker symbol, percent change, volume, or market cap, tells no high-level story. You might as well use a spreadsheet with sortable columns. You can sort and filter, but not compare. Click on a filter such as market sector or country and Stockmapper rewards you with some neatly rendered bar charts, but the heat map is a failure.
In contrast, the Map of the Market offers a comprehensible high-level view of market trends. You can compare the activity of each sector of the market and see which are gaining or losing value. Market capitalization is shown by area which provides another way to compare sectors and individual stocks.
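The layout described above — area proportional to market capitalization, stocks grouped into sectors for comparison — is a treemap. As a rough illustration of the idea (not the Map of the Market's actual algorithm, and with made-up sector weights), here is a minimal "slice-and-dice" treemap layout in Python:

```python
# Minimal slice-and-dice treemap layout: each item's rectangle gets an
# area proportional to its weight (e.g., market capitalization).
# Sector names and weights below are hypothetical, for illustration only.

def slice_and_dice(items, x, y, w, h, horizontal=True):
    """Return {name: (x, y, width, height)} with areas proportional to weight."""
    total = sum(weight for _, weight in items)
    rects = {}
    offset = 0.0
    for name, weight in items:
        frac = weight / total
        if horizontal:
            # split the enclosing rectangle along the x axis
            rects[name] = (x + offset, y, w * frac, h)
            offset += w * frac
        else:
            # split along the y axis
            rects[name] = (x, y + offset, w, h * frac)
            offset += h * frac
    return rects

# Lay out three hypothetical sectors in a 100 x 100 canvas.
sectors = [("Tech", 40.0), ("Energy", 25.0), ("Health", 35.0)]
layout = slice_and_dice(sectors, 0, 0, 100, 100)
```

A real treemap would recurse: lay out the sectors first, then call the same routine on each sector's rectangle (flipping the split axis) to place its individual stocks, and color each cell by percent change.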
Martin Wattenberg created the Map of the Market well over a decade ago. The best of 1998 is better than the best of 2010.
I totally agree. Stockmapper lacks a structure that allows users to wrap their head around all that data.
Posted by Zach on September 9, 2010 at 3:09 pm
Better, or different?
I agree with the comments regarding simplicity that the Map of the Market does provide a "better" view (size (is this market cap? value? net income?), grouping by industry); however, that's not to say that the alternative couldn't provide more functionality in this space (dynamic readjustment).
Posted by Joel on September 9, 2010 at 7:08 pm
The original concept of Wattenberg’s map was introduced by Ben Shneiderman at a workshop on visualizing multidimensional data at the IEEE conference on Scientific Visualization held in San Diego in 1991, as I recall…give or take a year. I chaired that workshop, and there were at least three other visualization modalities discussed at the time that have yet to be exploited for reading large, complex data sets. The best of 1988 beats the best of 2010 too. There are still-interesting stories buried under the debris of the 90s high-tech explosion.
HDTV was the headline at the 92 Boston conference, and ubiquitous computing, immersive 3D and diagnostic medical visualization held court at the 1990 San Francisco meet…with researchers from PARC, NYU, and the national supercomputing centers anticipating much of what was to be commonplace…20 years or more later.
What they *expected* to be commonplace, but what hasn’t been developed to any significant degree, are the affordances to parse large data sets as qualified surfaces and volumes which can be intuitively navigated despite their artificial nature. Ben Fry and Casey Reas excepted, this whole field has lain pretty fallow for a generation or more in computer time.
The breakthrough technology and research was in place at the time, particularly the psycho-physiology of human perception as a vastly more capable data-knowing tool than it had been given credit for. Keyword: pre-attentive processing. Challenge: integrate the pre-attentive perceptual work with the higher cognitive centers.
Stockmapper falls down on basic visualization requirements by using thin colored outlines in direct contact with each other to encode the status or value of the data bit, while leaving the bulk of perceptual space, the center of the cell, data neutral. Presenting relatively dominant areas of the data space in a manner that resists or defeats preattentive apperception is a typical newbie problem in visualizations in general. You gotta follow the money, to paraphrase Tufte in his various dicta concerning data ink.
Until the mid 80s, vision and its coupling to cognition tended to be seen as a passive pipeline, with successive mappings resulting in huge amounts of data being reduced to a few simple percepts or ideations. Because of some ground-breaking research (cf. Treisman, Marchak, Rogowitz, etc.), it was realized that the visual processor in the human brain is a massively parallel computation center with its own unique “intelligence” and capacity for orientation in the environment — real or virtual.
Tentative forays into this area were made, with some researchers even patenting rudimentary data pattern recognition tools…Temple University was a leader in this respect.
The twig snapped in the forest. But nothing came down the trail. I watched the careers of several of the leading researchers in this area trickle into academic tenure and marginal corporate research. My work with Lawrence Livermore in the early nineties has never been picked up and exploited. I am a web editor headed for retirement, watching for signs that data visualization will really come into its own any time before I visualize the last bright light of all.
Where are the real patrons of this art? Why haven’t Steve Jobs or the Google braintrust or the social networking wunderkind tried to leverage what could be the single most powerful new use of computers since the spreadsheet itself?
It’s quite a story. But no one is really looking.
Posted by zeitguy on September 14, 2010 at 2:45 pm