Information Design Watch

March 14, 2012, 1:29 pm

Eulogy for a Dinosaur

By Henry Woodbury

The venerable Encyclopedia Britannica has ceased print publication:

In an acknowledgment of the realities of the digital age — and of competition from the Web site Wikipedia — Encyclopaedia Britannica will focus primarily on its online encyclopedias and educational curriculum for schools. The last print version is the 32-volume 2010 edition, which weighs 129 pounds and includes new entries on global warming and the Human Genome Project.

Given the well-known collaborative editing model of Wikipedia, Gary Marchionini, the dean of the School of Information and Library Science at the University of North Carolina at Chapel Hill, makes an interesting claim about not only the competitive difference, but about the nature of knowledge itself:

The thing that you get from an encyclopedia is one of the best scholars in the world writing a description of that phenomenon or that object, but you’re still getting just one point of view. Anything worth discussing in life is worth getting more than one point of view.


Comments (4) | Filed under: Business, Scholarly Publishing, Technology

February 14, 2012, 9:47 am

Big Data in the House

By Henry Woodbury

The New York Times Sunday Review highlights Big Data. Big Data is that rapidly backfilling reservoir of web analytics and real-world sensor data. It is also a million rivulets of meandering incident, logged at its portages and tracked by its jetsam. The projected revolution starts with the ability to find meaning from it all:

Most of the Big Data surge is data in the wild — unruly stuff like words, images and video on the Web and those streams of sensor data. It is called unstructured data and is not typically grist for traditional databases.

But the computer tools for gleaning knowledge and insights from the Internet era’s vast trove of unstructured data are fast gaining ground. At the forefront are the rapidly advancing techniques of artificial intelligence like natural-language processing, pattern recognition and machine learning.

Unfortunately, the examples in the article are not inspiring. There is a difference between real scientific discovery and mere arbitrage, and apart from engineering-driven efforts such as Google’s robot-driven cars, most of the article’s examples amount to arbitrage opportunities.

Case in point is the invocation of Moneyball. I am a big fan of baseball sabermetrics, and, among those paying attention, the work of Bill James and other analysts has revolutionized the way people evaluate baseball players. But this is work on the margins. It doesn’t trump the expression of true talent that anyone can spot, and it doesn’t void the enormous impact of chance. Hubris may be more dangerous than confusion:

Big Data has its perils, to be sure. With huge data sets and fine-grained measurement, statisticians and computer scientists note, there is increased risk of “false discoveries.” The trouble with seeking a meaningful needle in massive haystacks of data, says Trevor Hastie, a statistics professor at Stanford, is that “many bits of straw look like needles.”
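Hastie’s warning is easy to demonstrate: screen enough purely random features against a purely random outcome and some will clear any plausible significance threshold by chance alone. A toy simulation (the sizes and threshold here are arbitrary, chosen only for illustration):

```python
import random

random.seed(42)

N_SUBJECTS = 500    # rows in an imaginary data set
N_FEATURES = 2000   # columns of pure noise to screen
THRESHOLD = 0.09    # |correlation| above this "looks like a needle"

# A random binary outcome with no real structure at all.
outcome = [random.choice((-1, 1)) for _ in range(N_SUBJECTS)]

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Screen thousands of noise features; none has any true relationship
# to the outcome, yet a predictable fraction crosses the threshold.
false_needles = sum(
    1
    for _ in range(N_FEATURES)
    if abs(correlation([random.gauss(0, 1) for _ in range(N_SUBJECTS)],
                       outcome)) > THRESHOLD
)

print(f"{false_needles} of {N_FEATURES} noise features look like needles")
```

With these sizes, dozens of “needles” typically appear even though every feature is noise; the more straws you examine, the more of them look like needles.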


Comments (2) | Filed under: Sports, Technology

January 18, 2012, 11:53 am

SOPA Day

By Henry Woodbury

Wikipedia (English) is blacked out.

Wikipedia (English) Blacked Out

Wikipedia is just one of many. Other sites, including Google, are acknowledging the protest.

Kirby Ferguson explains.

Update: This is off-topic for this blog, but it is important to note that fair use is not just about the internet. On Wednesday the Supreme Court declined to overturn a 1994 Congressional act that removes thousands of musical works from the public domain.


Comments (0) | Filed under: Current Events, Design, Infographics, Social Media, Technology

January 13, 2012, 12:59 pm

The Cost of Research

By Henry Woodbury

As the rumble between intellectual property and free speech advances into the ring drawn by SOPA (the Stop Online Piracy Act), Michael B. Eisen draws attention to a fight on the undercard. Eisen, professor of molecular and cell biology, critiques The Research Works Act which, in his words:

…would forbid the N.I.H. [National Institutes of Health] to require, as it now does, that its grantees provide copies of the papers they publish in peer-reviewed journals to the library. If the bill passes, to read the results of federally funded research, most Americans would have to buy access to individual articles at a cost of $15 or $30 apiece. In other words, taxpayers who already paid for the research would have to pay again to read the results.

Supporters of the bill include many traditional publishers of medical research (ironically, one of its sponsors, Darrell Issa, Republican of California, is one of SOPA’s most prominent opponents).

Dynamic Diagrams has a long history with scientific publishers, going back over 15 years. We worked with major journals like Nature and JAMA to bring them fully online; we’ve also worked with research aggregators such as HighWire and Publishing Technology. We’re well aware of the technology and information management demands of online presentation alone, let alone the physical and specialist costs of creating a print publication. Now consider the editorial investment required to guide content to a publishable state (even if, as Eisen points out, peer review is provided voluntarily, often by researchers at publicly funded institutions). At a tactical level, for example, most journals require an access-controlled transactional web space where authors and editors exchange drafts.

This is not to take sides in the argument, but to draw attention to the real costs associated with managing and presenting electronic information. These should not be disregarded. At Scientific American, the comments section to Michelle Clement’s call for opposing the bill offers some back-and-forth (hopefully Clement won’t follow through on her threat to delete those comments she doesn’t like), including a link to the Association of American Publisher’s competing point of view.


Comments (0) | Filed under: Current Events, Scholarly Publishing, Technology

December 28, 2011, 12:06 pm

What’s This Mobile Thing For, Again?

By Lisa Agustin

With more and more folks jumping on the smartphone bandwagon, and clients asking for mobile as part of their redesign projects, it’s not unusual to see articles on how to make your site mobile, or on the latest design trends for mobile apps. How to develop for mobile is among the foremost concerns of many web designers. But what about the why? What are the specific advantages of mobile other than its ability to keep you distracted (productive?) while standing in line? Back in 2008, author and former Nokia executive Tomi Ahonen expounded on the unique opportunities of mobile as the “7th mass media channel” (print is the first, and the Internet is the sixth). Conveniently, there are also seven unique capabilities of mobile media, which he summed up this way:

1 – The mobile phone is the first personal mass media
2 – The mobile is permanently carried media
3 – The mobile is the only always-on mass media
4 – Mobile is the only mass media with a built-in payment mechanism
5 – Mobile is only media available at the point of creative inspiration
6 – Mobile is only media with accurate audience measurement
7 – Mobile captures the social context of media consumption

These are not necessarily unique observations. But Ahonen’s perspective is one that puts mobile in the context of the media that preceded it, showing just how far technology has come. As an example, consider his first point, that mobile is the “first personal mass media”:

Never before was any mass media assumed to be private. Books and magazines are shared. Movies watched together. Radio we can have the whole family in the car listening at the same time. Records are played to a roomfull of wedding guests by the DJ. TV is watched together by the family. The internet is semi-personal, but often the PC is shared by the family or business employees. Our secretary or IT tech support (or Human Resources staff) may read through our emails. At home our parents often “snoop” what the kids do on the family PC etc. The internet is not a personal media, even if it often seems like it. But mobile. That is mine, and only mine.

Although the stats and facts are a little dated (the iPad had yet to make its debut), his post is a good read, and a reminder of why mobile represents an exciting opportunity in terms of creating innovative user experiences. It’s not just about Angry Birds.


Comments (2) | Filed under: Business, Technology, User Experience

December 20, 2011, 10:22 am

HTML Sunrise

By Henry Woodbury

Paul Irish and Divya Manian have teamed up to create a superb visual explanation of browser support for HTML5 and CSS3. Rolling over each spoke of the sunrise (to mix a metaphor) reveals the name of the component; clicking takes you to the W3C page that defines it.

While 2011 support for current common browsers is the most useful view, Irish and Manian have provided data for 2008, 2009, and 2010 as well. In the slideshow below I show a screenshot of each of the four views. It makes a nice animation.

HTML Readiness 2008
HTML Readiness 2009
HTML Readiness 2010
HTML Readiness 2011

The visual is created with HTML5 and CSS3, so it is best viewed with a current browser. Don’t even bother with MSIE 7.

(via the LinkedIn Web Standards Group)


Comments (0) | Filed under: Diagrams, Information Design, Technology, Visual Explanation, Web Interface Design

November 29, 2011, 10:09 am

I’d Rather Have a Rocket Car

By Henry Woodbury

In the old days the future was about rocket cars. Now it’s about touch screens.

This Microsoft production is one of the vision videos that have been making the rounds:

It’s cool, but also cold. And it’s one of the best of the bunch (Corning’s A Day Made of Glass is also very good). Others, such as the awkward imitations produced by Research In Motion (BlackBerry), invite only ridicule.

Interface designer Bret Victor has produced an intelligent critique of the Microsoft video (and, by extension, the whole genre). He starts by reminding us of the incredible sensory and manipulative powers of the human hand:

There’s a reason that our fingertips have some of the densest areas of nerve endings on the body. This is how we experience the world close-up. This is how our tools talk to us. The sense of touch is essential to everything that humans have called “work” for millions of years.

But what is the sensory experience of Microsoft’s future (and Corning’s, and Apple’s, and RIM’s)? It’s the feel of glass. It’s “glassy.”

Now read this: The 5 Best Toys of All Time. I think you’ll get my point.


Comments (0) | Filed under: Marketing, Technology, Usability, User Experience

November 25, 2011, 11:41 pm

Orientation Ratio

By Henry Woodbury

Folks well into Apple mobile development may have already run across Adam Lisagor’s take on the iPad’s aspect ratio.

If not, here it is.

Aspect Ratios of iPad and iPhone

To elaborate a little, the visualization points to more than just dimensions:

But it was clear in the device’s orientation when Steve first pulled it out, and in the orientation of the Apple logo on the back, that the iPad (…) is meant primarily to be used in portrait mode, that its function as a video device is really secondary to its function as a reading device. And 9:16 is now, and will probably always be too damn skinny for a screen.


Comments (1) | Filed under: Illustration, Technology, Visual Explanation

November 2, 2011, 1:18 pm

The New and Improved Google Reader! Slightly Dingy and Now with Dark Patterns!

By Lisa Agustin

I use Facebook, but was not one of those people who grumbled about the latest changes. I accept that technology is about looking forward, that convergence makes sense in many cases, and that improving the user experience means continually tweaking an information architecture and visual design to reach whatever your bigger goal may be (e.g., conversions).

But then Google released its redesign of Reader, and we went from this:

to this:


[Image credits: SheGeeks.net]

Google calls the design “cleaner, faster, and nicer to look at.” But read their announcement more closely and it’s really about creating a tighter integration with Google+ by turning off Reader’s friending, following, shared items, and comments in favor of similar Google+ functionality. Which is okay, since I do see the point of consolidating Reader’s social aspect with Google+. But the redesign has actually made sharing harder, not easier. Former Google Reader product manager Brian Shih puts it this way:

Keep in mind that on top of requiring 3-4 times as many clicks, you also now must +1 a post publicly to share it, even if it’s shared to a private circle. That bears repeating. The next time you want to share some sexy halloween costumes with your private set of friends, you first must publicly +1 the post, which means it shows up on your profile, plus wherever the hell G+ decides to use +1 data. So much for building a network around privacy controls.

But then later, an update:

It turns out there is a way to share without +1’ing first. If you click on the top right “Share…” field on the OneGoogle bar [the bar at the very top of the pane], you can bypass the +1 button. It’s just completely undiscoverable.

Sounds like a dark pattern to me.

But let’s put Google+ aside, since sharing wasn’t why I used Reader in the first place. It was about the content. How quickly can I see what’s new and get to an individual story? From an information design perspective, I’d think making the design cleaner would mean maximizing space for original content. Rather, it seems they did the opposite, with a thicker, more spacious header bar that pushes content further down the page.

From a visual design standpoint, greeted by a new absence of color, I wondered if they were trying to make Reader look like a traditional newspaper, removing colored elements as if they were distractions. While there is such a thing as too much color, the new Reader goes overboard in the other direction. With black, white, and grey as the dominant scheme, it’s hard to tell what the priority is in the UI. Google even eliminated the bright blue link color that facilitates scanning. Now nothing stands out except the bright red Subscribe button and the blue Search button. Maybe it’s time to revisit the pluses of eye candy.

Kvetching aside, I suppose I will get used to the new direction (assuming I don’t switch feed readers first). I also guess I had better brace myself for the upcoming Gmail redesign.


Comments (2) | Filed under: Information Architecture, Information Design, Social Media, Technology, User Experience, Web Interface Design

October 14, 2011, 3:52 pm

Goodbye to the King of the Invisible

By Henry Woodbury

Dennis Ritchie has died. Ritchie was the Bell Labs researcher who invented the C programming language and teamed with colleague Ken Thompson to build Unix. Fellow Bell Labs alumnus Rob Pike described his contribution this way:

“Pretty much everything on the web uses those two things: C and UNIX. The browsers are written in C. The UNIX kernel — that pretty much the entire Internet runs on — is written in C. Web servers are written in C, and if they’re not, they’re written in Java or C++, which are C derivatives, or Python or Ruby, which are implemented in C. And all of the network hardware running these programs I can almost guarantee were written in C.”

Contrasting Ritchie’s passing with that of the iconic Steve Jobs, MIT’s Martin Rinard says:

“Jobs was the king of the visible, and Ritchie is the king of what is largely invisible…. Ritchie built things that technologists were able to use to build core infrastructure that people don’t necessarily see much anymore, but they use everyday.”

Things like the underlying OS of the MacBook Pro I’m using to write this.


Comments (0) | Filed under: Current Events, Technology

October 6, 2011, 10:48 am

HeadsUP! Competition Invites Designers to Visualize Global Issues on a Large Scale

By Lisa Agustin

How do we make complex and urgent issues like global warming both understandable and memorable? HeadsUP! is an international competition that challenges designers to visualize critical global issues and create a shared sign for the public space, in this case Times Square², the Thomson Reuters/NASDAQ digital signboards in Times Square. The goal of HeadsUP!:

Working with global data on issues such as global groundwater levels, climate change and ocean acidification, designers will create a series of visual displays to translate abstract metrics into recognizable and actionable news. It is an opportunity to transform planetary data into a common sign combining the metaphorical power of the Doomsday Clock with the authority of data visualization and the immediacy of activist electronic billboards: a HeadsUP! Display for the planet.

The first challenge is a visualization of global groundwater trends, which indicate that groundwater reserves are currently threatened by overuse. The winning entry will premiere on World Water Day, March 22, 2012, and run for one month. I do love this idea, although to me its success will depend not only on making the data easier to understand, but on giving passersby concrete steps they can take to make a difference.


Comments (0) | Filed under: Technology, Visual Explanation

September 23, 2011, 3:00 pm

Follow the Money

By Henry Woodbury

Even on mobile devices, a web app can beat a platform-specific app. That’s the case for the Financial Times (FT). FT spokesman Rob Grimshaw reports that their HTML5 web app draws more readers and more page views than their now-discontinued Apple store app.

This is a nice success story for web developers, but there’s more going on than traffic:

…Apple takes a 30 percent cut of subscription revenue from users who sign up for apps in the store.

More problematic is that Apple wants to control subscriber data — valuable demographic information used by magazines and newspapers to sell advertising — from people who sign up for the app in the store.

For subscription-based publishers such as FT this is not a supportable position. One has to wonder if other successful subscription-based sites are equally dissatisfied.

Of course, what makes the FT story unique is that its web app replaced its Apple store app. For many organizations the platform app will never get built, not when a comprehensive web development effort can leverage some common UI and code to target both desktop and mobile users.

“App stores are actually quite strange environments,” Grimshaw said. “They are cut off from most of the Web ecosystem.”

Update: In regard to my point before the last quote, Jason Grigsby’s Cloud Four critique of responsive web design is required reading. The mobile and desktop environments each deserve their own optimization.

(via a Tizra Facebook post)


Comments (0) | Filed under: Business, Implementation, Technology, Web Interface Design

September 20, 2011, 10:14 am

Game Theory

By Henry Woodbury

Why would scientific experts call on gamers to solve problems in protein folding? Here’s why:

“People have spatial reasoning skills, something computers are not yet good at,” [Dr. Seth Cooper, of the UW Department of Computing Science and Engineering] said. “Games provide a framework for bringing together the strengths of computers and humans.”

The game goes by the name Foldit and is supported by the University of Washington Center for Game Science.

When you first start playing, the game takes you through a series of practice examples to familiarize you with the manipulations you can apply to a protein chain.

Foldit Intro Puzzle 3 of 32

If you get hooked, you continue to real problems. Already, game-generated models have helped researchers resolve the structure of previously undefined proteins. Researchers are also looking at some of the unfolding sequences used by Foldit players to develop better algorithms for computer analysis.


Comments (0) | Filed under: 3D Modeling, Creativity, Scholarly Publishing, Social Media, Technology

August 16, 2011, 1:32 pm

The Organizational Context for Web Development

By Henry Woodbury

Why is it, asks Jonathan Kahn, that the user experiences that web teams envision and that organizations truly want to adopt often fail to meet expectations?

Here’s the problem: organizations are the context for our work, and when it comes to the web, organizations are broken…

Although we’re comfortable with the idea that the web is critical to organizations, we often miss the corollary: the web has changed the way organizations operate, and in many cases it’s changed their business models, too. When executives can’t see that, it causes a crisis. Welcome to your daily web-making reality.

Now some of Kahn’s exhortations cause me to roll my eyes. I’ve worked in a number of information-related fields in my career and I’ve heard variations on “we are the change agents” and “executives don’t get it” all the way through. But Kahn is right to demand an organization-wide framework for web development and he is right to point out the need for governance and measurement as well as strategy and execution.

And when you see an organization really commit to a comprehensive web strategy with creative follow-through, the results are obvious.


Comments (0) | Filed under: Business, Technology, User Experience, Web Interface Design

August 7, 2011, 4:26 pm

Zeros to Zeros. Ones to Ones.

By Henry Woodbury

Old data never dies. It just degrades. That degradation, as University of Maryland professor Kari Kraus explains, takes a number of forms. The most obvious is the actual decay, oxidation, and corrosion of various data media. But equally problematic is the inevitable obsolescence of the hardware that reads the media and the software that opens the files.

Migrating data is a problematic exercise in lossy read and write while hardware/software emulators themselves are temporal: “emulators must eventually be moved to new computer platforms — emulators to run emulators, ad infinitum.”

The way forward, as Kraus sees it, is continued engagement with the original code:

Perhaps the most impressive effort to curate digital information is taking place in the realm of video games. In the face of negligence from the game industry, fans of “Super Mario Bros.” and “Pac-Man” have been creating homegrown solutions to collecting, documenting, reading and rendering games, creating an evolving archive of game history. They coordinate efforts and share the workload — sometimes in formal groups, sometimes as loose collectives. Nor does the data just sit around. These are gamers, after all, so they are constantly engaged with the files. In the process, they update them, create duplicates and fix bugs.

Despite often operating in legal gray areas, such curatorial activism can be a model for other digital domains. A similar pattern is emerging in data-intensive fields like genetics, where published data sets are often “cleaned” by third-party curators to purge them of inaccuracies.

What survives is what is interesting and accessible. We are constructing an archive of enthusiasm. What may be lost, in the end, is the data that is currently the most classified and proprietary.


Comments (0) | Filed under: Technology

July 21, 2011, 12:23 pm

Crowd Control and the Web

By Henry Woodbury

We all know the problem. Any open-access forum, social network, or comments venue on the Internet can easily be overrun by savage attacks and inane vulgarities. Blogger and entrepreneur Anil Dash responds with a bold claim — “This is a solved problem“:

As it turns out, we have a way to prevent gangs of humans from acting like savage packs of animals. In fact, we’ve developed entire disciplines based around this goal over thousands of years. We just ignore most of the lessons that have been learned when we create our communities online. But, by simply learning from disciplines like urban planning, zoning regulations, crowd control, effective and humane policing, and the simple practices it takes to stage an effective public event, we can come up with a set of principles to prevent the overwhelming majority of the worst behaviors on the Internet.

Dash follows with a list of concrete actions responsible content owners need to take to manage the crowd. It all starts with having real people moderate the content. Dash links to Robert Niles at Online Journalism Review, who states flatly: “If you can’t manage comments well, don’t offer comments at all“.


Comments (0) | Filed under: Social Media, Technology, User Experience

July 13, 2011, 4:38 pm

The Death of Blogging in One Paragraph

By Henry Woodbury

Apparently Google+ is going to end blogging once and for all. First, blogging was co-opted by big media. Then it was trumped by short-form social media. In this framework, the death of blogging can be summed up in one paragraph:

Remember blog rolls? Looking back, we can say they were the original Facebook Friend lists, or Twitter Followers, or Google+ Circles.

In other words, the network is more important than the content.

Facebook and Twitter and Google, each in their own way, make networking very easy. But when everyone is networked, what will everyone do next? Don’t say content. Content is hard.


Comments (3) | Filed under: Social Media, Technology

June 9, 2011, 2:23 pm

Off the (Google) Grid

By Henry Woodbury

In IEEE Spectrum’s Special Report on the Social Web, Joshua J. Romero attempts to decouple himself — from Google. In his article “How I Learned to Live Google-free” he writes about retrieving his cloud data and picking alternative services, issues that touch on data handling, user experience, technology, and unintended consequences. His comment about single sign-on, for example, really resonated with me:

It’s easy to get seduced by the lure of a single sign-on. But managing multiple user accounts actually isn’t as much of an annoyance as we think it is. For me, it quickly became clear that my single Google account had mixed and muddled my personal and professional services and data.

Link through to take the survey about which Google services you use (with some notable omissions), and learn about the various alternatives Romero discovered.


Comments (0) | Filed under: Social Media, Technology, User Experience

May 31, 2011, 9:37 am

Twitter vs. the Academy

By Henry Woodbury

For some reason I was looking for examples of 19th-century correspondence abbreviations. A search for “yr obt svt” called up an entertaining essay by Len Cassamas titled, fittingly, “Yr Obt Svt”. The essay is a year old and hinges on a piece of old news (are folks still arguing about an Academy of English?), but it speaks to the molding of language by media that is always current. Cassamas writes:

Much of the [Academy of English] fiasco seems to have been inspired by the various abbreviations that people use while texting or tweeting. “You” becomes “u.” “To” and “too” become “2.” (We will assume that “two” becoming “2” is acceptable to all.) Now, I am speaking as someone who quite purposely avoids such abbreviations. I also avoid using emoticons in the hope that the person on the other end of the communication can understand when I intend to be humorous or something of a scamp simply from the way that I string words together. Perhaps I am deluding myself or overestimating my abilities, but I am willing to live with the consequences involved.

But, Cassamas adds, “the making of abbreviations is nothing new”. Here he brings up yr obt svt. And he links to a video clip of Stephen Fry talking about language with Craig Ferguson. Link through to view it.

By the way, Cassamas tweets at @rudyvalue.


Comments (0) | Filed under: Language, Technology

May 13, 2011, 3:50 pm

Let’s Go British

By Henry Woodbury

Grammarians in the United States place commas and periods inside quotation marks. The British style is to place them outside. The British have it right. According to Ben Yagoda at Slate, the practice is spreading:

…in copy-editor-free zones—the Web and emails, student papers, business memos—with increasing frequency, commas and periods find themselves on the outside of quotation marks, looking in. A punctuation paradigm is shifting.

Yagoda isn’t ready to credit the internet for this shift. He writes, “I spotlight the Web not because it brings out any special proclivities but because it displays in a clear light the way we write now.” But he does point out several ways in which the digital age affects usage. One that is embedded in my psyche is the logic of computer programming and markup language authoring. You don’t let stray characters inside your quotation marks. Period.
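The programming logic is concrete: in code, everything between the quotation marks is data, so a period that drifts inside them changes the value. A trivial illustration (the filenames are invented for the example):

```python
# American prose style would tuck the period inside the closing quote;
# in code, that period becomes part of the string itself.
filename = "report.csv"
mangled = "report.csv."   # stray period is now part of the name

assert filename != mangled
# Looking up the mangled name would fail: "report.csv." names a
# different file than "report.csv".
print(repr(filename), repr(mangled))
```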

Another is the logic of international readership:

By far the biggest fount of logical punctuation today is Wikipedia, which was started by two Americans but whose English-language edition is by and for all English-speaking countries.


Comments (2) | Filed under: Language, Technology

March 3, 2011, 1:27 pm

Measuring User Experience on a Large Scale

By Lisa Agustin

So you’ve just relaunched your redesigned web site or web application.  You’ve addressed known user experience problems, met business requirements, and made sure the architecture is one that will accommodate future features, both known and unknown.  Now here’s the tricky question: How will you know you’ve improved your user experience?

The broader question of how to measure success is one that we raise with our own clients at the beginning of every project, as this helps us figure out the organization’s priorities and focus.  Definitions of success range from trackable statistics (“more users will see the catalog”) to anecdotal assessment (“employees will complain less about using it”).

There is no one-size-fits-all approach to measuring success.  Moreover, with the exception of online survey tools like Zoomerang or SurveyMonkey, which can be used to assess usability and satisfaction, most tools today are designed to measure success from a business or technical staff’s perspective rather than the users’.  Google’s researchers recognized this problem in assessing their own applications and developed the HEART metrics framework, a method of measuring user experience on a large scale.

The HEART framework is meant to complement what Google calls the “PULSE metrics” framework, where PULSE stands for Page views, Uptime, Latency, Seven-day active users (i.e., the number of unique users who used the product at least once in the last week), and Earnings: clearly all stakeholder and/or IT concerns.  While these statistics are somewhat related to the user’s experience (which pages get looked at, which items get purchased), they can be problematic in evaluating user interface changes:

[PULSE metrics] may have ambiguous interpretation–for example, a rise in page views for a particular feature may occur because the feature is genuinely popular, or because a confusing interface leads users to get lost in it, clicking around to figure out how to escape.  A count of unique users over a given time period, such as seven-day active users, is commonly used as a metric of user experience.  It measures overall volume of the user base, but gives no insight into the users’  level of commitment to a product, such as how frequently each of them visited during the seven days.

The HEART metrics framework offers a way to more precisely measure both user attitude and behavior, while providing actionable data for making changes to a product’s user interface.  It comprises the following metrics, which I’ve described very briefly here:

  1. Happiness. This metric is concerned with measuring the user’s attitude toward the product, including satisfaction, visual appeal and the likelihood that the user will recommend the product to others.  A detailed survey, administered first as a benchmark and then again as changes are implemented, will cover this.
  2. Engagement. This measures a user’s level of involvement, which will depend on the nature of the product.  For example, involvement for a web site may be as simple as visiting it, while involvement for a photo-sharing web application might be the number of photos uploaded within a given period. From a metrics standpoint, involvement can be assessed by looking at frequency of visits or depth of interaction.
  3. Adoption and Retention. These metrics explore the behavior of unique users in more detail, going a step beyond the seven-day active users metric.  Adoption metrics track new users starting within a given period (e.g., number of new accounts opened this month), while retention looks at how many of the unique users from the initial period are still using the product at a later period.
  4. Task Success.  Successful completion of key tasks is a well-known behavioral metric that relates to efficiency (time to complete a task) and effectiveness (percent of tasks completed).   This is commonly tracked on a small scale through one-on-one usability tests, but can be expanded to web applications by seeing how closely users follow an optimal path to completion (assuming one exists), or by using A/B split or multivariate testing.
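To make the adoption and retention distinction concrete, here is a minimal sketch of how both could be computed from a simple visit log. The log, user names, and date windows are invented for illustration; this is not Google’s implementation:

```python
from datetime import date

# Hypothetical event log of (user_id, date of visit).
log = [
    ("ann", date(2011, 3, 1)), ("ann", date(2011, 3, 2)),
    ("bob", date(2011, 3, 1)),
    ("cal", date(2011, 3, 8)), ("ann", date(2011, 3, 9)),
]

def active_users(log, start, end):
    """Unique users seen between start and end inclusive (the PULSE-style count)."""
    return {user for user, day in log if start <= day <= end}

week1 = active_users(log, date(2011, 3, 1), date(2011, 3, 7))
week2 = active_users(log, date(2011, 3, 8), date(2011, 3, 14))

adoption = week2 - week1   # users new in the later period
retention = week1 & week2  # earlier users still active later

print(sorted(adoption), sorted(retention))  # ['cal'] ['ann']
```

Note how the bare active-user counts (2 in each week) hide exactly the churn that adoption and retention expose.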

But these metrics are not helpful on their own.  They must be developed in the context of the Goals of the product or feature, and related Signals that will indicate when the goal has been met.  The authors admit that this is perhaps the hardest part of defining success, since different stakeholders may disagree about project goals, requiring a consensus-building exercise.

From my perspective, there is the additional challenge of clients having both the forethought and the resources to track these metrics in the first place.  In many cases, measuring success requires a benchmark or baseline for comparison.  Without this in place, the new design itself must serve as a benchmark for any future changes.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Business, Information Architecture, Technology, Usability, User Experience

February 10, 2011, 9:55 am

Cameron & Mittleman LLP Launches New Web Site

By Lisa Agustin

Dynamic Diagrams is pleased to announce that the web site for the law firm of Cameron & Mittleman LLP is now live.  The two main goals for this project were a refresh to the site’s design, and an easy way to maintain the web site in-house.   We provided the information architecture, visual design, and web development services, which included a move to the WordPress platform.  Content for launch includes the history of the firm, staff profiles, and practice area information.  The extensible solution will enable the organization to add features planned for the future, including a blog.  You can view the web site at http://www.cm-law.com/

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Dynamic Diagrams News, Information Architecture, Technology, User Experience, Web Interface Design

February 2, 2011, 1:25 pm

Demotic Internet

By Henry Woodbury

I’ve been reading Jacques Barzun’s magisterial history of western culture, From Dawn to Decadence. His final chapter on the late 20th century is titled “Demotic Life and Times,” “demotic” being a word that means “of the people” even if it happens to sound like “demonic.” Of the internet, Barzun writes:

That a user had “the whole world of knowledge at his disposal” was one of those absurdities like the belief that ultimately computers would think–it will be time to say so when a computer makes an ironic answer. “The whole world of knowledge” could be at one’s disposal only if one already knew a great deal and wanted further information to turn into knowledge after gauging its value.

Information isn’t knowledge. This fact points to a certain friction in the terms we use in our practice. Most often, an information architect really is concerned with information. The goal is to help individuals locate information in a context that helps them gauge its value. An information designer, however, is more focused on knowledge. The designer seeks to communicate ideas within a dataset. I wouldn’t advocate a change in terms. Knowledge designer sounds hopelessly pretentious. But the distinction between the two practices is important.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Books and Articles, Information Architecture, Information Design, Language, Technology

January 14, 2011, 2:17 pm

The 50 Pixel Hangover (Remodeling Dynamic Diagrams)

By Henry Woodbury

One significant target for our Remodeling Dynamic Diagrams project is the redesign of this blog. The interface designs are close to final now and have us thinking about how we will import current content. Unlike our primary web site, we will not recreate content or images for Information Design Watch. Instead we will create a WordPress theme and apply it to the existing posts.

The issue is this. Our new blog design has a 640 pixel width content column. The current design has a 690 pixel width content column. Any image or object in our archives sized to the maximum setting of 690 pixels wide will not fit the new format.

We are approaching this issue in two different ways.

First, about a month ago, we set 640 pixels as the maximum image size in the current theme. This means that recent images are already optimized to work within the new design.

Second, the new design features a wide content margin. Using a negative margin CSS technique, images up to 690 pixels wide can extend into this margin without obscuring sidebar links or breaking the column.
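A minimal sketch of that technique, with illustrative selectors and widths rather than our production stylesheet:

```css
/* Selectors are hypothetical; only the 640/690 widths come from the post. */
.entry {
  width: 640px;          /* the new, narrower content column */
}
.entry img {
  max-width: 690px;      /* legacy images may exceed the column by up to 50px */
  margin-right: -50px;   /* let the overflow extend into the wide content margin */
}
```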

There is a third solution. We can manually edit each post with a 690 pixel width image and replace it. That one awaits a design intern.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Dynamic Diagrams News, Implementation, Technology, Web Interface Design

January 6, 2011, 10:43 am

Social Media for Designers

By Henry Woodbury

Combine social media with design and you might end up with a site like Dribbble (that’s with three b’s). Just make sure you also come up with an elegant user interface design and use an oddball basketball metaphor for the site vocabulary.

Excerpt from Dribbble home page, 6-Jan-2011

Like many successful social media sites, the underlying concept is simple. Where Twitter limits character count, Dribbble limits image size — to 300 x 400 pixels, max. Common social media elements like tags, comments, and fans enrich the experience. Fans and views drive a popularity index and an inexplicable “playoffs” page.

One of Dribbble’s innovations is the “rebound”, a graphical reply to another posted design. This is technically similar to sharing in Facebook or trackbacks in blogging, but Dribbble does a markedly superior job in presenting the cross-communication. Which is good, because cross-communication inspires better design.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Charts and Graphs, Design, Diagrams, Information Design, Social Media, Technology, Web Interface Design

December 9, 2010, 2:10 pm

The Phone Call as Community

By Henry Woodbury

If we define a community by evidence of social interaction, how well do political and historical boundaries hold up? That question is posed, and answered (in part), by a study of landline phone calls in Great Britain led by Professor Carlo Ratti of MIT’s SENSEable City Lab. Analysis of over 12 billion calls identified point-to-point geographical connections (defined at the sub-regional level to protect individual identity) whose relative strength was derived from the frequency and length of calls.  The result is a map that mostly aligns to familiar regions, but with some unexpected variations.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Information Design, Maps, Technology, Visual Explanation

December 7, 2010, 3:26 pm

What Color are Your Tentacles?

By Henry Woodbury

Build a Squid Interface

Thanks to the Museum of New Zealand, Te Papa Tongarewa, you can build your own squid.

The Build a Squid interactive is akin to the avatar-builders associated with online games and social media sites. It is a great example of the advantage of fewer choices. There are six components, each with three options, and, for all but eyes, the same palette of 14 colors, blends and patterns. All the naming and design options are accessible all the time. You can cycle through options using “Next” and “Previous” but there’s no need to be sequential.

Also refreshing, for a man who has spent a number of hours logging his children into Disney web sites, is the absence of terms, permissions, and validations. Which isn’t surprising since once you create your squid and drag it back and forth across your screen, you’re pretty much done. All you can do is release your virtual creature into the virtual deep.

How interesting is that?

Interesting enough I guess. After you release your squid, it is easy to find it again using its name or your email. You can check its age, weight, and mileage and drag it around the screen again.

Since the real point of Build a Squid is to drive traffic to Te Papa’s colossal squid exhibition, it would seem to be doing its job. The application is two years old and hosts plenty of squid.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Social Media, Technology, Visual Explanation

December 3, 2010, 10:18 am

Meta Works (Remodeling Dynamic Diagrams)

By Henry Woodbury

In Tim’s last post on Remodeling Dynamic Diagrams he mentioned our decision to use web fonts. By maintaining font files on our server and referencing them via @font-face calls in our CSS files, we can bring to our web presence the Meta typeface we have long used in our diagrams, presentations, print collateral and Flash animations.

This demo page shows the Meta Web version we have purchased for the site redesign. Internally we have tested it on Internet Explorer 6, 7, and 8, and current versions of Firefox, Safari, and Google Chrome (such incremental browser testing is part of our process). It also works on the iPhone’s Safari browser.
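The mechanics look something like this; the file names, paths, and fallback stack below are illustrative, not our actual setup:

```css
/* Hypothetical paths -- the pattern, not our production files. */
@font-face {
  font-family: "Meta Web";
  src: url("/fonts/meta-web.eot");                    /* IE 6-8 */
  src: url("/fonts/meta-web.woff") format("woff"),    /* modern browsers */
       url("/fonts/meta-web.ttf") format("truetype"); /* older Safari, mobile */
}

body {
  font-family: "Meta Web", Arial, Helvetica, sans-serif;
}
```

The separate EOT declaration is what lets the same stylesheet serve the Internet Explorer versions we tested alongside Firefox, Safari, and Chrome.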

If the fonts on the demo page don’t resemble the image below on your browser, let us know!

A sample of Meta

UPDATE (December 9, 2010): As Andy mentions in the comments, the lower-case y in Meta Web Medium renders with a flaw. This appears on all Windows-based browsers. We’ve reprocessed the fonts and uploaded a new demo.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (2) | Filed under: Dynamic Diagrams News, Implementation, Technology, Typography, Web Interface Design

November 30, 2010, 10:01 am

The Very Small, in Added Color

By Henry Woodbury

The scanning electron microscope (SEM) does not produce images in color. What it does produce are images of almost crystalline focus. In this gallery of pollen grains by scientist Martin Oeggerli the detail is original; the color is added:

The clarity of the image derives from the technology, wherein “the electron beam is shifted little by little over a rectangular area. Thereby, the area is literally ‘scanned’ from one pixel to the next.” Analysis of secondary electron emissions allows scientists to map the specimen’s surface:

Unlike pictures captured with a camera, SEM scans are based on particle emission rather than light – they don’t show colors and brightness depends from the characteristics of the sample surface: while dark areas mark low secondary electron emission, bright areas are the result of high secondary electron emission. Thus, an SEM scan could be seen as a topographic image with very close resemblance to a black-and-white photograph.

Oeggerli adds the color later. Here, he explains his technique:

Most importantly, you need to understand how nature works to create authentic effects. My images need a color-costume, which combines natural perfection with imperfection, to mimic the often very subtle individual variations provided by the raw material for natural selection.

But nature doesn’t exactly work the way Oeggerli records. His “nature”, like that of Dutch pronkstilleven or Pixar movies, is brighter and more chromatic than reality.

The images are really precise — but not really real.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Art, Color, Technology

October 14, 2010, 11:44 am

Don’t Tweet the Scoop

By Henry Woodbury

I’m going to link you to a Bill Simmons football column at ESPN. Here we go.

If you’ve never encountered the Bill-Simmons-stream-of-consciousness style of sports writing before, you might wonder where I’m going. Ostensibly the column is about how Simmons accidentally tweeted a trade rumor involving wide receiver Randy Moss. And it is. But it also contains a host of interesting observations about how Twitter affects reporters, how a media company like ESPN responds, and the consequences of that interaction.

How does Twitter affect reporters?

Twitter, which exacerbates the demands of immediacy, blurs the line between reporting and postulating, and forces writers to chase too many bum steers.

How does ESPN handle that fact?

We have a rule at ESPN that all breaking news must be filtered through our news desk (not tweeted). That’s why our reporters (Schefter, Stein, Bucher, whoever) tweet things like, “JUST FILED TO ESPN: Timberwolves sign Frederic Weis to $35 million deal.” Even if I wanted to tweet something like the Moss scoop, technically, I couldn’t do it without flagrantly violating company rules.

What are the consequences?

In the Twitter era, we see writers repeatedly toss out nuggets of information without taking full ownership. It’s my least favorite thing about Twitter (because it’s wishy-washy) and one of my favorite things about Twitter (because nonstop conjecture is so much fun for sports fans)…. Call it “pseudo-reporting”: telling your audience that you think something happened or that you heard something happened, and somehow that sentiment becomes actual news.

The other thing Simmons points out: Don’t direct message and tweet at the same time.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Current Events, Social Media, Sports, Technology

October 12, 2010, 10:56 am

Gregor Hohpe’s Diagram Manifesto

By Henry Woodbury

Why a diagram? We know how to communicate the value of diagrams to clients already attuned to visual thinking. The challenge is to reach those for whom the idea is unfamiliar. One of the best arguments I’ve encountered comes not from a designer, but from a software architect. In his manifesto on Diagram Driven Design, Gregor Hohpe speaks to his immediate audience (software architects) and their focus on rigorous technical documentation, but makes the broad case as well:

Drawing a picture forces us to clean up our thinking, lest we run out of paper. Do we depict the data flow, the class structure, or implementation detail? While a picture does not automagically make this problem go away, it puts it in your face much more than a meandering chain of prose, which from afar may not look all that bad. A well-known German proverb proclaims that “Papier ist geduldig” (paper is patient), meaning paper is unlikely to object to what garbage you scribble on it. Diagrams tend to be a little less patient, and expose a wild mix of metaphors and abstractions more easily.

Hohpe doesn’t forgive poorly designed diagrams (“bad diagrams are not a useful design technique”), but warns against blaming the messenger:

If you are unable to draw a good diagram (and it isn’t due to lack of skill), it may just be because your actual system structure is nothing worth showing to anyone.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Diagrams, Information Design, Technology

August 30, 2010, 10:03 am

Teaching Many Many People in a Leveraged Way

By Henry Woodbury

My title is Bill Gates talking. He is talking about Sal Khan, Harvard MBA, former hedge fund manager, and now the one-man show behind online learning site Khan Academy. Here is Gates at more length:

There’s a web site that I’ve been using with my kids recently called Khan Academy, K H A N, just one guy doing some unbelievable 15 minute tutorials…. He was a hedge fund guy making lots of money and he quit to do these little web videos and so we’ve moved I’d say about 160 IQ points from the hedge fund category to the teaching-many-many-people-in-a-leveraged-way category and so that was a good day — the day his wife let him quit his job.

Khan’s YouTube videos feature his voice and an electronic blackboard that presents bitmap images and (mostly) Khan’s notes and annotations. Here’s an example, Basic Multiplication:

This approach is extremely efficient and extremely effective. Speaker and blackboard (or whiteboard). That’s all.

When Gates talks about “leverage” this is part of what he means. The pedagogical simplicity of Khan’s approach makes his materials very accessible and allows him to develop his lectures quickly. Their succinctness allows him to tailor each one to a specific level of ability. The other aspect of “leverage” is technological. By using the common YouTube video format, Khan can reach anyone and everyone with a decent Internet connection. There are no additional distribution barriers. Makers of educational software should take note.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Information Design, Technology

August 10, 2010, 11:54 am

The Dugout Canoe Description of My Job

By Henry Woodbury

The Edge Annual Question for 2010 goes out to a bevy of deep thinkers:

How is the Internet Changing the Way You Think?

Is it? That’s up to you. Editor and Publisher John Brockman anticipates the point:

We spent a lot of time going back and forth on “YOU” vs. “WE” and came to the conclusion to go with “YOU”, the reason being that Edge is a conversation. “WE” responses tend to come across like expert papers, public pronouncements, or talks delivered from stage.

Science historian George Dyson offers an evocative response:

In the North Pacific ocean, there were two approaches to boatbuilding. The Aleuts (and their kayak-building relatives) lived on barren, treeless islands and built their vessels by piecing together skeletal frameworks from fragments of beach-combed wood. The Tlingit (and their dugout canoe-building relatives) built their vessels by selecting entire trees out of the rainforest and removing wood until there was nothing left but a canoe.

The Aleut and the Tlingit achieved similar results — maximum boat / minimum material — by opposite means. The flood of information unleashed by the Internet has produced a similar cultural split. We used to be kayak builders, collecting all available fragments of information to assemble the framework that kept us afloat. Now, we have to learn to become dugout-canoe builders, discarding unnecessary information to reveal the shape of knowledge hidden within.

Give us a tree and we’ll carve your canoe. That is what Tim Roy is talking about.

(via Andrew Gilmartin who linked to Dyson’s quote on Facebook. Andrew blogs here.)

Update: I rewrote my lede, up to the Dyson quote, to add context and incorporate Brockman’s “you” vs. “we” statement.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Books and Articles, Information Architecture, Information Design, Technology

July 27, 2010, 3:05 pm

The Asynchronous Barista

By Henry Woodbury

Say you’re a software engineer trying to explain asynchronous processing to people with a general interest in software. You might use Starbucks as an example. Over to you, Gregor Hohpe:

Starbucks, like most other businesses is primarily interested in maximizing throughput of orders. More orders equals more revenue. As a result they use asynchronous processing. When you place your order the cashier marks a coffee cup with your order and places it into the queue. The queue is quite literally a queue of coffee cups lined up on top of the espresso machine. This queue decouples cashier and barista and allows the cashier to keep taking orders even if the barista is backed up for a moment. It allows them to deploy multiple baristas in a Competing Consumer scenario if the store gets busy.

This is a quirky article that introduces a number of programming concepts in an accessible and entertaining way. Hohpe throws in the occasional deep dive — as with the “Competing Consumer” link in the quote — but even there the analogy helps you guess where such a link might take you.

Analogy speaks to shared experience. It provides a way — one way — to turn abstract concepts into visual explanation. I can almost see the coffee cups lined up in front of me.
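The cashier-and-queue arrangement Hohpe describes maps directly onto the classic producer-consumer pattern. A minimal sketch in Python (the order names and the sentinel convention are mine, not from his article):

```python
import queue
import threading

# The queue of marked coffee cups lined up on top of the espresso machine.
orders = queue.Queue()

def cashier(customer_orders):
    """Producer: keeps taking orders regardless of how backed up the barista is."""
    for order in customer_orders:
        orders.put(order)   # mark the cup and set it in the queue
    orders.put(None)        # sentinel: the shop is closing

def barista(done):
    """Consumer: drains the queue at its own pace."""
    while True:
        order = orders.get()
        if order is None:
            orders.put(None)  # pass the sentinel along for any other baristas
            break
        done.append(f"made {order}")

done = []
worker = threading.Thread(target=barista, args=(done,))
worker.start()
cashier(["latte", "espresso", "mocha"])
worker.join()
print(done)  # → ['made latte', 'made espresso', 'made mocha']
```

Because the queue decouples the two roles, adding a second barista is just starting another consumer thread on the same queue — Hohpe’s Competing Consumer scenario.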

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Creativity, Implementation, Language, Technology

July 19, 2010, 11:11 am

Review, Reuse, Inflate

By Henry Woodbury

One of our favorite design interns, Jonathan O’Conner, is on to bigger things. Much bigger.

Billboard Balloon

Last summer Jonathan helped us out with his 3D modeling skills on a 21-inch monitor. This summer, with a team of fellow industrial designers, he is figuring out how to reuse giant plastic billboard sheets.

Check out their blog for a look at their creative process (the multi-colored post-it notes look familiar), brainstorms, technical investigations, and prototypes.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Creativity, Dynamic Diagrams News, Prototyping, Technology

June 8, 2010, 10:05 am

The Medium is Not the News

By Henry Woodbury

Who’s going to save the news? According to James Fallows, Google is. Fallows makes two key points. First, explicitly, that Google is serious about improving the economics of news gathering. Second, implicitly, Google had better be doing it because traditional news publishers are clueless:

…people inside the press still wage bitter, first-principles debates about whether, in theory, customers will ever be willing to pay for online news, and therefore whether “paywalls” for online news can ever succeed. But at Google, I could hardly interest anyone in the question. The reaction was: Of course people will end up paying in some form—why even talk about it? The important questions involved the details of how they would pay, and for what kind of news.

The inefficiency of traditional news organizations is far more profound than the costs of grinding up trees into pulp and running “them through enormously expensive machinery” to hand-deliver a product that is almost immediately out-of-date. That lets television news off the hook. The inefficiency seen by Krishna Bharat, director of Google News, is the redundancy of thousands of publications writing about the same events using the same, predictable, story lines. On this score television’s focus on the blindingly obvious is, in my opinion, far more offensive than print.

The message to traditional news organizations isn’t just “go online.” It’s “start being distinct.” In a global, decentralized, electronic medium, boilerplate reporting deserves to be bested by smarter, deeper, more eclectic aggregations.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Business, Technology

May 27, 2010, 11:15 am

We Promise to Use Our Powers Wisely

By Henry Woodbury

From the Harvard Business Review comes a cautionary tale of bias and visualization. Visual information can make people overly confident in predicting outcomes. In the study described in the article, viewers who watched a computer animation of driver error “were more likely to say they could see a serious accident coming than those who actually saw it occur and then were asked if they had seen it coming.”

The way human brains process the sight of movement appears to be one reason for this outcome. The visceral reading of trajectory events — such as an animation of moving cars — creates an anticipatory judgment that is highly persuasive to higher brain functions.

Also important is the fact that every visualization incorporates a point of view, one that is all the more convincing for its visual immediacy:

The information can be conveyed with certain emphases, shown from certain angles, slowed down, or enlarged. (In a sense, all this is true of text as well, but with subtler effects.) Animations can whitewash the guesswork and assumptions that go into interpreting reconstructions. By creating a picture of one possibility, they make others seem less likely, even if they’re not. (my emphasis)

In essence, this is what we do on purpose. Whether for marketing, analysis, or scientific reportage, we quite explicitly present the story of the strongest possibility (which may well be that there are multiple possibilities). We do it ethically; we rely upon validated data to tell a story and honor the integrity of that data as we work. The Harvard study cautions us not to let our visual tools — especially our analytical tools — persuade us too easily of what the real story is.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Books and Articles, Business, Cognitive Bias, Marketing, Technology, Visual Explanation

May 25, 2010, 11:38 am

Saint Ginés Wins MUSE Award

By Henry Woodbury

Dynamic Diagrams and the J. Paul Getty Museum have won a 2010 Silver MUSE award for the Getty-produced video Making a Spanish Polychrome Sculpture. Dynamic Diagrams created the 3D animation that opens the video and shows how the seventeenth-century sculpture was assembled. The Getty integrated this animation with live action footage that shows carving and surface treatment techniques. The effectiveness of this combination was noted by many of the judges:

This is a fine example of technology effectively used to clearly demonstrate an intricate artistic process. It’s the combination of the digital imagery with the live footage of an artist that makes this video exciting and fascinating for all kinds of audiences.

The MUSE awards are presented annually by the American Association of Museums’ Media and Technology committee. They recognize “institutions or independent producers which use digital media to enhance the museum experience and engage new audiences.” We are proud to work with The Getty on projects of such scope and distinction.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: 3D Modeling, Art, Current Events, Dynamic Diagrams News, Technology, Visual Explanation

May 25, 2010, 11:16 am

Creative Destruction

By Henry Woodbury

Wired runs a very interesting piece on Pixar and how it, among all Hollywood studios, manages to produce hit after hit. One factor in their success is the stability of their team. Another is their ability to shred through ideas:

Every few months, the director of each Pixar film meets with the brain trust, a group of senior creative staff. The purpose of the meeting is to offer comments on the work in progress, and that can lead to some major revisions. “It’s important that nobody gets mad at you for screwing up,” says Lee Unkrich, director of Toy Story 3. “We know screwups are an essential part of making something good. That’s why our goal is to screw up as fast as possible.”

I really like this framework for the creative process. Creative ideas — in design as well as film making — build from iteration, from critical review and rework. The time to run through this process of creative destruction is the concept stage — “to screw up as fast as possible.” Once you move into production, rethinking costs much more time and money. The importance of concept development is something we always try to communicate to our clients.

But I would add that the ability to respond to criticism starts with the stability and talent of the team. General Creighton W. Abrams put it this way:

The only way to get anywhere with kicking ass is with an outfit that is already good.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: 3D Modeling, Business, Creativity, Technology

May 11, 2010, 10:46 am

Using Twitter to Keep Up With H1N1

By Lisa Agustin

Whenever a new disease emerges, web sites for the World Health Organization (WHO) and Centers for Disease Control and Prevention (CDC) become the go-to for the latest on epidemiology and the global implications of a given threat. But “informal surveillance sources” like Internet news sites and direct reports from individuals are becoming increasingly important for identifying early outbreaks of diseases, according to a report in the latest issue of The New England Journal of Medicine.

Such is the case with HealthMap (shown above), an interactive disease-tracker created as part of the Journal’s H1N1 Influenza Center.  So far, the site has collected 87,000 reports (both formal and informal) to monitor the spread of the H1N1 virus.  The wealth of data collected through HealthMap enabled researchers to follow the pandemic’s spread both geographically and across a given timeframe, while enabling new areas of investigation.  For example, the report’s authors compared a country’s lag time between identifying suspected and confirmed cases with its 2007 national gross domestic product.

(A side note: Crowdsourcing for the greater good isn’t new; the Ushahidi platform was initially developed to map both formal and informal reports of violence in Kenya after the post-election fallout at the beginning of 2008, and has since been used to monitor federal elections in Mexico, the spread of H1N1, and relief activity in post-earthquake Haiti.)

There are both pros and cons to using informal sources.  In the case of emerging outbreaks, the advantages relate to the speed with which news reports are broadcast (unusual outbreaks receive intense coverage), and the ability of individual health professionals to pick up weak signals of disease transmission across borders.   However, the difficulty in confirming diagnosis “presents challenges for validation, filtering, and public health interpretation.”  Validating individual sources of information will become a bigger issue with the next version of HealthMap.  While the current version uses individual reports from “reliable” sources (e.g., International Society for Disease Surveillance), work is underway to draw from blogs, Twitter, and Facebook.   As the ability to post and share reports from the ground becomes easier, verification processes will need to be more rigorous without compromising the delivery of timely information.  The maps that solve this challenge will become indispensable.

Comments (0) | Filed under: Current Events, Maps, Technology

May 7, 2010, 9:02 am

Murderer or Mohel? With Kindle, It Depends

By Lisa Agustin

Steven Levy tells how his download of Stephen Hunter’s latest novel came with a typo on the title page. While fixing e-book typos seems like the right thing to do, don’t assume it will happen automatically, at least not with Amazon’s Kindle. The company learned its lesson with last summer’s secret deletion of George Orwell’s 1984.

Comments (0) | Filed under: Technology

May 4, 2010, 10:17 am

Jobs Takes Flash to the Mat

By Henry Woodbury

Get your ringside seats for the Apple vs. Adobe fight, right here.

Apple CEO Steve Jobs tries the headscissors takedown:

Besides the fact that Flash is closed and proprietary, has major technical drawbacks, and doesn’t support touch based devices, there is an even more important reason we do not allow Flash on iPhones, iPods and iPads… We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform.

Adobe CEO Shantanu Narayen bounces back with a half nelson leg sweep (video here):

The technology problems that Mr. Jobs mentions in his essay are “really a smokescreen,” Mr. Narayen says. He says more than 100 applications that used Adobe’s software were accepted in the App Store. “When you resort to licensing language” to restrict this sort of development, he says, it has “nothing to do with technology.”

Meanwhile, Adobe plans to demo Flash for Google’s Android OS this month — and give Android phones to all of its employees.

By the way, here’s Rey Mysterio performing the headscissors move:

Comments (0) | Filed under: Business, Sports, Technology, User Experience, Web Interface Design

May 3, 2010, 1:28 pm

Social Media: The Means to the Ends

By Henry Woodbury

I’m no Jeremiah, but this critique of Facebook’s approach to privacy is quite unsettling:

When you think about Facebook, the market has very specific incentives: Encourage people to be public, increase ad revenue.

The speaker is Microsoft’s Danah Boyd. She doesn’t get into horror stories. She just nails the paradigm.

Comments (0) | Filed under: Business, Marketing, Technology, User Experience

April 28, 2010, 11:21 am

The Examined Life, by the Numbers

By Lisa Agustin

Gary Wolf offers an in-depth look at how number-crunching is no longer confined to the workplace or the realm of geeky habits, but has become mainstream, thanks to technology (think automated sensors and video) and online tools created specifically for the personal tracking of just about everything, including health, mood, productivity, and location.  Why all the self-interest?  According to Wolf, for some it’s a matter of answering a question, measuring changes, or reaching a goal (that last ten pounds!), but it may also be about reclaiming some piece of ourselves from the “cloud”–that vague, global network to which we entrust what is personal (photos, addresses, random thoughts, etc.):

One of the reasons that self-tracking is spreading widely beyond the technical culture that gave birth to it is that we all have at least an inkling of what’s going on out there in the cloud. Our search history, friend networks and status updates allow us to be analyzed by machines in ways we can’t always anticipate or control. It’s natural that we would want to reclaim some of this power: to look outward to the cloud, as well as inward toward the psyche, in our quest to figure ourselves out.

Read the full story to see links to notable tracking projects – or feel free to start your own.

Comments (0) | Filed under: Charts and Graphs, Current Events, Technology

March 17, 2010, 12:06 pm

Your Data is my Distraction

By Henry Woodbury

I recently ran across a still-fresh 2009 Nieman Journalism Lab post on “ambient visual data” — a good term for the practice of graphically incorporating metadata into a content-delivery interface. The most common idea seems to be adding subtle bar charts beneath or around links to illustrate various kinds of popularity.

To explain the importance of the concept, author Haley Sweetland Edwards turns to designer Eliazar Parra Cardenas, creator of Backbars, “a GreaseMonkey script to turn the headlines and comments of social link-sites into ambient bar charts (of votes/diggs/views/users…).” Cardenas explains:

“The whole point is to make textual information easier to absorb… [A well-designed site] should maximize the information that a user can understand — that you can just glance at, or take note of -– without actively thinking….

“We’ve already tried the obvious in print: putting as much text as possible in one glance (hence broadsheets), mixing in images, headlines, columns. I think the next step will be digital developments like backbars, favicons, sparklines, word coloring, spacings.”

Count me as extremely skeptical. The sites that Edwards and Cardenas hold up as examples seem both cluttered and shallow — a vote-stuffing contest for “news of the weird.”

I’m old school that way. What drives traffic are the editorial and authorial inputs that Cardenas overlooks in his list of the obvious. Not headlines, but well-written headlines. Not images, but compelling images. Not backbars, favicons, sparklines, word coloring, and spacings, but good ledes.

The New York Times isn’t making money online. But it isn’t lacking for traffic.

Comments (0) | Filed under: Charts and Graphs, Information Design, Technology, Usability, User Experience, Visual Explanation, Web Interface Design

February 17, 2010, 3:33 pm

Next Steps for Augmented-Reality Maps

By Lisa Agustin

Fresh from the TED2010 conference: an amazing talk by Blaise Aguera y Arcas, an architect at Microsoft Live Labs, in which he demonstrates how Photosynth software is transforming cartography into a user experience: first by stitching static photos together to create zoomable, navigable spaces, then with superimposed video for a swear-you-are-there experience.  Not to be missed.

Comments (2) | Filed under: Maps, Technology, User Experience

February 16, 2010, 9:47 am

Old Search Engines Never Die…

By Henry Woodbury

Jacob Gube at design site Six Revisions uses the Wayback Machine to create a “then and now” piece on search engines. It’s worth a look, just for the screenshots. Chrome and content tell a story. For most of the sites still around, lots of chrome and lots of links have faded away, replaced by minimal chrome and minimal links.

HotBot and WebCrawler are still around. They look like Google.

Comments (0) | Filed under: Technology, Web Interface Design

January 30, 2010, 9:23 pm

Real-Time Bus Location

By Henry Woodbury

LMA Shuttle Map

Using GPS and Google Maps, MASCO — the Medical Academic and Scientific Community Organization, Inc., of Boston, Massachusetts — offers this elegant real-time bus map for its shuttle service. The map shows buses in service, their location, and their direction of travel.

For folks waiting at the bus stop, the service is accessible via web-enabled phone at http://shuttles.masco.org/m.

Comments (0) | Filed under: Information Design, Maps, Technology, Visual Explanation, Web Interface Design

January 29, 2010, 4:21 pm

Why Hospital Data is Growing: Genetic Testing, EMRs, and the “-Ologies”

By Lisa Agustin

As a UX consultant with clients in the healthcare arena, I’m not unfamiliar with the kinds of data needed by employees on a hospital intranet, or by current and prospective patients on a public site.   But I’m usually more concerned about the best way for people to find this information, rather than where it’s coming from and how it gets managed.

So I was especially interested in the Global Information Industry Center at UCSD’s eye-opening report on data growth in hospitals.  (If the group’s name sounds familiar, these are the same folks who recently concluded that Americans consumed 3.6 zettabytes of information in 2008).  Eleven healthcare IT executives (nine from major research-focused medical centers and two from medical insurance organizations) were asked to estimate their future rates of data growth and identify the reasons behind them.

Author Jack Robert identifies the following as the six main drivers of growth:

  • Image Technology. The number of images generated by each “ology” (radiology, cardiology, etc.) is growing both locally and centrally.  The ability to create thinner and denser slices of organs makes for huge images; a single slice can be as big as 1 gigabyte.
  • Web 2.0 Applications. Web-based software that enables a medical team to provide care collaboratively is a growing trend.  An example of this: medication list management, where every care team member has the ability to list, delete, or annotate a patient’s list of medicines.
  • Clinical Decision Support for Physicians. Physicians are treating more patients, and thus have less time to spend with each one.  As a result, they expect instant, anytime access to data and specialty applications to help in their decision making process. This means a heavy reliance on electronic medical records for patients (see next).
  • The Electronic Medical Record (EMR). The number of hospitals and physician offices using this is currently small, but wider adoption is expected in the coming years for two reasons: the realization that the current healthcare system is too expensive, and the federal stimulus incentives that will be given to hospitals and clinicians who demonstrate “meaningful use” of EMRs by 2011.  The what-if scenario of an EMR for every person in the U.S. is staggering in terms of data size.
  • Health Networks. These community initiatives will connect physicians, hospitals, health centers, labs, and patients electronically, building upon the capabilities of EMRs by collecting information from multiple medical sites, processing it, and providing it to physician offices.
  • Genetic Testing.  Genetic testing for high-risk patients will serve as another potentially huge source of  new data, given that “there are now 2500 diseases for which there is a genetic test.”

The main problem with this growth is how best to manage it all.  Much of the data is decentralized (especially in the case of research data), difficult to back up due to the increasing size of databases, and replicated by default (e.g., the processing of a blood workup means information is duplicated and stored in multiple places).  But while there is no easy answer on how to address the exponential growth of medical data, the one hope is that the end result is improved and more efficient care for all patients.

Comments (0) | Filed under: Business, Technology

January 20, 2010, 6:32 pm

What Price Content?

By Henry Woodbury

The New York Times has announced that it will initiate a pay-for-access model starting in early 2011. The general framework is to give visitors a limited number of free articles each month before invoking a flat fee for unlimited access. Print subscribers will also have unlimited access.

While The Wall Street Journal and The Financial Times both charge for access, the Times is significantly more popular:

NYTimes.com is by far the most popular newspaper site in the country, with more than 17 million readers a month in the United States, according to Nielsen Online, and analysts say it is easily the leader in advertising revenue, as well.

While analysts point out that this gives the Times ample resources to adjust their scheme if they start losing readership, the Times‘ revenue problems reflect a broader reality. Yet if the newspaper does succeed in setting up a successful micropayment model, other media sites will follow.

Looking forward, I can easily imagine the Times using the leverage of its popularity and reputation to cut deals with major ISPs. People already pay for different packages of cable television channels. The same broadband providers could apply the same business model to charge for a package of subscription web sites. If it works for The New York Times, ESPN will follow.

The low-hanging fruit is the smart phone market. For a few nickels a month, Verizon or AT&T can provide the subscription tied into the app that accesses it.

UPDATE: The Times offers a Q&A. Most interesting is their determination to accept incoming links from other web sites:

Q. What about posting articles to Facebook and other social media? Would friends without a subscription then not be able to view an article that I think is relevant for them? — Julie, Pinole CA

A. Yes, they could continue to view articles. If you are coming to NYTimes.com from another Web site and it brings you to our site to view an article, you will have access to that article and it will not count toward your allotment of free ones.
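The mechanics of the meter are simple enough to sketch. The allotment below is hypothetical (the Times hasn’t announced numbers), but the logic, including the incoming-link exemption described in the Q&A, might look something like this:

```python
# Hypothetical sketch of a metered paywall; the 20-article limit and
# function names are illustrative, not the Times's announced policy.
FREE_ARTICLES_PER_MONTH = 20

def may_read(views_this_month: int, is_subscriber: bool,
             referred_externally: bool) -> bool:
    """Decide whether to show the article or the paywall."""
    if is_subscriber:
        return True   # print and paying subscribers get unlimited access
    if referred_externally:
        return True   # incoming links don't count toward the allotment
    return views_this_month < FREE_ARTICLES_PER_MONTH
```

The referrer exemption is the interesting design choice: it preserves the Times’ presence in search results and social media while still charging heavy direct readers.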

Comments (0) | Filed under: Business, Technology

January 12, 2010, 3:42 pm

Death by Aggregation

By Henry Woodbury

In an interview for his book You Are Not a Gadget (scroll down), technologist Jaron Lanier looks around and sees Internet dystopia:

Web 2.0 collectivism has killed the individual voice. It is increasingly disheartening to write about any topic in depth these days, because people will only read what the first link from a search engine directs them to, and that will typically be the collective expression of the Wikipedia. Or, if the issue is contentious, people will congregate into partisan online bubbles in which their views are reinforced….

Web 2.0 adherents might respond to these objections by claiming that I have confused individual expression with intellectual achievement. This is where we find our greatest point of disagreement. I am amazed by the power of the collective to enthrall people to the point of blindness. Collectivists adore a computer operating system called LINUX, for instance, but it is really only one example of a descendant of a 1970s technology called UNIX. If it weren’t produced by a collective, there would be nothing remarkable about it at all.

Meanwhile, the truly remarkable designs that couldn’t have existed 30 years ago, like the iPhone, all come out of “closed” shops where individuals create something and polish it before it is released to the public. Collectivists confuse ideology with achievement.

At The New York Times, John Tierney takes Lanier’s critique and runs with it — in a different direction. Where Lanier seeks to change the software technologies that undermine individual content creators, Tierney questions the net culture of intellectual property theft:

In theory, public officials could deter piracy by stiffening the penalties, but they’re aware of another crucial distinction between online piracy and house burglary: There are a lot more homeowners than burglars, but there are a lot more consumers of digital content than producers of it.

UPDATE: Glenn Harlan Reynolds of Instapundit reviews You Are Not a Gadget in today’s Wall Street Journal. While agreeing in part with Lanier’s critique of the failure of the aggregate model to reward creative individuals, he points out that social media applications are popular because they are fun – and not just for geeks:

Mr. Lanier is nostalgic for that era [the 1990s] and its homemade Web pages, the personalized outposts that have largely been replaced by the more standardized formats of Facebook and MySpace. The aesthetics of these newer options might be less than refined, but tens of millions of people are able to express themselves in ways that were unimaginable even a decade ago. And let’s face it: Those personal Web pages of the 1990s are hardly worth reviving. It’ll be fine with me if I never see another blinking banner towed across the screen by a clip-art biplane.

Comments (0) | Filed under: Books and Articles, Technology

December 22, 2009, 11:18 am

Mashing Up Suggestions

By Henry Woodbury

In The New York Times, IBM scientists Fernanda Viégas and Martin Wattenberg have some fun with search engine auto-suggestions. Type in even a single word and you receive “a list of suggested, presumably popular completions.” (In courtroom dramas, this is called leading the witness.)

The fun is seeing how different investigations overlap. Here’s one example:

Popular completions for 'are diets' and 'is chocolate'

Comments (0) | Filed under: Charts and Graphs, Diagrams, Information Design, Language, Technology

November 6, 2009, 2:54 pm

Cellphone as Paintbrush

By Lisa Agustin

cell-tango

Cell Tango is an evolving digital installation that dynamically organizes images transmitted by cellphone based on cellphones’ area codes, carriers, time and date of transmission, and participants’ contributed categories and descriptive tags.  Created by artists George Legrady and Angus Forbes, the exhibit is not so much an artist’s vision as it is an audience vision–one that suggests that everyday images taken with your cellphone camera could, in fact, mean something more.  Legrady suggests:

Will cellphone technology transform how we create/use images produced “on the fly”? In what ways do online visual databanks such as Flickr recontextualize the images we create and share? Can such online images be used creatively as components in artistic works that explore the construction of visual narratives through the juxtaposition of sequenced images? What may be relevant implementation of voice annotation to add metadata to images?

Cell Tango will be on display at Wellesley College in Wellesley, MA, through December 13.

See also:

George Legrady’s web site

Review of Cell Tango in The Boston Globe

Comments (0) | Filed under: Art, Photography, Technology

November 6, 2009, 11:58 am

The Virtue of Forgetting

By Henry Woodbury

Viktor Mayer-Schönberger, author of the newly published Delete: The Virtue of Forgetting in the Digital Age, points out that for humans, forgetting is an important way of organizing and prioritizing information. Digital storage, however, has made forgetting almost impossible — yet what is stored is devoid of context and may not apply to the individual of the present.

In an interview with Nora Young on the CBC radio show Spark 90, Mayer-Schönberger elaborates on the cognitive issues of memory and what this means for Google, social networking web sites, and other digital spaces:

Now today there are few human beings who, for biological reasons, cannot forget. What sounds like a blessing, they certainly do remember where they parked their car in a shopping mall. It turns out that they have tremendous difficulties in acting in time, in deciding in time, because they remember all their bad, failed decisions in the past, and therefore hesitate to make a decision in the present.

Because they’re forever tethered to the past, they can’t act, they can’t stay put in the present, and they can’t imagine the future. I fear that with digital comprehensive memory, we might resemble these human beings, and we might lose our ability to act in time….

[I]f you ask young people who share a lot of information on social networking sites, and YouTube, Flickr, and so forth, they still are concerned about their informational privacy….  The problem is in a lot of circumstances, young and older people don’t realize when they share information on the Internet that this information not only is shared with potentially everybody, but that this will also remain accessible potentially for a very long period of time.

Once we begin to become aware of these implications, once we begin to acknowledge and understand that digital memory is comprehensive and enduring, we may become extremely more cautious in what we do online.

Mayer-Schönberger proposes that online information be associated with an expiry date, an idea that is being adopted by some social networking sites:

What I want is a world that is teeming with information sharing and information exchange, of experiences being shared among people, but also a world in which we are aware that information is not endless, but has a life span, just like the yogurt in our refrigerator might expire over time.
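As a data model, the expiry-date idea is almost trivial to sketch. The field names and the default life span below are illustrative, not from the book:

```python
from datetime import datetime, timedelta

# A minimal sketch of information with a life span: each shared item
# carries an expiry date and simply stops being served once it lapses.
def share(content, lifespan_days=90, now=None):
    now = now or datetime.now()
    return {"content": content, "expires": now + timedelta(days=lifespan_days)}

def still_visible(item, now=None):
    return (now or datetime.now()) < item["expires"]
```

The hard part, of course, isn’t the code but the norms: every cache, mirror, and screenshot would have to honor the expiry date for the yogurt to actually go bad.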

Comments (0) | Filed under: Books and Articles, Technology

November 5, 2009, 12:25 pm

Follow the Necktie

By Henry Woodbury

It is always interesting to me to see how designers using different methods tackle some of the same visualization challenges that we do. How do you represent an abstract idea like “mobility” or “business”?

Here is Virtualization in Plain English, a marketing video for Intel made by Common Craft.

Still from Virtualization in Plain English

Keep track of that necktie.

Comments (0) | Filed under: Comics, Illustration, Information Design, Technology, Visual Explanation

October 16, 2009, 10:10 am

Infographics for Web Workers

By Lisa Agustin

xkcd-map-of-online-communities

Web Design Ledger offers a collection of infographics of special interest to web workers, including process flows, data driven visualizations, and musings (like xkcd.com’s Map of Online Communities, above).  Enjoy.

Comments (0) | Filed under: Art, Charts and Graphs, Maps, Technology, Visual Explanation

October 15, 2009, 8:23 am

It’s Mysterious in English, Too

By Henry Woodbury

The Wall Street Journal reports that the French are stymied in their attempt to come up with the proper French term for “cloud computing”:

To translate the English term for computing resources that can be accessed on demand on the Internet, a group of French experts had spent 18 months coming up with “informatique en nuage,” which literally means “computing in cloud.”

France’s General Commission of Terminology and Neology — a 17-member group of professors, linguists, scientists and a former ambassador — was gathered in a building overlooking the Louvre to approve the term.

“What? This means nothing to me. I put a ‘cloud’ of milk in my tea!” exclaimed Jean Saint-Geours, a French writer and member of the Terminology Commission.

“Send it back and start again,” ordered Etienne Guyon, a physics professor on the commission.

And so they have.

My brother reports that the Japanese have no such compulsions. By email he writes:

Japanese borrows English terminology with such carefree abandon that at times even I wonder why they didn’t use the Japanese equivalent. Though there are so many homophones in Japanese that it can be very convenient to have words whose meanings are confined to a specific context. The English “out,” for example, is used widely in sports: an “out” in baseball, a ball that is “out” in tennis, the “out nine” (and “in nine”) of a golf course.

“Cloud computing” in Japanese is “kuroudo konpuutingu”.

Comments (0) | Filed under: Current Events, Language, Technology

September 16, 2009, 11:05 am

Mobile Accessibility

By Matt DeMeis

Not being an iPhone owner, I can’t personally comment on the ease of use of the device. Regardless, I was impressed by this video on the accessibility features of the 3GS. It’s hard for a sighted user to grasp just how well this would work for someone who is visually impaired, but to me it seems like Apple did a great job.

Comments (0) | Filed under: Technology, Usability

September 4, 2009, 1:12 pm

The Times Goes Google on Us

By Henry Woodbury

I just discovered the New York Times Developer Network.

This resource provides data from The Times to third-party developers through content-related APIs:

Our APIs (application programming interfaces) allow you to programmatically access New York Times data for use in your own applications. Our goal is to facilitate a wide range of uses, from custom link lists to complex visualizations. Why just read the news when you can hack it?

Most or all of the APIs respond to a query by returning data in XML or JSON format. Some developers have built custom search engines and topic-specific mashups around this functionality. Others are more interested in the sheer excess of the data — and how it can be visualized.

Artist Jer Thorp is one of the latter. Thorp accesses the Times Article Search API to create visualizations that compare the frequency of key words over time. The image below, for example, compares ‘sex’ and ‘scandal’ from 1981 to 2008:

NYTimes: Sex & Scandal since 1981

When you zoom in, the visualization reveals branching segments called “org facets”. Thorp writes:

[These are] organizations which were associated with the stories that were found in the keyword search. This is one of the nicest things about the NYTimes API – you can ask for and process all kinds of interesting information past the standard “how many articles?” queries.
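A keyword-count query of the kind Thorp runs can be sketched as follows. The endpoint, parameter names, and response envelope here are placeholders for illustration, not the Times’s actual interface:

```python
import json
from urllib.parse import urlencode

# Placeholder endpoint; the real Article Search API lives at a
# nytimes.com address and requires a registered key.
BASE = "https://api.example.com/articlesearch.json"

def build_query(keyword: str, begin_year: int, end_year: int, api_key: str) -> str:
    """Build a keyword search URL restricted to a date range."""
    params = {"q": keyword,
              "begin_date": f"{begin_year}0101",
              "end_date": f"{end_year}1231",
              "api-key": api_key}
    return BASE + "?" + urlencode(params)

def hit_count(raw_json: str) -> int:
    # Responses arrive as JSON; pull the total match count out of a
    # hypothetical envelope like {"response": {"meta": {"hits": N}}}.
    return json.loads(raw_json)["response"]["meta"]["hits"]
```

Run once per year per keyword and you have exactly the frequency-over-time series that drives a visualization like Thorp’s.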

Comments (0) | Filed under: Charts and Graphs, Current Events, Information Design, Technology, Visual Explanation, Web Interface Design

September 2, 2009, 2:20 pm

What’s Wrong with this Chart?

By Henry Woodbury

Federal Spending FY 2009 YTD

The chart, of Federal Spending FY 2009 YTD, is from USAspending.gov, a web site mandated by law to provide the public free, searchable information about U.S. Federal expenditures.

Seth Grimes at Intelligent Enterprise figures out the problem and its cause:

USAspending.gov produces its charts dynamically using the Google Chart API…[but] passes values to Google that are out of range. Google truncates them, just as [its] documentation explains.

Here is Grimes’ corrected chart:

Federal Spending FY 2009 YTD, corrected

Unfortunately, data misrepresentation isn’t the only problem he finds.
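The fix follows directly from Grimes’ diagnosis: the Chart API’s basic text encoding expects values in a 0–100 range, and anything larger gets clipped, so raw dollar figures must be scaled before they go into the URL (the API also accepted an explicit scaling parameter). A sketch, with the URL layout treated as illustrative:

```python
from urllib.parse import urlencode

def scale_to_percent(values):
    """Scale raw figures into the 0-100 range the text encoding expects,
    instead of passing raw values that get silently truncated."""
    top = max(values)
    return [round(100.0 * v / top, 1) for v in values]

def chart_url(values, size="400x200"):
    # Bar chart via the old Image Charts-style URL; parameter names
    # follow that API, but treat the details as illustrative.
    data = ",".join(str(v) for v in scale_to_percent(values))
    return "https://chart.googleapis.com/chart?" + urlencode(
        {"cht": "bvs", "chs": size, "chd": "t:" + data})
```

The lesson generalizes: a charting service that clips out-of-range input fails silently, so the caller, not the service, owns the sanity check.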

Comments (1) | Filed under: Charts and Graphs, Technology, Usability, Web Interface Design

August 18, 2009, 4:09 pm

“The credits sequence cost more than most films made up to that point.”

By Henry Woodbury

I’m talking about Superman (1978). Here are the opening credits:

Today, this is a student project. Here’s a version by “saucejenkins” done in After Effects for “a Digital Editing & Compositing class”:

Comments (0) | Filed under: 3D Modeling, Technology

August 17, 2009, 3:46 pm

Stop Motion Marketing

By Henry Woodbury

This is a response to a D&AD Student Award “bespoke creative brief” by Hewlett-Packard. Titled HP – invent, it was created by Matt Robinson and Tom Wrigglesworth.

I just wish it were longer.

Comments (2) | Filed under: Art, Design, Marketing, Technology

July 28, 2009, 12:16 pm

“Both stayed close to the mound where the Eagle set down, except for Armstrong’s quick jaunt over to the rim of East Crater to shoot some photos of the outfield.”

By Henry Woodbury

To provide context for the first walks on the moon by Neil Armstrong and Buzz Aldrin, NASA provides us with a map of the Sea of Tranquility superimposed over a baseball diamond. The Lunar Module is situated on the pitcher’s mound with the activity of the astronauts indicated as tan paths. The map shows a blob of extensive activity around the module and a number of longer walks by each astronaut.

Apollo 11 Traverse Map on Baseball Diamond

Created by Thomas Schwagmeier from a suggestion by Eric Jones, the map is part of the NASA Apollo 11 Image Library. To really appreciate the details (including a legible key), click through to the full size version.

What looks like the original for the overlay is Schwagmeier’s elegant rendition of the “Traverse Map” — Figure 3-16 from the Apollo 11 Preliminary Science Report. The two maps are shown side-by-side below. As with the baseball overlay, click through to the full size versions to see all the detail.

Apollo 11 Traverse Map by Thomas Schwagmeier Apollo 11 Traverse Map, Scientific Report

Comments (2) | Filed under: Current Events, Infographics, Information Design, Maps, Technology, Visual Explanation

July 20, 2009, 12:12 pm

Cloudy Predictions

By Henry Woodbury

In the New York Times, Harvard Law Professor Jonathan Zittrain voices his objections to cloud computing. Zittrain brings up obvious privacy and security concerns, but then makes the case for a more fundamental risk:

But the most difficult challenge — both to grasp and to solve — of the cloud is its effect on our freedom to innovate. The crucial legacy of the personal computer is that anyone can write code for it and give or sell that code to you — and the vendors of the PC and its operating system have no more to say about it than your phone company does about which answering machine you decide to buy. Microsoft might want you to run Word and Internet Explorer, but those had better be good products or you’ll switch with a few mouse clicks to OpenOffice or Firefox.

While Zittrain does well to call out Apple and its approach to iPhone apps in a later paragraph, he missteps here. Apple long outdid Microsoft in its corporate control over the peripherals and software that would run on its hardware. As for Microsoft, only an anti-trust case forced the giant software maker to share its application programming interfaces with third-party developers.

Given the holes in Zittrain’s alternate history, his fears about the freedom to innovate in the cloud have to convince on their own merits — and they do not. Facebook does not control the Internet, nor does the iPhone dominate the smartphone market (ever heard of the BlackBerry?). While Facebook, Amazon, or Google could turn into “a handful of gated cloud communities whose proprietors control the availability of new code,” the underlying infrastructure is out of their control in a way that was never true of the old PC world.

Comments (2) | Filed under: Business, Technology

July 3, 2009, 9:39 am

Innovation at Wimbledon

By Henry Woodbury

Britain's Andy Murray serves to Stanislas Wawrinka of Switzerland under the closed roof on Centre Court, during their match at the Wimbledon tennis championships in London, on June 29. (Kieran Doherty/Reuters)

The most visible innovation is the retractable roof over Centre Court.

But this year’s Wimbledon Championships at the All England Club is also host to several IT innovations, most dramatically a smartphone application that superimposes match data on top of the phone’s video display.

IBM, Wimbledon’s long-term IT partner, developed the “Seer Android” app for the T-Mobile G1 mobile phone:

Pointing a G1 phone at a court, for example, would tell the user the court number, details of the current and previous matches and Twitter comments from experts and players, such as Andy Murray and Roger Federer.

The championship’s first official Twitter feeds are also up and running at @Centre_Court and @Wimbledon.

Comments (1) | Filed under: Current Events, Sports, Technology

June 15, 2009, 12:17 pm

Erudition Analytics

By Henry Woodbury

There’s more to data mining than click-through rates and advertising revenues. This Zachary Seward article at the Nieman Journalism Lab (via Althouse) explains how the New York Times examines user behavior as it relates to the paper’s own style. Using a Web analytics report of the words Times readers most often look up, deputy news editor Philip Corbett sent a memo to reporters and columnists:

Our choice of words should be thoughtful and precise, and we should never talk down to readers. But how often should even a Times reader come across a word like hagiography or antediluvian or peripatetic, especially before breakfast?

Remember, too, that striking and very specific words can become wan and devalued through overuse. Consider apotheosis, which we’ve somehow managed to use 18 times so far this year. It literally means “deification, transformation into a divinity.” An extended meaning is “a glorified ideal.” But in some of our uses it seems to suggest little more than “a pretty good example.” Most recently, we’ve said critics view the Clinton health-care plan as “the apotheosis of liberal, out-of-control bureaucracy-building,” and we’ve described cut-off shorts as “that apotheosis of laissez-faire wear.”

So what do we say if someone really is transformed into a god?
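Corbett’s overuse tally is the kind of thing a few lines of script can reproduce. A minimal sketch (the watch-list words come from the post; the article snippets are invented) that counts watch-list words across a set of texts:

```python
import re
from collections import Counter

# Minimal sketch of an overuse tally like the one behind Corbett's memo.
# The watch-list words appear in the post; the article snippets are invented.
WATCH_LIST = {"apotheosis", "hagiography", "antediluvian", "peripatetic"}

articles = [
    "Critics called the plan the apotheosis of bureaucracy-building.",
    "Cut-off shorts, that apotheosis of laissez-faire wear.",
    "A peripatetic career took her from Rome to Reno.",
]

counts = Counter(
    word
    for text in articles
    for word in re.findall(r"[a-z-]+", text.lower())
    if word in WATCH_LIST
)
print(counts.most_common())  # [('apotheosis', 2), ('peripatetic', 1)]
```

Run against a year of Times copy, the same tally would flag those 18 uses of “apotheosis.”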

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Language, Technology, User Experience

May 19, 2009, 11:16 am

Twitter as Public Art

By Lisa Agustin

Screenshots of Visible Tweets animations

Check out “Visible Tweets”, a visualization of Twitter intended for public spaces or, as creator Cameron Adams puts it, “a Twitter visualizer for rock concerts.” Simply enter whose tweets you’d like to see, and choose one of three animation styles to see the tweets letter by letter, rotating as they are linked to each other, or as a tag cloud that morphs from one tweet into the next. Adams’ allusion to rock concerts stems from his assertion that Twitter is normally about the chatter that takes a back seat to the main event (but doesn’t have to):

Twitter gives a voice to an audience who for many years have played a subservient role to those who were officially there to speak. But who says they have less to say?

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Art, Information Design, Social Media, Technology

May 18, 2009, 12:21 pm

This is Not a Painting

By Henry Woodbury

The Persistence of Memory

Take a look at the Art of Science 2009 Gallery for some stunning images generated by researchers in a wide variety of scientific disciplines.

The image above is an unusual example in that it starts with an artistic representation. Researchers loaded a bitmap of the Mona Lisa into the memory of a test computer, then examined it after power interruptions of increasing lengths.
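The mechanism is easy to toy with. This sketch (my own, not the researchers’ code) treats an image as a bytearray and clears each set bit with a probability that grows with the length of the outage:

```python
import math
import random

def decay(bitmap, seconds, rate=0.05, seed=0):
    """Return a copy of `bitmap` with each set bit independently cleared
    with probability 1 - exp(-rate * seconds): the longer the power is
    out, the fewer bits survive. Rate and decay model are invented."""
    rng = random.Random(seed)
    p = 1 - math.exp(-rate * seconds)
    out = bytearray(bitmap)
    for i in range(len(out)):
        for bit in range(8):
            if out[i] >> bit & 1 and rng.random() < p:
                out[i] &= 0xFF ^ (1 << bit)  # clear this bit
    return out

image = bytearray(b"\xff" * 64)  # stand-in for the Mona Lisa bitmap
for t in (0, 5, 30):
    survived = sum(bin(b).count("1") for b in decay(image, t))
    print(t, survived)  # fewer surviving bits as the outage lengthens
```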

The title “The Persistence of Memory” is both literally descriptive of the experiment and a clever reference to Salvador Dalí’s most famous painting.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Art, Photography, Technology

May 11, 2009, 8:39 am

War Games with Firewall

By Henry Woodbury

The U.S. Defense Department graduates about 80 students from its cyberwar schools. Here is a very cool article about how they are tested:

…the young man in battle fatigues barked at his comrades: “They are flooding the e-mail server. Block it. I’ll take the heat for it.”

These are the war games at West Point, at least last month, when a team of cadets spent four days struggling around the clock to establish a computer network and keep it operating while hackers from the National Security Agency in Maryland tried to infiltrate it with methods that an enemy might use.

My grandfather served in World War I running telegraph lines from balloon observation posts. Today he would be writing code.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Technology

April 30, 2009, 12:46 pm

Happy Birthday, Dad of Info Theory

By Lisa Agustin

Per Wired, on this date in 1916, Claude Elwood Shannon, the father of information theory and the man who coined the term “bit,” was born:

Shannon’s 1938 master’s thesis, A Symbolic Analysis of Relay and Switching Circuits, used Boolean algebra to establish the theoretical basis of modern digital circuits. The paper came out of Shannon’s insight that the binary nature of Boolean logic was analogous to the ones and zeros used by digital circuits.

His paper was widely cited, laying the foundations for modern information theory. It has been called “one of the most significant master’s theses of the 20th century.” Not bad for a 22-year-old kid from a small town in Michigan.
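Shannon’s insight is easy to restate in code: switches in series behave like AND, switches in parallel like OR. A toy illustration (mine, not Shannon’s) is the two-switch staircase light, which Boolean algebra reveals to be an XOR:

```python
from itertools import product

def series(a, b):
    # Current flows through switches in series only if both are closed: AND.
    return a and b

def parallel(a, b):
    # Current flows through parallel branches if either is closed: OR.
    return a or b

def staircase_light(top, bottom):
    # Two-switch staircase wiring: the light is on when exactly one
    # switch is flipped -- (top AND NOT bottom) OR (NOT top AND bottom).
    return parallel(series(top, not bottom), series(not top, bottom))

for top, bottom in product([False, True], repeat=2):
    print(top, bottom, "->", staircase_light(top, bottom))
```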

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Technology

April 21, 2009, 8:27 am

Ban Comic Sans?

By Henry Woodbury

Comic Sans didn’t spring to life on its own from the primordial Windows ooze. Typographer Vincent Connare designed it:

…one afternoon, he opened a test version of a program called Microsoft Bob for children and new computer users. The welcome screen showed a cartoon dog named Rover speaking in a text bubble. The message appeared in the ever-so-sedate Times New Roman font.

Connare went to work on creating an appropriate comic font for Bob. Not long after, a Microsoft product manager included his creation as a standard font in Windows, and the spread of Comic Sans began. The spread of efforts opposed to it soon followed.

Connare retains a wry appreciation for his most famous work:

“If you love it, you don’t know much about typography,” Mr. Connare says. But, he adds, “if you hate it, you really don’t know much about typography, either, and you should get another hobby.”

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Comics, Technology, Typography

April 8, 2009, 11:46 am

Cloud Computing: To Manifesto or not to Manifesto

By Kim Looney

An article on Economist.com, “Clash of the Clouds,” brought to my attention the recent publication of a cloud computing manifesto.

What is cloud computing, you say? Well, according to the manifesto:

“…cloud computing is really a culmination of many technologies such as grid computing, utility computing, SOA, Web 2.0, and other technologies…” And its key characteristics are “the ability to scale and provision computing power dynamically in a cost efficient way and the ability of the consumer (end user, organization or IT staff) to make the most of that power without having to manage the underlying complexity of the technology.”

All that sounds good, and open standards also sound good for the consumer. But what about providers and associated technology businesses? There are some conspicuous absences on the list of supporters for the manifesto. I don’t know much about standards creation or technology policy-making, but I will be watching for developments on how cloud computing finds its place as a staple in computing technology — and not only so I can represent it visually for a client.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Business, Current Events, Technology

April 7, 2009, 8:52 am

The Power of Humble Documents

By Henry Woodbury

In 1968 a handful of computer scientists began trying to figure out what to do with the rudimentary network they had designed for the government. Graduate student Stephen Crocker volunteered to write up the notes on protocol. Thus was born the “Request for Comments” series that became “the formal method of publishing Internet protocol standards.”

Crocker describes how an open process invited participation and the sharing of ideas:

…we relied on a process we called “rough consensus and running code.” Everyone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard. 

Arguably, this not only made the Internet possible, but laid the foundation for the open source movement and other cooperative software and computing ventures.

For more details on that rudimentary network, one can read Michael Hauben’s “History of ARPANET: Behind the Net – the untold history of the ARPANET — or — The ‘Open’ History of the ARPANET/Internet”.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Technology

March 5, 2009, 1:52 pm

Correlation is not Causation, not Even on Facebook

By Henry Woodbury

Caltech graduate student Virgil Griffith stirs the statistical pot:

Griffith used aggregated Facebook data about the favorite bands and books among students of various colleges and plotted them against the average SAT scores at those schools, creating a tongue-in-cheek statistical look at taste and intelligence….

Griffith came up with the idea as a way to show how to take two separate sets of data that were pretty straightforward on their own – in this case, the average SAT score and the favorite books among students at various universities – and combine them to become more interesting. Griffith says, “Their unity is hilarity incarnate. This is to inspire people to think creatively about the data sets that are on the Internet.”
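Mechanically, the mashup is just a join on the school name. A sketch with invented figures (Griffith’s actual data and code are not shown here):

```python
# The mashup boils down to a join on school name.
# All schools and figures below are invented for illustration.
avg_sat = {"Caltech": 1520, "Podunk State": 990}
favorite_book = {"Caltech": "Gödel, Escher, Bach", "Podunk State": "The Secret"}

# Join the two data sets on their shared keys, in sorted order.
combined = [
    (school, avg_sat[school], favorite_book[school])
    for school in sorted(avg_sat.keys() & favorite_book.keys())
]
for school, score, book in combined:
    print(f"{school}: average SAT {score}, favorite book {book!r}")
```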

Given Griffith’s puckish sense of humor, I read “think creatively” as “be skeptical.”

His other well-publicized “be skeptical” project is WikiScanner, a tool that uses IP addresses to identify anonymous Wikipedia edits made from corporate and government domains. (In my mind the joke here is the idea that Wikipedia is trustworthy in any fashion, but that’s just me.)

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Technology, Visual Explanation

February 27, 2009, 12:11 pm

Sliding House

By Henry Woodbury

From Wallpaper* magazine.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Design, Technology

January 20, 2009, 10:06 am

Goosing the Gray Lady

By Lisa Agustin

Screenshots: the Times’ “Word Train” and “Casualties of War: Faces of the Dead” interactives

Interactive infographics and visualizations have been part of the New York Times’ online edition for some time; typical examples include the “Word Train,” an interactive mood database for collecting public opinion on Election Day, and “Casualties of War: Faces of the Dead,” a project merging photography, databases, audio, and graphics that marked the date U.S. military fatalities in Iraq reached 3,000 (both pictured above).

Now this week’s issue of New York Magazine features an article on how the Times’ Interactive Technologies Group came to be:

The proposal was to create a newsroom: a group of developers-slash-journalists, or journalists-slash-developers, who would work on long-term, medium-term, short-term journalism—everything from elections to NFL penalties to kind of the stuff you see in the Word Train.  This team would “cut across all the desks,” providing a corrective to the maddening old system, in which each innovation required months for permissions and design. The new system elevated coders into full-fledged members of the Times—deputized to collaborate with reporters and editors, not merely to serve their needs.

Most interesting to me is this idea that the roles of journalist and developer have merged at the Times, resulting in projects that aren’t window-dressing for articles, but offer new ways to explore and make the news more relevant to its readers.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Business, Technology, User Experience

January 16, 2009, 10:39 am

Gerrymander Away

By Henry Woodbury

Computers have arguably made the gerrymandering of U.S. Congressional Districts easier and more egregious. They should be able to make the problem go away. That is, if anyone can figure out an algorithm:

…it is surprisingly hard to define, or at least reduce to a set of rules, what a “gerrymandered district” is. Writing a formula for drawing districts requires us to define how funny-looking is too funny looking. And what is funny, anyway?

“The idea is that circles are the best shape for districts,” said George Washington University’s Daniel Ullman, talking about one school of thought. “Unfortunately, they don’t tessellate well.” This was apparently a joke, because the room burst out laughing. For the rest of the afternoon, the word tessellate never failed to produce giggles. (Tessellate means to tile together, as in an M.C. Escher drawing.)
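There are formal answers to “how funny-looking is too funny looking.” One common one is the Polsby-Popper score, 4πA/P², which is 1 for a circle and approaches 0 for sprawling shapes. A minimal sketch using toy polygons (not real district boundaries):

```python
import math

def polsby_popper(points):
    """Compactness score 4*pi*area / perimeter**2 for a simple polygon
    given as (x, y) vertices. 1.0 for a circle; lower is less compact."""
    twice_area = 0.0
    perimeter = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        twice_area += x1 * y2 - x2 * y1          # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return 4.0 * math.pi * (abs(twice_area) / 2.0) / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]   # long, thin "district"
print(round(polsby_popper(square), 3))  # 0.785
print(round(polsby_popper(sliver), 3))  # 0.031
```

A threshold on such a score is one way to flag the worst offenders automatically, though, as the conference-room giggles suggest, no single formula settles the question.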

Mathematicians and lawyers are focused on improving the reapportionment process coming up in less than two years. Another use of their analysis is simpler – to find the worst offenders and shame the politicians who put them in place. Is this too funny looking?

Illinois 4th District

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Current Events, Technology, Visual Explanation

January 12, 2009, 10:24 am

Ahead of Our Time?

By Matt DeMeis

I came across a video recently titled “Did You Know,” created by Karl Fisch, Scott McLeod and XPLANE. It reminded me of a project dD created almost eight years earlier called “Global Village.” I dug around in our archive and, after some careful cross-converting and video capturing (the first-generation ActionScript didn’t want to play nice), I was able to resurrect the presentation. Some of the sound effects were lost due to the age of the file, but it’s enough to show the similarities between the two. It’s not as fancy as the 2007 “Did You Know,” but the way the visual statistics are represented has much more of an impact. Have a look…

“Global Village” 1999-2000

“Did You Know” 2007

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Charts and Graphs, Infographics, Information Design, Technology, Visual Explanation

December 15, 2008, 1:56 pm

Manipulating the Historical Web

By Lisa Agustin

Zoetrope web crawler

You may be familiar with the Internet Archive (a.k.a. the WayBackMachine), an Internet library of 85 billion web pages that lets you search for a specific web site (including ones that are now defunct) to see how it looked on a given date in the past.  But while these historical views are interesting, their usefulness is limited since they only provide single, unconnected snapshots frozen in time.  Enter the Zoetrope web crawler, a system created by Advanced Technologies Lab at Adobe Systems.  With Zoetrope, users will be able to manipulate earlier versions of the web and generate visualizations of web data over time.  “Time lenses” can be used in different regions of a page, to see specifically how data in that section has changed over a specific period of time.  These lenses can even be combined to see the interrelation of data sets, enabling the user to explore cause-and-effect hypotheses (see the Zoetrope demo for an example of this).  Intended for the “casual researcher,” it’s easy to see how data junkies could spend hours with this application. Zoetrope’s creators expect to release the application for free next summer.
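The “time lens” idea reduces to something simple: given dated snapshots of a page, extract the same region from each and line the values up in time. A toy sketch (my own invention; Zoetrope itself tracks page regions visually, far more robustly than a regex):

```python
import re
from datetime import date

# Hypothetical archived versions of one page, keyed by capture date.
snapshots = {
    date(2008, 1, 1): "Widget Deluxe - price: $19.99",
    date(2008, 6, 1): "Widget Deluxe - price: $24.99",
    date(2008, 12, 1): "Widget Deluxe - price: $22.50",
}

def time_lens(snapshots, pattern):
    """Extract the first regex group from each snapshot, in date order."""
    series = []
    for when in sorted(snapshots):
        match = re.search(pattern, snapshots[when])
        if match:
            series.append((when, float(match.group(1))))
    return series

for when, price in time_lens(snapshots, r"\$(\d+\.\d+)"):
    print(when, price)
```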

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Current Events, Technology, Web Interface Design

November 25, 2008, 12:01 pm

MIT Media Lab Announces Center for Future Storytelling

By Lisa Agustin

The traditional approach to storytelling is at risk, thanks to an attention-deficient lifestyle, and the technologies that feed into it, like text messaging and YouTube.  Now the MIT Media Lab has teamed up with Plymouth Rock Studios, a Massachusetts-based movie studio, to create the Center for Future Storytelling as a way to keep the storytelling process alive by revolutionizing it.  According to the MIT press release:

By applying leading-edge technologies to make stories more interactive, improvisational and social, researchers will seek to transform audiences into active participants in the storytelling process, bridging the real and virtual worlds, and allowing everyone to make their own unique stories with user-generated content on the Web. Center research will also focus on ways to revolutionize imaging and display technologies, including developing next-generation cameras and programmable studios, making movie production more versatile and economic.

The Center is expected to leverage technologies pioneered at the Media Lab, like digital systems that understand people at an emotional level, or cameras capable of capturing the intent of the storyteller.  While the movie-making world is expected to benefit directly from the Center’s research, it will be interesting to see how results might innovate the business world and the approaches companies use to tell their own unique stories.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (2) | Filed under: Current Events, Technology

November 3, 2008, 3:48 pm

Microsoft Chart Advisor — Consider the Source

By Mac McBurney

The prototype Chart Advisor for Excel 2007 from Office Labs sounds like a step in the right direction:

This add-in uses an advanced rules engine to scan your data and, based on predefined rules, displays charts according to score. Top scoring charts are available for you to preview, tweak, and insert into your Excel worksheet.

An early post by Program Manager Scott Ruble describes the Excel team’s motivations, which at first glance seem admirable. On second thought, Ruble’s understated description of the group’s noble “intent” and responsiveness to strong feedback reminded me not to get my hopes up. (Emphasis and sarcastic comments added by me):

When Office 2007 was released [and not before then?], one of the strong pieces of feedback was Excel needs to do a better job guiding users in the proper selection of charts to effectively communicate their data. Though it wasn’t our intent [I feel so much better now], some of the new [and the old] formatting options [and defaults] such as glow and legacy 3D charts can [only] be used inappropriately, which obscure[sic] the meaning of a chart. Some people [silly, silly people] felt that these features contributed to creating more “chart junk.” In an effort to improve this situation, we have created a prototype called the Chart Advisor.

Mr. Ruble is being too modest. The new features and default settings — like the old features and default settings — guarantee more chart junk. This team wasn’t born on the day Office 2007 was released — quite the opposite. Saying that inappropriate use and obscuring the meaning of a chart was not the team’s intent seems, frankly, laughable.

I expect an upgrade from Microsoft to include new features — new things that users could do. Giving good advice about what a user should do is more difficult and risky, and it would ultimately be much more valuable. This is ambitious, and let’s hope it signals a greater focus on improving the real-world capabilities of Excel users, not just increasing the capabilities of the Excel software.

So far, Chart Advisor is in no danger of becoming an artificial Edward Tufte inside Excel. The add-in still serves a side order of chartjunk with your data.

Tim Mays reported that Chart Advisor ignored a whole column of source data and then (not surprisingly) recommended the wrong chart type. At first, Excel guru Jon Peltier didn’t even get that far.

Hey, that “advanced rules engine” is just a prototype. (More on the rules engine). If the wizards at Microsoft succeed in upgrading Excel’s brain, here’s hoping they have the courage to give it a heart and good taste as well.

Chart Advisor intro video

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (2) | Filed under: Charts and Graphs, Information Design, Technology

October 29, 2008, 11:58 am

Improving Mobile Search

By Lisa Agustin

First, a confession:  I love my iPhone.  But using the touchscreen keyboard leaves me (and others as well) feeling annoyed and just a bit uncoordinated.  And when it comes to searching?  Not fun.  Why is mobile searching so hard?  The problem, in part, is a misconception that PDAs and phones are just small laptops.  Luckily, this mindset is changing.  Mobile technology companies are increasingly aware that technology by itself won’t fix the problem; the key will be using these solutions (voice-recognition,  leveraging of built-in cameras and, eventually, the semantic web)  to create intuitive user experiences.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Technology, User Experience

October 23, 2008, 10:01 am

Don’t Eat the iPod Shuffle—Seven Years of iPod Design

By Kirsten Robinson

Wired has published a look back at iPod design, starting with this paper and foam core prototype from 2001:

one of the original iPod concepts

Check out the article to find out how the scroll wheel evolved over time, when color was first introduced (on the body and the screen), and where the title of this post came from.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Branding, Business, Color, Design, Prototyping, Technology, Usability

July 17, 2008, 10:31 am

The End of the Scientific Method

By Lisa Agustin

According to Chris Anderson at Wired, the scientific method is no longer relevant, thanks to the enormous amounts of data now at our disposal. The traditional (and sometimes imperfect) approach of testing hypotheses via modeling made more sense when scientists were trying to understand the underlying mechanisms that connect a handful of results. This is no longer the case:

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

Anderson gives a couple of examples to prove his point, including how new species of bacteria were “discovered” using high-speed sequencers and supercomputers (“a statistical blip”). The idea that data is the starting point, and that relationships and rationale can be established later, is not new to data viz practitioners. But thinking about this approach in the context of dismissing other methodologies? I’m not so sure.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (2) | Filed under: Technology

June 20, 2008, 10:32 am

A Radiated Library and the Televised Book

By Henry Woodbury

Still from The Man Who Wanted to Classify the World

In the canonical history of the origins of the Internet, Belgian Paul Otlet does not make an appearance. He was, perhaps, too early and too utopian, setting forth a plan for “a global network of computers” in 1934. Otlet’s vision derived from his life’s work creating the Mundaneum, a “universal bibliography” cataloged on index cards. For a fee, anyone in the world could mail or telegraph a request that Otlet’s small staff of professional librarians would investigate.

As the Mundaneum accumulated millions of entries, Otlet realized that his index-card-based system was becoming too cumbersome to manage. At that point he began working on ideas for electronic data storage and a totally paperless system — in his words, “a radiated library and the televised book.”

Eventually lack of funding and the onset of World War II doomed the project. Only recently have Otlet’s writings and the remnants of the Mundaneum archive begun to receive attention.

In this vein, The New York Times article linked above provides a history and appreciation of Otlet’s work, a visual explanation of the Mundaneum card cataloging system, and a clip from the documentary film The Man Who Wanted to Classify The World.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Technology, Visual Explanation

April 29, 2008, 12:06 pm

Of Wii and Waterproof Mattresses: Hotels Prototype New Ideas with “Test Rooms”

By Kirsten Robinson

The New York Times reported today that hotels are using “test rooms” to try out new designs and technology before implementing them throughout the hotel, saving vast sums by discarding or improving upon ideas that don’t work. New technologies being tested include waterproof mattresses, digital door panels, customized Wii consoles, and even wireless electricity. But sometimes the greatest need is to make sure the existing features are usable. One guest who tried out a test room commented that he could not figure out the alarm clock or how to turn on the television. “All I wanted to do was watch CNN,” he said.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (1) | Filed under: Business, Prototyping, Technology, Usability, User Experience

April 1, 2008, 3:23 pm

Standards vs. Compatibility

By Henry Woodbury

Joel Spolsky offers a look ahead at Microsoft Internet Explorer 8. What he foresees is a web developer flamewar.

Headed by developer Dean Hachamovitch, the MSIE 8 team has decided to move its default mode away from MSIE 7 compatibility and closer to web standards. Spolsky offers a long quote from Hachamovitch’s announcement of this decision, but it boils down to this:

We’ve decided that IE8 will, by default, interpret web content in the most standards compliant way it can.

This means that some HTML pages coded to take advantage of some of MSIE 7’s quirks will break in MSIE 8.

This is a problem? It shouldn’t be.

Barring the introduction of any new quirks (say a new way to misinterpret the box model), there’s no reason any Web site HTML and CSS should break in MSIE 8. If a web site has been tested against MSIE 6, MSIE 7, Firefox, and Safari (as are all of our public-facing projects), and if its developers have used a robust HTML structure and the subset of mutually-supported CSS styles (rather than browser-sniffing to write specialty CSS), then the odds of that site rendering incorrectly in MSIE 8 should be very small.

JavaScript-driven functionality, however, is harder to predict. Here, I rely on the folks behind Prototype and jQuery to handle MSIE 8 so I won’t have to. We’ll see how that goes.
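For sites that do break, the announcement pairs the new default with a per-page escape hatch: an X-UA-Compatible HTTP header or meta tag that asks MSIE 8 to render as MSIE 7 did. As I understand the proposal, the opt-out looks something like this:

```html
<!-- Opt this page out of IE8 standards mode, rendering as IE7 did.
     The same value can also be sent as an HTTP response header. -->
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
```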

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Implementation, Technology, Web Interface Design

March 10, 2008, 11:40 am

Let the Penguin Explain

By Henry Woodbury

In a few weeks an AOL penguin will begin educating users about advertising cookies. Here’s a sample storyboard from the ad campaign:

Frame 4 of 7: An ad company sends a cookie to Mr. Penguin's computer, recording his visit.

A penguin?

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Illustration, Technology, Visual Explanation

March 10, 2008, 9:00 am

What Does “Capable” Mean in Redmond?

By Henry Woodbury

Today’s number one most emailed article from The New York Times home page is about operating systems, of all things. Specifically, it is about users who upgraded to Windows Vista and “got burned.” Users like Mike Nash, a Microsoft Vice President, and Jon Shirley, a Microsoft board member.

These stories come from Microsoft internal emails, acquired in a class action law suit. At the heart of the dispute is disagreement about the meaning of the word “capable.”

Originally Microsoft planned to label Windows XP PCs with sufficient hardware and graphics power to eventually run Vista as “Vista Ready.” To avoid hurting sales of lower-end computers, Microsoft created a new classification, “Vista Capable.” This supposedly “signal[ed] that no promises are made about which version of Vista will actually work.”

An internal Dell report exposes the folly of this idea: “Customers did not understand what ‘Capable’ meant….”

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Business, Current Events, Technology

February 8, 2008, 6:08 pm

Logo Evolution

By Henry Woodbury

From the Neatorama blog comes an interesting exhibit of tech company logos and their changes over time. The companies range from Apple to IBM to Nokia to Palm, offering an engaging contrast between start-ups professionalizing their brand and manufacturing firms reinventing their business. IBM, for example:

IBM Logos

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (0) | Filed under: Branding, Business, Technology

November 27, 2007, 12:05 pm

The Slow Death of the Technical Specification

By Lisa Agustin

The days of the web developer’s technical spec are long gone, writes columnist Richard Banfield: “In a world of intensely visual design, we have to ask why we still need to write massive documents to describe web products that real people will use.” According to Banfield, there was a time when it made sense to document everything before starting any software development, and that this way of doing things was largely a result of limited technology and lower design costs. These days, developing a web site or application demands a more agile approach–one in which visual tools play a key role:

“Once the priority of a project is established, the team should immediately move toward visualizing that idea. This can take many forms, but we have found that whiteboards and large pieces of paper work wonders to get everyone on the same page. Nothing slows down the creative process like a 60-page document, complete with spreadsheets and appendices.”

This has been our experience as well. While some engagements do require some type of written narrative — especially in cases where there needs to be a more detailed explanation of the application for a broader group outside of the development team — we’ve seen immense value in translating requirements into a visual form during all phases of a project. I would take Banfield’s comments a step further by suggesting that visuals are not just helpful tools, but can often replace specification documents as deliverables. Diagrams (for expressing high-level user experience), process flows (for explaining complex transactions), and heavily annotated wireframes (for describing functionality at the page-level) are “closer to reality” than a Word document that describes them. This makes the idea behind an application easier to understand and discuss, leading a group to consensus about direction much more quickly.

  • Facebook
  • Twitter
  • del.icio.us
  • StumbleUpon

Comments (2) | Filed under: Implementation, Technology

October 4, 2007, 1:28 pm

Visualizing Digg

By Lisa Agustin

Digg Arc

Making sense of the activity on Digg is the mission behind Digg Labs. The Labs offer four different views of Digg data: Arc (shown at left), BigSpy, Stack, and Swarm. Like the Digg site itself, each visualization tracks similar information, including the newest stories that users “digg,” story popularity (number and frequency of “diggs”), and the names of “diggers” themselves. Best of all, the visualizations are in real-time, making the energy and behavior of the Digg community palpable. But while the tools give a new perspective on Digg activity, they fall short of helping users see any obvious patterns or draw specific conclusions. Some critics even consider them confusing. Despite the criticism, these data visualizations have provided direction on how to improve the Digg user experience, according to Digg creative director Daniel Burka:

“After seeing users congregate around stories and examining their relationships, we’ve tweaked our algorithms to take [content] diversity into account when determining how popular a story really is,” Burka says. This allows a wider range of subjects to show up on the home page, for example. “Many of the lessons we’ve learned in the Labs are also influencing future feature development and the general direction of the site.”

An article in Technology Review offers further details on Digg Labs: http://www.technologyreview.com/Infotech/19079/?a=f

Comments (0) | Filed under: Information Design, Technology, Visual Explanation

August 23, 2007, 11:03 am

How Google Works

By Henry Woodbury

Condé Nast Portfolio offers this “infographic” on How Google Works. (The text version is here.)

Interesting stuff, and nicely visualized — especially step 3 on “The Cluster”.

Comments (0) | Filed under: Technology, Visual Explanation

July 27, 2007, 10:32 am

Map Markup

By Henry Woodbury

The New York Times takes note of internet mapping tools, highlighting the non-expert angle:

“It is a revolution,” said Matthew H. Edney, director of the History of Cartography Project at the University of Wisconsin in Madison. “Now with all sorts of really very accessible, very straightforward tools, anybody can make maps. They can select data, they can add data, they can communicate it with others. It truly has moved the power of map production into a completely new arena.”

Most of the sample maps linked by the article are better described than seen, for the actual visual product is a cookie-cutter hodge-podge — often just a Google or Microsoft map overlaid with clunky icons. While this is a new way to serve up data, it is not really a new approach to mapmaking. Many local, printed trail guides, for example, benefit from the contributions of non-experts, hikers who annotate U.S. Geological Survey maps with descriptions of trail markers and landmarks. I remember my dad planning cross-country vacations with end-to-end road maps and highlighters. Anyone always could — and did — make maps; they just couldn’t share them as easily.

Comments (0) | Filed under: Maps, Technology, Visual Explanation

July 20, 2007, 9:55 am

Is Marketing the New Finance?

By Henry Woodbury

Here is economist Hal Varian, interviewed by the Wall Street Journal:

WSJ: In the past, promising new economics PhDs who didn’t want to work in government or academia probably aspired to work on Wall Street. In the future, will they aspire to work at companies like Google?

Varian: I think marketing is the new finance. In the 1960s and 1970s [we] got interesting data, and a lot of analytic fire power focused on that data; Bob Merton and Fischer Black, the whole team of people that developed modern finance. So we saw huge gains in understanding performance in the finance industry. I think marketing is in the same place: now we’re getting a lot of really good data, we have tools, we have methods, we have smart people working on it. So my view is the quants are going to move from Wall Street to Madison Avenue.

And it’s all thanks to Google. According to Varian, the business model for search is not much different than the business model for publishing. The difference comes from Google’s ability to manage pricing on a real-time basis and thus transact an enormous amount of data. Here’s Varian again:

Adaptive forecasting, how I revise my forecast to take account of updated information, you use that a lot on Wall Street, where you have time series of stock prices. And some of those things carry over into things that Google is doing, that have this real-time flow of data. How do I detect unusual events, and react to them?
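The adaptive forecasting Varian describes can be as simple as exponential smoothing. The sketch below is a generic textbook version, not anything specific to Google’s systems: each new observation nudges the running forecast, and a large miss flags an unusual event.

```python
def update_forecast(forecast, observation, alpha=0.3):
    """One smoothing step: move the forecast a fraction
    alpha of the way toward the newest observation."""
    return forecast + alpha * (observation - forecast)

def is_unusual(forecast, observation, threshold):
    """Flag an 'unusual event' as a forecast miss larger
    than the given threshold."""
    return abs(observation - forecast) > threshold

# Revising a forecast over a real-time stream of values:
forecast = 100.0
for value in [101.0, 99.0, 160.0]:
    if is_unusual(forecast, value, threshold=25.0):
        print("unusual event:", value)
    forecast = update_forecast(forecast, value)
```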

The internet gives us an engine. Now we start figuring out what fuels it.

Comments (0) | Filed under: Business, Marketing, Technology

June 1, 2007, 2:18 pm

In Which No One Knows What They Want

By Henry Woodbury

James Surowiecki writes about feature creep in a recent New Yorker column. He starts by naming the usual suspects: engineers devoted to custom tweaks, marketers enticed by more selling points.

But feature creep goes beyond the failure of the internal audience:

You might think, then, that companies could avoid feature creep by just paying attention to what customers really want. But that’s where the trouble begins, because although consumers find overloaded gadgets unmanageable, they also find them attractive. It turns out that when we look at a new product in a store we tend to think that the more features there are, the better. It’s only once we get the product home and try to use it that we realize the virtues of simplicity.

Thus it falls to designers to aggressively promote simplicity (over everyone’s objections).

Comments (1) | Filed under: Business, Technology, Usability

May 10, 2007, 9:09 pm

Hyperbolic Views: Mapping the Blogosphere

By Mac McBurney

Discover Magazine discusses a series of maps of the blogosphere created by Matthew Hurst.

Discussion of the pros and cons will have to wait for another day. Until then, here are two more hyperbolic tree visualization examples:

Interactive Tree View of the LexisNexis Directory of Online Sources
National Science Digital Library

Tell us what you think.

Comments (0) | Filed under: Books and Articles, Information Design, Technology, Visual Explanation, Web Interface Design

April 25, 2007, 9:31 am

The Entrepreneurs of Simplicity

By Henry Woodbury

My father recently recommended to me a book on technology entrepreneurs: Founders at Work: Stories of Startups’ Early Days by Jessica Livingston. Founders at Work is a collection of interviews with technologists from Steve Wozniak (Apple) to Caterina Fake (Flickr).

Here are some quotes Dad sent my way (all of a theme):

I think I was also surprised by the success of something so simple.  [W]hat we built wasn’t that amazing.  It was the idea of putting a couple of things together and being able to establish a lead by doing something really, really simple. How far you can get on a simple idea is amazing…. ~Evan Williams, Co-founder, Pyra Labs (Blogger.com)

Do as little as possible to get what you have to get done…. Doing less is so important. People often wind up adding features, adding stuff. Making it bigger is the typical way you engineer out of a problem, right? It’s the traditional, ‘I apologize for the long letter. I didn’t have time to make it shorter.’ ~Joshua Schachter, Founder, del.icio.us

We’re making a product for mom and dad. Some of the features that you think we should add may not be the ones that they want to use…. It’s hard to convince 500 flesh-and-blood developers that their pet feature may not be desirable to 500 million imaginary users. ~Blake Ross, Creator, Firefox

Comments (0) | Filed under: Books and Articles, Business, Technology

April 5, 2007, 3:56 pm

The Neurological Case for Diagrams

By Henry Woodbury

Researchers at the University of New South Wales say the brain is not equipped to read and listen at the same time:

The findings show there are limits on the brain’s capacity to process and retain information in short-term memory.

John Sweller, from the university’s faculty of education, developed the “cognitive load theory”.

“The use of the PowerPoint presentation has been a disaster,” Professor Sweller said. “It should be ditched.”

“It is effective to speak to a diagram, because it presents information in a different form. But it is not effective to speak the same words that are written, because it is putting too much load on the mind and decreases your ability to understand what is being presented.” (my emphasis)

PowerPoint is everyone’s favorite target these days, but of course, it’s how people use PowerPoint that is the problem.

Also interesting: People learn by studying already solved problems. Learn a solution and you have a better chance of applying it the next time you run into a problem.

Comments (0) | Filed under: Technology, Visual Explanation

April 4, 2007, 12:03 pm

Netscape is Number 1…

By Henry Woodbury

…on PC World’s list of the 50 best tech products of all time:

Netscape was the reason people started spending hours a day on the Internet, leading to the boom (and bust) of many a Web site. The advent of the browser also led to the U.S. Department of Justice’s antitrust suit against Microsoft, after the company embedded Internet Explorer into Windows. And Netscape’s August 9, 1995, IPO is universally considered to be the official start of the dot-com era.

It’s all there: popularity, impact, influence.

Are there any “aha” moments in the list? Instead of one-and-done devices like the Zip drive (#23), how about TurboTax (#38)? Now that’s a piece of software with ongoing impact. Once software handles the tax code (on the front-end and the back) it changes how the tax code can be permitted to change.

Comments (0) | Filed under: Technology

March 28, 2007, 1:27 pm

Microsoft on Channel 9

By Henry Woodbury

We’ve linked to Channel 9 before (see here). With its blog format and aggressive comments section, you might not guess it was sanctioned by Microsoft — until you notice the “Microsoft Communities” bar at the top and the little “msdn” in the URL.

Now Wired describes how Channel 9 was born and how it has generated great PR for the company normally viewed as centralized, bureaucratic, and secretive:

…marketers say [Microsoft] has become the model for how corporations can use the Internet to manage their image. “The messages coming out of Microsoft used to be so one-dimensional and managed,” says John McKinley, who until the end of 2006 was CTO and head of digital services for AOL. “Now you can get four clicks into the organization and see engineers talking about products. It gives Microsoft a human face.”

Comments (0) | Filed under: Business, Technology

March 7, 2007, 8:49 pm

“PowerPoint gives the game away”

By Henry Woodbury

PowerPoint despair makes it to the Guardian Unlimited, in this essay by Jonathan Wolff:

What is it about PowerPoint? Perhaps it is the only thrill left to the jaded academic: not knowing whether the technology you are using will actually allow you to give your talk.

While Wolff mocks the dog-and-pony-show marketing of PowerPoint, he focuses on a larger point:

For those who prefer to project the idea that a talk is a unique event, a voyage of discovery that could go in any one of a number of directions, and may well go in all of them, PowerPoint gives the game away. As someone once said: “The art is hiding the art.” With PowerPoint, everything is on display. Elegantly effortless performance is hard enough as it is. PowerPoint makes it impossible.

As another well-known detractor points out, PowerPoint is relentlessly sequential, undermines a presenter’s ability to present rich data in context, and sets up “a speaker’s dominance over the audience.”

I doubt Edward Tufte is going to change his mind, but if Wolff ever watches Steve Jobs at work he might acknowledge that elegantly effortless performance with presentation software is possible.

Okay, so Jobs uses Keynote. But it’s not the software that makes the difference. It’s the approach.

We do a lot of work in PowerPoint. We have two fundamental strategies for creating elegant presentations. First, we approach the entire presentation as a single narrative or composition. Each slide is a storyboard that advances the theme. This lets us leverage PowerPoint’s sequential format to our advantage. We can set up suspense in one slide and resolve it in another. We can establish a motif, then evoke it again and again. We can use pattern and variation.

Second, we treat every slide as a potential visual explanation. Sometimes all you need is text, but with images you can represent concepts, show connections, and evoke emotion. Images also make presenters inherently more interesting. Instead of repeating bullet points on a screen (which people can read for themselves), the presenter speaks to that which the audience sees.

But Tufte and Wolff cannot be ignored. Sometimes the multimedia presentation is simply a bad choice of format. Let us give Wolff the last word. Referring to the power of the image (say, the portrait of a famous philosopher) he writes:

These days, of course, digital pictures of Descartes are cheaper than ten-a-penny, but I’m still unsure of the benefits of showing his bony face to the audience. They have already got me to look at. And if they are looking at me, rather than a screen, I can look back at them. And I can judge whether they have understood what I have just said, and, if not, have another go at making the point.

Comments (3) | Filed under: Marketing, PowerPoint, Technology, Visual Explanation

March 1, 2007, 9:33 pm

How Digg Works. Or Not.

By Henry Woodbury

What is Digg?

Digg is all about user powered content. Everything is submitted and voted on by the digg community. Share, discover, bookmark, and promote stuff that’s important to you!

Like a search engine, the Digg engine — trading in its own version of hits — invites optimizers. In Wired News, Annalee Newitz writes how she created an intentionally pointless blog, then promoted it on Digg using a paid service:

If the corporate brass at Digg were right, this would be a complete waste of my money. CEO Jay Adelson told me before I conducted this experiment that all the groups trying to manipulate Digg “have failed,” and that Digg “can tell when there are paid users.” Adelson added, “When we identify a (Digg user) who is part of a scam, we don’t remove their account so they don’t realize they’ve been identified. Then we let them continue voting, but their votes may count a lot less. Then the scam doesn’t work.”

What’s most interesting about Newitz’s story isn’t that Digg can be gamed. It’s that her pointless blog made the popular list because authentic Digg users added their honest votes to her paid ones:

Despite their doubts, Diggers kept digging my blog. There’s a perverse incentive here: Diggers who vote early on stories that become wildly popular become more “reputable” in the Digg system. If you’re trying to move up the Digg ranks, it’s in your best interest to vote on anything that looks like it’s gaining popularity. And my blog, with its flurry of paid votes, fit the pattern.

Comments (0) | Filed under: Technology

February 26, 2007, 9:35 am

Visual Identity: Identicon

By Lisa Agustin

Reading a string of comments on a blog is not the most stimulating user experience. Moreover, if a blog post is riveting enough to start an online conversation via comments, following the exchanges between participants may require closer reading to see who said what. Enter the Identicon. Programmer Don Park developed the Identicon as a way of reinforcing the commenter’s identity, using a privacy-protecting derivative of each commenter’s IP address to build a 9-block image that identifies the writer. Referred to in its debut as “IP-ID,” the Identicon is written in Java and based on the first four bytes of a SHA-1 (Secure Hash Algorithm) hash. The visualization is a small quilt of 9 blocks that uses 3 types of patches, out of 16 available, in 9 positions. To try this yourself, visit Park’s blog and scroll down to the comment form, which will display your current Identicon. Mine at the time of this writing:

lisa identicon

How it works: the Identicon code selects 3 patches: one for center position, one for 4 sides, and one for 4 corners. There are additional details in the code for determining positioning, rotation, color, and inversion of the blocks.
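In code, that derivation might look like the sketch below. Park’s implementation is in Java, and the exact field widths and ordering here are invented for illustration; the real code’s bit layout differs. The point is simply that 32 hash bits are enough to drive patch choice, rotation, and color:

```python
import hashlib

def identicon_params(ip: str) -> dict:
    """Slice the first four bytes of a SHA-1 hash into small
    fields that pick patches, rotations, and a color.
    (Hypothetical layout, for illustration only.)"""
    digest = hashlib.sha1(ip.encode("utf-8")).digest()
    code = int.from_bytes(digest[:4], "big")
    return {
        "center_patch": code & 0x3,             # 1 of 4 center patches
        "side_patch": (code >> 2) & 0xF,        # 1 of 16 patch shapes
        "side_rotation": (code >> 6) & 0x3,
        "corner_patch": (code >> 8) & 0xF,
        "corner_rotation": (code >> 12) & 0x3,
        "red": (code >> 16) & 0x1F,             # 5 bits per channel
        "green": (code >> 21) & 0x1F,
        "blue": (code >> 26) & 0x1F,
    }
```

Because the derivation is deterministic, the same address always yields the same quilt, which is what makes the image usable as an identity mark.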

For users with dynamic IP addresses, the Identicon will change over time. However, according to Park, it doesn’t appear to change often enough to affect identification beyond a “typical comment activity cluster” (presumably a single session during which a comment might be posted). Park adds:

I originally came up with this idea to be used as an easy means of visually distinguishing multiple units of information, anything that can be reduced to bits. It’s not just IPs but also people, places, and things. IMHO, too much of the web what we read are textual or numeric information which are not easy to distinguish at a glance when they are jumbled up together.

Besides the intended purpose of identifying individual users among a sea of many (e.g., wiki authors, customer tracking in CRM tools, etc.), there may be other uses as well, such as identification of individual computers within a large network.  Plus the Identicon seems to be gaining in popularity: a PHP version is now available, as well as one that works for WordPress.

Comments (1) | Filed under: Implementation, Technology, Visual Explanation

February 14, 2007, 1:36 pm

Learning Curve

By Henry Woodbury

“If it had been that straightforward I wouldn’t have called helpdesk”

http://www.youtube.com/watch?v=4pyjRj3UMRM

Comments (2) | Filed under: Technology, Usability

February 1, 2007, 10:16 am

Your. User. Is. Not. You.

By Henry Woodbury

That advice from computer science instructor David Platt could be carved in stone. It pretty much applies to everyone that makes anything for other people, but Platt has a particular target in mind. Programmers, he asserts, don’t think like users:

People who write software programs value control. The user, on the other hand, just wants something that’s easy to operate.

To illustrate his point, he notes that computer programmers tend to prefer manual transmissions. But not even 15 percent of the cars sold in the United States last year had that feature.

Business executives don’t think like users either. Frankly, users don’t think like users. Here’s David Thomas, executive director of the Software & Information Industry Association’s software division:

You don’t want your customers to design your product. They’re really bad at it.

What you want to do is ask people what they want, then compare it to what they actually do.

Platt’s Suckbusters web site is here. A typically entertaining lede:

The common technique of confirmation, popping a dialog box into the user’s face and asking, “Are you really Really REALLY sure you want to do that?” is evil. It’s unfriendly, it’s distracting, and it’s completely ineffective. Have you ever, even once, said, “Whoa! I didn’t want to do that. Thanks,” and clicked No? Have you seen anyone do that? Have you even heard of anyone doing it? I haven’t. It shouldn’t exist. Anywhere. Ever.

Comments (3) | Filed under: Business, Technology, Usability

January 23, 2007, 9:04 pm

The Future of Gesture UIs

By Lisa Agustin

Without fail, the start of the new year gets people thinking about What Will Be Big This Year.  The latest issue of Digital Web Magazine features an interview with Doug Bowman, a Visual Design Lead with Google, in which DWM asked which apps from 2006 are most significant and what that means for 2007.  Aside from the expected endorsements of Google’s Calendar and Spreadsheets, Bowman had some interesting comments touching upon the themes of selective content sharing (e.g., Six Apart’s Vox) and more consolidation (e.g., Yahoo! Mail).

But what piqued my interest most were Bowman’s comments regarding “gesture user interfaces,” or UIs that are driven by physical movements of the user. This is not a new thing, of course — dragging and dropping is something that most users accept (maybe even expect) in the latest applications. But recent offerings like the Nintendo Wii and the Reactrix interactive advertising display are giving us glimpses of what user experience may hold for the future. (Okay, so maybe the holographic screen in that Tom Cruise movie wasn’t completely off the mark?) What I find most interesting about gesture UIs is not so much what the final user experience for gesture-driven apps will be, but how you would architect and then document the desired experience. What kinds of description languages will need to be developed to describe the experience programmatically? What kinds of new user input paradigms will emerge? Stay tuned.

Comments (1) | Filed under: Design, Implementation, Technology, User Experience

January 19, 2007, 2:18 pm

The Swiss Army PC Toolbox

By Henry Woodbury

If you’re not sure when or where you might need to service a computer, this may be the tool you need. It features “bit wrenches, hex sockets, torx, hex, and pozidrive bits, screwdrivers, pen, pliers, wire tools, and more”. The corkscrew is for hard drive failure.

In a related vein, Victorinox also sells a knife with a fold-out 1GB memory stick.

Comments (0) | Filed under: Technology

January 8, 2007, 11:07 am

The Secret Weapon of Product Designers

By Lisa Agustin

This month’s issue of Fast Company offers a peek into the DesignAid kit, a collection of twenty inventions with “unexpected properties,” such as impact-absorbing silicone (useful for building a sturdier car bumper) or sound-recording paper (consider a talking postcard). Created by Inventables, the kit changes quarterly and gives product designers a look at some unusual technologies along with suggestions for various applications. Kit recipients can decide whether any of the offerings might be integrated into their own products, or simply use the kit as a source of inspiration for innovative thinking.

Comments (0) | Filed under: Business, Design, Technology

December 6, 2006, 10:57 am

Spamalot

By Henry Woodbury

Spam is back. According to this New York Times story (free registration required), existing filters are being fooled by “image spam,” in which telltale advertising phrases are presented in bitmaps instead of text. As antispam companies have added optical character recognition to their solutions, spammers have added speckles and dots to fool the scans.

To fool other spamblocking techniques, spammers have vastly expanded the practice of using viral “spambots” to send spam from the computers of unsuspecting users and have developed ways to give each copy of a spam message a unique digital “fingerprint.”

As for the stock tips you’ve been getting, here’s the scam:

Spammers buy the inexpensive stock of an obscure company and send out messages hyping it. They sell their shares when the gullible masses respond and snap up the stock. No links to Web sites are needed in the messages.

Though the scam sounds obvious, a joint study by researchers at Purdue University and Oxford University this summer found that spam stock cons work. Enough recipients buy the stock that spammers can make a 5 percent to 6 percent return in two days, the study concluded.

Comments (0) | Filed under: Technology

November 27, 2006, 12:12 pm

Paper as an Electronic Storage Media

By Henry Woodbury

Sainul Abideen, an Indian engineering student, has created a new data-storing technology in which electronic files are converted to geometric shapes and printed in dense patterns on ordinary paper. These “Rainbow Technology” sheets can then be read via a customized scanner and decoded into the original files. An A4 sheet (8.27 x 11.69″) can reportedly store up to 256GB, making this an extremely affordable storage technology — Abideen’s “Rainbow Versatile Disk” has a storage density greater than high-end DVDs, uses less raw material to manufacture, and is biodegradable.

The idea of paper-based storage creates intriguing possibilities for data distribution. Imagine this: You buy a newspaper, tear out an RVD swatch, insert it into a RVD-capable device (an MP3-player, a cell phone, a PDA), and listen to the audio version of the printed text. Or listen to some new music. Or watch movie trailers.

Update: Commenter DD points to a Wikipedia article that casts doubt on the accuracy of the news report summarized above.

As referenced by Wikipedia, here’s Jeremy Reimer’s debunking. He points out that print resolution and scanner technology likely limits the storage capacity of a single A4 sheet to 100MB after error correction. Reimer also illuminates the importance (and limitations) of Abideen’s use of geometric shapes to render data:

The claim that “circles, triangles, and squares” can achieve … extra orders of magnitude can be easily challenged. There is a word for using mathematical algorithms to increase the storage space of digital information: it’s called compression. No amount of circles and triangles could be better than existing compression algorithms: if it was, those formulas would already be in use!
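Reimer’s resolution argument is easy to check with back-of-envelope arithmetic. Every number below is an assumption (a 600-dpi print-and-scan pipeline, one reliably recoverable bit per dot, 50% error-correction overhead), but even generous revisions leave the result many orders of magnitude short of 256GB:

```python
# Rough capacity bound for one printed A4 sheet (assumed numbers).
DPI = 600                            # print/scan resolution
WIDTH_IN, HEIGHT_IN = 8.27, 11.69    # A4 sheet, in inches
dots = (WIDTH_IN * DPI) * (HEIGHT_IN * DPI)
raw_bytes = dots / 8                 # one recoverable bit per dot
usable_bytes = raw_bytes * 0.5       # after error-correction overhead
print(round(usable_bytes / 1e6, 1), "MB")  # prints: 2.2 MB
```

Even allowing several bits per dot through color and shape encoding only multiplies this into the tens of megabytes, consistent with Reimer’s estimate of roughly 100MB per sheet.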

Comments (2) | Filed under: Technology

November 16, 2006, 11:21 am

AOL Goes Web 2.0

By Henry Woodbury

David Pogue at the New York Times reports on AOL’s embrace of the Web 2.0 bubble and “the business plan known as free”:

AOL had been losing members at a staggering rate, with 300,000 people a month canceling their AOL accounts as they switched to high-speed Internet from their cable and phone companies. AOL now has fewer than 18 million members, down from 35 million in 2002.

So AOL decided to get out of the Internet service-provider game, a dead-end business for a company that doesn’t actually own the wires running to your home.

AOL’s plan is to grow like Google:

Since it went free, AOL has lost 2.5 million paying subscribers — but gained 3 million free members. That’s more people looking at the ads, which AOL figures will attract even more advertisers.

Like Google, AOL is rolling out free goodies, of which Pogue has a nice list.

Comments (0) | Filed under: Business, Technology

October 31, 2006, 2:07 pm

Computer Culture

By Henry Woodbury

Computers may still be binary calculating machines, but their social impact is profound. According to a New York Times report on the Computer Science and Telecommunications Board “2016” symposium, computers have become so integrated into scientific and popular culture as to drive qualitative changes in how people interact — and in how social scientists can study them:

The new social-and-technology networks that can be studied include e-mail patterns, buying recommendations on commercial Web sites like Amazon, messages and postings on community sites like MySpace and Facebook, and the diffusion of news, opinions, fads, urban myths, products and services over the Internet. Why do some online communities thrive, while others decline and perish? What forces or characteristics determine success? Can they be captured in a computing algorithm?

Don’t miss the “a Web Site as a Living Organism” diagram linked to the article. The format is a fairly typical node map, but adroit display of multiple properties of each node makes for an engaging graphic.

http://www.nytimes.com/2006/10/31/science/31essa.html (free registration required)

Comments (1) | Filed under: Current Events, Technology, Visual Explanation

September 18, 2006, 9:10 am

Video Mash-Up Approval

By Henry Woodbury

YouTube, not yet profitable and a potential target for copyright violation law suits, has signed a big partner:

Under a revenue-sharing deal announced Monday, New York-based Warner Music has agreed to transfer thousands of its music videos and interviews to YouTube, a San Mateo, Calif.-based startup that has become a cultural touchstone since two 20-something friends launched the company in a Silicon Valley garage 19 months ago.

Unlike the notorious music sharing programs, YouTube offers value that entertainment companies want to leverage, not squelch:

Perhaps even more important for YouTube is that Warner Music has agreed to license its songs to the millions of ordinary people who upload their homemade videos to the Web site….

To make the deal happen, YouTube developed a royalty-tracking system that will detect when homemade videos are using copyrighted material. YouTube says the technology will enable Warner Music to review the video and decide whether it wants to approve or reject it.

Comments (0) | Filed under: Business, Technology

September 11, 2006, 3:48 pm

Wikipedia Will Not Restrict Content

By Henry Woodbury

The Chinese government has blocked access to Wikipedia since last October. Founder Jimmy Wales has declared that Wikipedia will not compromise its standards and called for other Internet companies to follow suit:

Wales said censorship was ‘antithetical to the philosophy of Wikipedia. We occupy a position in the culture that I wish Google would take up, which is that we stand for the freedom for information, and for us to compromise I think would send very much the wrong signal: that there’s no one left on the planet who’s willing to say “You know what? We’re not going to give up.”’

Good for Wales.

Comments (0) | Filed under: Current Events, Technology

September 6, 2006, 1:53 pm

Media Entrepreneurs in Training

By Henry Woodbury

The film school at Arizona State University is just over a year old. Instead of competing with established programs that train students in editing, cinematography, writing, and directing, the ASU film and media studies program focuses on the intersection of entertainment with new technologies.

Dr. Peter Lehman explains:

“The digital age, and that 800-pound distribution gorilla, the Internet, is changing everything,” he said. “The technical people and the creative people need to be able to work together, and there is no forum for that now.”

While “entertainment technology” sounds like a reference to digital content creation — that world of three-dimensional animation, game design, and digitized special effects — the ASU program takes a much more conceptual approach. Student Alex Baer, a 35-year-old software executive, explains that he took the program’s introductory course to “find out what we need to know about the narrative form when presenting it on different screens.”

In a world where Over The Hedge competes with YouTube, that would be a valuable thing to know.

Comments (0) | Filed under: Technology

August 31, 2006, 2:13 pm

Your Blog, Coffee-Table Version

By Henry Woodbury

Blurb.com is testing a service that “slurps your blog right into a slick coffee-table book, professionally designed and bound to attract attention.” The goal, says Blurb CEO Eileen Gittins, is “to position Blurb authors at the forefront of an increasingly digital publishing landscape.”

Economics professor Tyler Cowen is unimpressed:

Translating good blog ideas into book format is best done by people who…have experience writing books, or who have journalistic experience, not by people who have large staplers.

It’s hard to take the Blurb “position” seriously enough even to knock it. Sounds like Blurb has some cool publishing technology, but when your output is business briefs, baby books, and pet portfolios you’re not exactly competing with HarperCollins.

Comments (0) | Filed under: Business, Technology

August 21, 2006, 9:59 am

Politics Plays on YouTube

By Henry Woodbury

Digital video and a place to publish it means political gaffes don’t fade away. Instead, they show up on YouTube for endless replay. In addition to capturing the unscripted errors of politicians, partisans can piece together candidate quips with other images and post their own mini-biopics — supportive or not.

The debate among political analysts is whether Internet video will make public figures even more preprogrammed, or whether it will encourage them to loosen up, show their personalities, and communicate more directly.

Comments (0) | Filed under: Current Events, Technology

August 16, 2006, 8:08 pm

Medical Products and the User-Centric Experience

By Lisa Agustin

This week’s Innovation column in BusinessWeek Online features an interview with Stuart Karten, principal of Stuart Karten Design, an industrial design firm known for its user-centric approach to product design. The interview focuses specifically on Karten’s experience designing medical products, including a bone marrow biopsy needle, an infant ventilator, and a defibrillator.

Karten’s approach to medical product design extends beyond form following function, taking into account not only the product itself, but the context in which it will be used. On the question of what makes for a successful defibrillator, Karten notes:

What we realized is the actual frequency of use is really low, but when you have to use one, your adrenaline is pumping and you’re in a very highly charged state. So the ability to educate prior to use is important, and in this case we’re designing a public defibrillator, so we’re thinking about it like a public health service announcement.

Karten’s research techniques are familiar ones to information design practitioners, and include interviews and direct observation of the user interacting with the object (user testing, anyone?). It’s yet another example of how understanding and improving the user experience is the key to creating a successful product.

Comments (0) | Filed under: Business, Technology, Usability

August 11, 2006, 9:46 am

Reverse-Engineering Utopia

By Lisa Agustin

It’s time to catch up on summer reading. The Knowledge@Wharton site offers an excerpt from Idealized Design: How to Solve Tomorrow’s Crisis…Today, in which authors Russell L. Ackoff, Jason Magidson, and Herbert J. Addison propose what seems to be a simple idea: “the way to get the best outcome is to imagine what the ideal solution would be and then work backward to where you are today.” According to the authors, this “ensures that you do not erect imaginary obstacles before you even know what the ideal is.”

The book is based on the collective experiences of the authors. Ackoff’s seminal experience began on a side trip he took in 1951 to visit an acquaintance at Bell Labs. While there, he inadvertently became part of an all-hands meeting called to innovate the telephone communications system–a system that had not introduced a revolutionary contribution since 1900.

Tasked with improving the system as a whole rather than its individual parts, the six sub-system teams were instructed to design whatever integrated system they wanted, subject to only two constraints: technological feasibility and operational viability.

Interestingly, Ackoff noted that after his involvement ended and these design teams continued their work:

They anticipated every change in the telephone system, except two, that has appeared since then. Among these are touch-tone phones, consumer ownership of phones, call waiting, call forwarding, voice mail, caller ID, conference calls, speaker phones, speed dialing of numbers in memory, and mobile phones. They did not anticipate photography by the phone or an Internet connection.

Ackoff’s description of how the teams approached this challenging task contained two elements worth noting: an early phase of analyzing existing system problems and establishing users’ needs or requirements, and then working with each sub-system team to get a better understanding of how suggested improvements would impact the larger system. Above all, this approach reveals that creative thinking combined with a rigorous analytical process can result in big changes.

Comments (0) | Filed under: Books and Articles, Business, Technology

July 28, 2006, 12:24 pm

What Makes A Successful Blog?

By Lisa Agustin

New York Times technology columnist David Pogue recently posted his interview with (in)famous blogger Ana Marie Cox, the original editor behind Wonkette, a behind-the-scenes look at political happenings and gossip in Washington, D.C.

Now the Washington editor for Time.com, Cox offered her take on the popularity of blogging and why the number of blogs continues to skyrocket:

[It] has a very low bar to entry. But the reason why anyone does it, I think, has to do with, like, having an opinion you believe is worth other people hearing, and having something to say beyond to the three or four people you talk to every day. And I think that’s why people get into journalism. And so it sort of would be a little odd if, given a chance to talk to a couple million people, rather than a couple hundred thousand people, you said no.

As for how to be successful, Cox suggests that would-be bloggers have a “strong, defined personality with a sense of humor about themselves. An ability to filter news quickly and to recognize…what is interesting to other people as well as interesting to themselves, and finding the balance between those things.”

Comments (0) | Filed under: Technology

July 21, 2006, 4:04 pm

The World Cup on Mobile Phone

By Henry Woodbury

It’s like the early days of Web design, but more so. This Design Interact article describes how Yahoo planned and delivered its mobile device site for the 2006 World Cup. The goal was to make a site that could work on as many browser-enabled phones as possible. The problem was the baffling idiosyncrasies of those devices:

“The Web browsers on phones vary from basic to super basic,” explains Keith Saft, senior interaction designer at Yahoo! Mobile. “They also have these eccentric bits of HTML and CSS that they don’t support, and there aren’t really any standards or consistency across phones.”

As they catalogued the technical limitations of mobile browsers, the Yahoo team created a design strategy that prioritized usability:

With production also came usability testing. And here, surprisingly enough, the team did not try to achieve perfect layout and content consistency on every phone. Instead, it wanted to make sure that users understood something it called “design intent.”

Do users navigate efficiently through the site? Do they understand how items are grouped on a screen? Can they retrieve the information they want? “Design intent” is design by information architecture.

Comments (0) | Filed under: Implementation, Information Architecture, Sports, Technology, Usability

July 8, 2006, 8:31 am

Flash Takes Over Video

By Henry Woodbury

With YouTube, Google Video and other Web sites using Flash as their video format, the animation player has leapfrogged over more established competitors:

Flash has soared from zero to No. 2 in its market in just two years, according to Paul Palumbo, research director for Accustream iMedia Research. Microsoft’s Windows Media format is the leader, handling 60 percent of all streaming video in 2005; Flash has 19 percent of the market, jumping ahead of RealNetworks at about 10 percent and Apple’s QuickTime, with about 8 percent.

“Flash is going to be dominant,” Palumbo said. “You can embed this into the Web page and it’s instantly ‘on.’ It’s a seamless process.”

The fact that Flash is embedded in the browser also means that it “plays nice” with other programs. It does not attempt to establish itself as the default video application on your system. Nor does it relentlessly bug you to upgrade to a “pro” version.

Seamlessness is a marketing decision, not just a design decision.

(hat tip: Paid Content)

Comments (0) | Filed under: Business, Technology

June 20, 2006, 1:05 pm

Designing On a (Really) Small Scale

By Lisa Agustin

Nanotechnology is science and engineering at the scale of atoms and molecules. Think about these futuristic-sounding scenarios, described by the New Scientist:

Imagine a world where microscopic medical implants patrol our arteries, diagnosing ailments and fighting disease; where military battle-suits deflect explosions; where computer chips are no bigger than specks of dust; and where clouds of miniature space probes transmit data from the atmospheres of Mars or Titan.

Now think about what would be involved in designing these materials and devices–objects that are so tiny that nothing can be built any smaller. The NS Technology blog recently posted a link to NanoEngineer 1, software that lets nanoengineers create moving blueprints for their nanoscale designs. The NanoEngineer site’s gallery of animations includes intricate gears and bearings, among them a first-time simulation of the Drexler-Merkle Differential Gear. (A much larger version of this kind of gear lets the wheels on a car rotate at different speeds as it goes around a corner.) While the static model does a good job of describing the gear’s internal assembly, the animation adds another level of understanding to how the various components work together.

For the New Scientist Technology blog: http://www.newscientist.com/blog/technology/2006/06/nanoengineers-toolbox.html

For more information on nanotechnology: http://www.newscientisttech.com/channel/tech/nanotechnology

Comments (0) | Filed under: Technology, Visual Explanation

June 6, 2006, 9:44 am

What Does Google Run On?

By Henry Woodbury

According to Instapundit Glenn Reynolds, Google runs on trust. Which makes him wonder about its prospects:

Lately, though, I’ve been wondering if Google has peaked. The reason is that, for lots of different groups of people, Google’s reputation as good guys has been stained. And I’m not sure what Google really has to bank on, besides a good reputation.

Reynolds points out that users can easily switch from Google to a competitor like Ask just by typing a different URL. However, the barriers to change are not as low as he suggests — baseline users will use Google until it fails as a service, while the webheads who are paying attention to Google’s PR problems may also have a Gmail account or run Google Desktop. And until advertisers see a change in traffic, they have no incentive to switch to a lesser-known service.

On the last point, Reynolds does link to a Buzzmachine post that suggests that Google’s marketing approach is not extensible. This doesn’t mean the current “views and click-through” model is at any risk, however.

Comments (0) | Filed under: Business, Technology

May 18, 2006, 8:32 am

The Universal Library and Who Owns It

By Henry Woodbury

The New York Times Magazine this week sports a long essay by Kevin Kelly about the possibilities of an electronic, universal library:

When fully digitized, [all the information in the world] could be compressed (at current technological rates) onto 50 petabyte hard disks. Today you need a building about the size of a small-town library to house 50 petabytes. With tomorrow’s technology, it will all fit onto your iPod. When that happens, the library of all libraries will ride in your purse or wallet — if it doesn’t plug directly into your brain with thin white cords.

As a “senior maverick” at Wired magazine, Kelly unfolds some very interesting and imaginative possibilities. After discussing the obvious advantages of linked bibliographies and cross-references, Kelly elaborates on “Books: the Liquid Version”:

At the same time, once digitized, books can be unraveled into single pages or be reduced further, into snippets of a page. These snippets will be remixed into reordered books and virtual bookshelves. Just as the music audience now juggles and reorders songs into new albums (or “playlists,” as they are called in iTunes), the universal library will encourage the creation of virtual “bookshelves” — a collection of texts, some as short as a paragraph, others as long as entire books, that form a library shelf’s worth of specialized information. And as with music playlists, once created, these “bookshelves” will be published and swapped in the public commons. Indeed, some authors will begin to write books to be read as snippets or to be remixed as pages.

At the moment, writes Kelly, the real obstacle facing the universal library isn’t technology, but copyright:

In the world of books, the indefinite extension of copyright has had a perverse effect. It has created a vast collection of works that have been abandoned by publishers, a continent of books left permanently in the dark. In most cases, the original publisher simply doesn’t find it profitable to keep these books in print. In other cases, the publishing company doesn’t know whether it even owns the work, since author contracts in the past were not as explicit as they are now. The size of this abandoned library is shocking: about 75 percent of all books in the world’s libraries are orphaned. Only about 15 percent of all books are in the public domain. A luckier 10 percent are still in print. The rest, the bulk of our universal library, is dark.

Google has an answer. But it’s being contested by publishers. Read the article to get the gory details.

Comments (0) | Filed under: Business, Scholarly Publishing, Technology

January 13, 2006, 10:23 am

Is it Illegal to Annoy on the Internet?

By d/D

News.com correspondent Declan McCullagh has caused a stir among bloggers and free speech advocates with his report of a new U.S. law that makes it a crime to “annoy” other individuals via an anonymous email or Web post:

“Buried deep in the new law is Sec. 113, an innocuously titled bit called ‘Preventing Cyberstalking.’ It rewrites existing telephone harassment law to prohibit anyone from using the Internet ‘without disclosing his identity and with intent to annoy.’”

McCullagh worries that the law “could imperil much of Usenet” and be used against whistle blowers.

http://news.com.com/Create+an+e-annoyance%2C+go+to+jail/2010-1028_3-6022491.html?tag=newsmap

Looking to the legal experts, opinion is divided. Professor Orin Kerr asserts that the law is actually just an extension of a long-standing ban on telephone harassment, which takes constitutional speech protection as a given. What is affected is not speech laws, but the definition of “telecommunications device”:

“Now I suppose you can criticize Congress for being lazy. They haven’t rewritten the old 1934 statute in light of the modern First Amendment, and that has resulted in a criminal statute that looks much broader than it actually is.”

http://www.volokh.com/archives/archive_2006_01_08-2006_01_14.shtml#1136873535

However, First Amendment expert Eugene Volokh points out that extending old laws to new technologies can have unexpected consequences:

“How is this different from traditional telephone harassment law? The trouble is that the change extends traditional telephone harassment law from a basically one-to-one medium (phone calls) to include a one-to-many medium (Web sites). This is a big change.”

http://volokh.com/archives/archive_2006_01_08-2006_01_14.shtml#1136923654

Comments (0) | Filed under: Current Events, Technology

November 10, 2005, 10:54 am

The 500 Pound Web Application (and its Brother)

By d/D

Clearly challenged by the success of Google’s Web applications, Microsoft is repackaging many of the features of MSN and Office into a pair of new online services:

“Windows Live and Office Live will give users much of the functionality of the company’s two most profitable products but without requiring them to install and maintain the software on a computer hard drive.”

Potentially of great interest in this move is the capability of online applications to leverage collaborative use:

“Office Live Collaboration provides 22 small business applications along with tools to let distant users together edit documents in Word, Excel and other Microsoft formats through the Internet.”

http://www.computerworld.com/softwaretopics/software/story/0,10801,105868,00.html

Comments (0) | Filed under: Implementation, Technology

October 10, 2005, 12:18 pm

Check Your Site in A9

By d/D

Amazon.com’s A9 search engine couples search hits with interesting metadata. Do a “Web” search and each site returned includes a “Site Info” icon that pops up information such as “traffic rank,” “sites that link here,” and “people who visit this page also visit…” The site stats come from Alexa Internet, a subsidiary of Amazon.

Other A9 searches pull up books or movies, Wikipedia articles, images, and many other types of data. There is a sense of serendipity that comes from trying these out. A search for images with “Dynamic Diagrams” as the keyword, for example, pulled up many images related to our work (some by us, some by others) from all over the Web.

http://a9.com/

Comments (0) | Filed under: Technology

October 10, 2005, 11:09 am

Enhancement Overload

By d/D

A new paper by Wharton marketing professor Robert J. Meyer, with Shenghui Zhao of Wharton and Jin Han of Singapore Management University, describes a “paradox of enhancement,” the way in which perceived consumer interest pushes technology products to become too complex to use:

“When people are considering buying next-generation products, they find the bells and whistles attractive and decide to make the purchase, but when they acquire the products, they find the complexity of the new features overwhelming and end up using only the products’ basic features.”

This is a warning to technology companies caught up in the process of “nerds designing products for nerds.” Technologies that cross over to general consumer use may not be less sophisticated, but they must be simpler to use. That is the problem for interface and industrial designers to solve.

http://knowledge.wharton.upenn.edu/article/1292.cfm

Comments (0) | Filed under: Business, Technology

August 11, 2005, 1:04 pm

Instant (Business) Messaging

By d/D

Moving beyond personal use, instant messaging (IM) tools are being adopted by organizations as a means of quick communication and distributed collaboration:

“Of U.S. companies that have deployed internal IM networks, 44 percent did so to boost intraoffice communications…. But the potential cost savings also are compelling — 33 percent said they offer IM to their employees to reduce long-distance phone charges.”

In our own experience, we have found instant messaging to be a convenient way for people to quickly touch base and set up more formal communications — such as a conference call — especially when key personnel are in different time zones or travelling.

http://news.com.com/Businesses+are+getting+the+instant+message/2100-1032_3-5770640.html

Comments (0) | Filed under: Business, Technology

August 11, 2005, 12:42 pm

One Spammer Down…

By d/D

Joint lawsuits filed by Microsoft and New York State Attorney General Eliot Spitzer have resulted in a $7 million settlement from a business responsible for more than 38 million spam messages a year. Score at least this one for Microsoft:

“We have now proven that we can take one of the most profitable spammers in the world and separate him from his money,” said Brad Smith, Microsoft chief counsel.

http://news.bbc.co.uk/2/hi/business/4137352.stm

Comments (0) | Filed under: Current Events, Technology

August 11, 2005, 12:40 pm

Filtered Away

By d/D

The OpenNet Initiative, an organization dedicated to investigating and reporting on state efforts to control the Internet, has issued a disturbing report on China:

“China’s Internet filtering regime is the most sophisticated effort of its kind in the world…. It comprises multiple levels of legal regulation and technical control. It involves numerous state agencies and thousands of public and private personnel. It censors content transmitted through multiple methods, including Web pages, Web logs, on-line discussion forums, university bulletin board systems, and e-mail messages.”

http://www.opennetinitiative.net/studies/china/

Coinciding with this report are stories on concessions that major technology companies have made to the regime. Such companies include Microsoft, which agreed to Chinese requests that its weblog service, MSN Spaces, restrict words such as “democracy” and “Tibet”, and Yahoo, whose Chinese search engine filters out politically sensitive results.

http://www.guardian.co.uk/online/weblogs/story/0,14024,1506602,00.html

Comments (0) | Filed under: Technology

June 17, 2005, 1:14 pm

Wi-Fi Backlash?

By d/D

Some cafe owners are apparently questioning the economics of free Wi-Fi. Tables may be occupied for hours by patrons who make minimal purchases and inadvertently change the vibe of the establishment:

“A cafe’s nature can be classified as ‘office,’ ‘social,’ or a hybrid, according to research by Sean Savage, who recently earned his master’s degree from the University of California, Berkeley…. In his work, Mr. Savage found that an office cafe discouraged conversation and was filled with people who came alone and were focused on their work. Social cafes have customers who arrive in groups. ‘If you come into a place like that and it’s a particularly busy time, you get dirty looks if you open a laptop and start zoning out,’ Mr. Savage said.”

http://www.nytimes.com/2005/06/13/technology/13wifi.html (free registration required)

On his Weblog, Savage offers his own commentary on the story:

“I see no evidence of a new trend: both of the San Francisco cafes in question have been experimenting with limited access for more than a year.”

http://www.cheesebikini.com/archives/001103.html

Comments (0) | Filed under: Business, Technology

May 11, 2005, 1:16 pm

Manufacturers Follow their Users

By d/D

Democratizing Innovation, a new book by MIT Professor Eric von Hippel, explains how low-cost design tools let enthusiasts customize high-end products to their own specifications. Using the Internet, these “lead users” are able to popularize their ideas and create demand for them that filters back to the manufacturer:

“In a study at 3M, [Von Hippel] and several colleagues found that product ideas from lead users generated eight times the sales of ideas generated internally — $146 million versus $18 million a year — in part because lead users were more likely to come up with ideas for entire new product lines rather than minor improvements.”

http://www.dynamist.com/articles-speeches/nyt/innovation.html

Professor Von Hippel’s book is available for download from his Web site at:

http://web.mit.edu/evhippel/www/books.htm

Comments (0) | Filed under: Books and Articles, Business, Technology

April 7, 2005, 1:34 pm

The Case for a $100 Laptop

By d/D

In this interesting article on Nicholas Negroponte’s concept of a cheap, WiFi-based laptop, the “how” is almost as thought-provoking as the “why”:

“By using 1 gigabyte of solid-state memory to store software and data, ‘We’re thinking maybe you won’t need a hard disk drive,’ he says. And instead of expensive batteries, the $100 laptop could come with less-capable batteries and a hand crank for juicing them back up, like a radio on M*A*S*H.”

http://www.usatoday.com/money/industries/technology/maney/2005-02-08-maney_x.htm

Comments (0) | Filed under: Implementation, Technology

March 11, 2005, 10:03 am

Bug Free Health Care?

By d/D

Studies in the Journal of the American Medical Association report that treatment tracking software may be problematic for patients. This summary in the New York Times only vaguely distinguishes between data standards, software development, and usability, but clearly some of the reported problems relate to information architecture:

“To find a single patient’s medications, the researchers found, a doctor might have to browse through up to 20 screens of information….Among the potential causes of errors they listed were patient names’ being grouped together confusingly in tiny print, drug dosages that seem arbitrary and computer crashes.”

http://www.nytimes.com/2005/03/09/technology/09compute.html?incamp=article_popular_5 (free registration required)

The first of the JAMA studies, “Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors,” is offered for free on the association’s Web site:

http://jama.ama-assn.org/cgi/content/full/293/10/1197 (guest registration required)

Comments (0) | Filed under: Information Architecture, Technology

January 12, 2005, 2:29 pm

For Academics: Blog or Perish?

By d/D

Professor Tyler Cowen of George Mason University addresses the question: “how [do] blogging and academic scholarship go together?” Specifically, he wonders what might have inspired Professors Richard Posner and Gary Becker to enter the fray:

“I’ve heard that if Posner were a law school, his citation index would put him in or close to the top ten. And Becker just gave up his Business Week column a few months ago. He is also the most widely cited living economist, not to mention that Nobel Prize. So why are they blogging?”

http://www.marginalrevolution.com/marginalrevolution/2004/11/the_scholarly_c.html

The Becker-Posner blog is at:

http://www.becker-posner-blog.com/

Cowen credits Northwestern University Professor Eszter Hargittai with raising the question; Hargittai’s writings include some thought-provoking ideas and many links to other opinions on the subject:

“There are posts on blogs that are certainly much more original and careful in their arguments (and more clearly written) than many articles that get published in academic journals. I think people’s reluctance to consider blog writing as comparable to journal publishing comes from thinking about journals in a somewhat romanticized and unrealistic manner.”

http://www.crookedtimber.org/archives/002884.html

Comments (0) | Filed under: Scholarly Publishing, Technology

January 12, 2005, 2:27 pm

Collective Editing via Wiki

By d/D

Five years ago, Harvard Law Professor Lawrence Lessig published Code and Other Laws of Cyberspace, a well-reviewed book that argues that the Internet’s hardware and software protocols determine how the medium is controlled by vested interests.

To update the book, Lessig has decided to post its contents to a Wiki, a platform for collaborative editing by everyday users (most famously in the Wikipedia encyclopedia). Lessig will then edit the Wiki-based updates to produce the final new edition:

“My aim is not to write a new book; my aim is to correct and update the existing book. But I’m eager for advice and expert direction…. No one can know whether this will work. But if it does, it could be very interesting.”

http://www.lessig.org/blog/archives/002358.shtml

Comments (0) | Filed under: Books and Articles, Scholarly Publishing, Technology

December 8, 2004, 2:48 pm

The Online News… from 1836

By d/D

This past month, National Endowment for the Humanities Chairman Bruce Cole announced a project with the U.S. Library of Congress to place 30 million pages of old newspapers online:

“Now, with this new digital program, you will see the papers just as they were–you will be able to search the actual page. The technique is OCR–optical character recognition. In fact, there is already a model up on the Library of Congress site. It’s got the Stars and Stripes from World War One. It shows you the whole page and there’s a zoom device so you can focus in on a single story and be able to read it. It’s key word searchable. It’s a quantum leap from trying to read microfilm.”

The archive will start in 1836, the point at which the OCR technology can read typical newspaper type, and end in 1922, after which copyright issues come into play. However, all newspapers published in the United States, from 1690 to the present, will be included in an associated online bibliography.

http://www.neh.gov/whoweare/speeches/11162004.html

To see how the technology works, you can go to the Library of Congress’ Stars and Stripes archive and select any issue:

http://memory.loc.gov/ammem/sgphtml/sashtml/sashome.html

Comments (0) | Filed under: Scholarly Publishing, Technology

December 8, 2004, 2:44 pm

Google Rules for Scholarly Content

By d/D

Google’s new “Google Scholar” (http://scholar.google.com/) is a public search engine specifically targeted to scholarly information. Of interest are the implications of Google’s typically terse recommendations for submitting and accessing different kinds of content. Regarding abstracts, for example, Google requires open access:

“Regardless of the source, you should be able to see an abstract for any article, with the exception of those that are offline and referenced in citations only. Please let us know if you don’t see even an abstract.”

These are issues we’ve encountered many times in our work for university publishers and professional associations. Google’s recommendations are likely to start turning good practices into industry standards.

http://scholar.google.com/scholar/about.html

Google Scholar is generating a lot of interest online; here are two reports:

http://searchenginewatch.com/searchday/article.php/3437471

http://www.resourceshelf.com/2004/11/wow-its-google-scholar.html

Comments (0) | Filed under: Information Architecture, Scholarly Publishing, Technology

November 10, 2004, 3:14 pm

Complexity and its Costs

By d/D

The Economist surveys the state of the IT world in terms of one idea: complexity. The thesis is that complexity slows the adoption and spread of new technologies, undermines the usability of existing technologies, and, most bluntly, increases costs:

“The Standish Group, a research outfit that tracks corporate IT purchases, has found that 66% of all IT projects either fail outright or take much longer to install than expected because of their complexity. Among very big IT projects — those costing over $10m apiece — 98% fall short.”

The survey is mostly a catalog of known debates such as the virtues of Linux vs. Windows or voice-over-Internet vs. “plain old telephone service,” but it does pull together many different issues into a comprehensive pattern.

http://www.economist.com/surveys/displaystory.cfm?story_id=3307363

Comments (0) | Filed under: Business, Technology

November 10, 2004, 3:12 pm

Shortcutting the Semantic Web

By d/D

The W3C’s Semantic Web project is an attempt to define the attributes necessary to make Web data usable by database applications as well as people. Now, Sony Computer Science Laboratory is promoting its “emergent semantics” technology as an alternative. Instead of a markup-level tagging system, Sony’s system looks at how content is accessed and shared:

“In emergent semantics, a user’s agent bootstraps the information and categorization of content, such as the classification of music in genres. Through interactions among agents trading ‘favorite’ songs, genres emerge that are common to sets of users. Such emergent semantics as self-organizing genres are automatically tagged onto the content as an extra layer of information rather than depending on people to do the tagging.”

http://www.eetimes.com/article/showArticle.jhtml?articleID=51201131

The W3C’s Semantic Web home page is at:

http://www.w3c.org/2001/sw/
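For the curious, the emergent grouping Sony describes can be sketched in a few lines. This toy Python simulation is our own illustration, not Sony's algorithm: the trade log, the audience-overlap measure, and the 0.5 threshold are all invented for the example.

```python
# Toy sketch of "emergent semantics": genre labels are not hand-tagged
# but emerge from observing which users traded which songs.
trades = [
    ("ann", "song_a"), ("bob", "song_a"),
    ("ann", "song_b"), ("bob", "song_b"),
    ("ann", "song_c"), ("cai", "song_c"),
]

# Per song, the set of users whose agents traded it.
audiences = {}
for user, song in trades:
    audiences.setdefault(song, set()).add(user)

def jaccard(a, b):
    """Overlap between two user sets, from 0.0 (disjoint) to 1.0."""
    return len(a & b) / len(a | b)

# Greedily group songs with strongly overlapping audiences; each group
# becomes an emergent "genre" that could be tagged onto the content as
# an extra layer of metadata.
genres = []
for song, aud in audiences.items():
    for genre in genres:
        if jaccard(aud, genre["audience"]) >= 0.5:
            genre["songs"].append(song)
            genre["audience"] |= aud
            break
    else:
        genres.append({"songs": [song], "audience": set(aud)})
```

Running this groups song_a and song_b (traded by the same pair of users) into one emergent genre, while song_c, with a different audience, seeds a second.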


Comments (0) | Filed under: Implementation, Technology

October 13, 2004, 9:57 am

The Afterlife of Digital Information

By d/D

A recent story on National Public Radio describes a Library of Congress initiative to preserve digital information that can propagate, change, and disappear without a trace. As of December 2000, the Internet, just one digital medium, had more than 4 billion Web pages whose average life was 44 days.

Speaking to NPR’s Robert Siegel, Laura Campbell of the Library of Congress compared the repository to government photography archives from World War II:

“We can’t collect everything but we can certainly take a snapshot in time to tell the story about what local life was like…. We will have people who go through and sample what’s on the Web so that we can create that same kind of archive.”

http://www.npr.org/rundowns/segment.php?wfId=4062797

The Digital Preservation Program web site describes the breadth of the undertaking. Projects range from the technical development of Web archiving tools to the funding of specific archives to the defining of metadata standards:

http://www.digitalpreservation.gov/about/


Comments (0) | Filed under: Scholarly Publishing, Technology

September 16, 2004, 3:34 pm

RSS Stalled?

By d/D

Web designer Andrew Boardman has some interesting comments about whether the spread of the RSS syndication standard has stalled and what could get it going again. His kicker:

“Web browsers need to find a way to integrate RSS into their interfaces. Apple’s Safari is slated to do this, but until Microsoft works out a way to do it, RSS will fail.”

http://www.deckchairs.net/blogs/main/archives/000651.html
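For readers wondering what browser integration would actually have to digest, here is a minimal Python sketch of reading an RSS 2.0 feed. The feed content is invented, and a real reader would of course fetch the XML over HTTP rather than embed it inline.

```python
import xml.etree.ElementTree as ET

# An invented stand-in for a fetched RSS 2.0 document.
rss = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

# Parse the feed and pull out (title, link) pairs for each item --
# the essentials a browser would need to display a subscription.
channel = ET.fromstring(rss).find("channel")
items = [(i.findtext("title"), i.findtext("link"))
         for i in channel.findall("item")]
```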


Comments (0) | Filed under: Technology

August 12, 2004, 3:37 pm

Information Density and the David Rumsey Map Collection

By d/D

A common problem of digital media is its relatively low information density compared to print publications. Maps can be an especially rich way to present information, but online versions are often reduced to a series of equally simple bitmaps (consider Yahoo Maps or Mapquest, for example).

The David Rumsey Map Collection manages its library of high-resolution scans with powerful compression software and the Insight® browser, an Internet application designed specifically for browsing images as opposed to text. While one might consider alternative presentation methods that fit more seamlessly with standard browsers, the Insight® technology offers an example of a fully customized approach that is worthy of examination.

http://davidrumsey.com/index.html
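The collection's site doesn't detail how Insight® works internally, but one common approach to serving very large scans is a tiled zoom pyramid: each level halves the resolution, and the client fetches only the tiles covering the current viewport. A sketch of the arithmetic, with an invented scan size:

```python
import math

def pyramid_levels(width, height, tile=256):
    """Count the power-of-two zoom levels needed before the whole
    image fits within a single tile."""
    halvings = max(0, math.ceil(math.log2(max(width, height) / tile)))
    return halvings + 1

# A hypothetical 12,000 x 9,000 pixel map scan needs seven levels;
# at any zoom the browser downloads only a handful of 256px tiles
# instead of the full multi-megabyte image.
levels = pyramid_levels(12000, 9000)
```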


Comments (0) | Filed under: Technology, Visual Explanation

August 12, 2004, 3:35 pm

Extracting Data from User Forums

By d/D

Edmunds.com’s user forums contain more than 2.5 million messages and 100,000 car reviews. To extract meaning from this wealth of commentary, questions, and answers, Edmunds.com is analyzing the data with Attensity Corporation’s PowerDrill software. PowerDrill is notable for its use of sentence diagramming to identify the actors, actions and objects in unstructured text:

“[In beta tests] Edmunds.com was able to analyze trend information from conversations on the forums, including shopping and dealer behavior, re-occurring issues, and concerns which can also be used to predict future behavior.”

Tools such as PowerDrill that turn free-form text into relational data may prompt Web developers to take a new look at how they use forums, feedback forms, Web logs, and other free-form content spaces.

http://www.infoworld.com/article/04/08/06/HNedmunds_1.html
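Attensity's actual sentence-diagramming engine is far more sophisticated, but a toy sketch conveys the idea of pulling actors, actions, and objects out of free text. The verb list and sample sentences below are invented for illustration:

```python
import re

# Naive actor/action/object extraction from simple declarative
# sentences -- a toy stand-in for real sentence diagramming, which
# would use a full parser rather than a fixed verb list.
PATTERN = re.compile(
    r"^(?P<actor>\w+(?: \w+)?) "
    r"(?P<action>recommends|returned|bought) "
    r"(?P<object>.+?)\.?$",
    re.IGNORECASE,
)

def diagram(sentence):
    """Return {'actor', 'action', 'object'} or None if no match."""
    m = PATTERN.match(sentence.strip())
    return m.groupdict() if m else None

rows = [diagram(s) for s in [
    "The dealer returned my deposit.",
    "My mechanic recommends synthetic oil.",
]]
```

Each matched sentence becomes a relational row, which is what makes trend analysis over thousands of forum posts possible.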


Comments (0) | Filed under: Information Architecture, Technology

August 11, 2004, 3:41 pm

“Do not touch the blue E!”

By d/D

For the first time since beating out Netscape, Internet Explorer is losing market share:

“No one is forecasting the demise of Internet Explorer, but the most recent data from WebSideStory show that of visits to Web sites the firm tracks, the number made using Explorer declined 1.3 percent from early June to mid-July. At the same time, use of other browsers – Firefox and Opera in particular – rose.”

The key impetus for ordinary users to seek out a different browser appears to be dissatisfaction with pop-up advertisements. Download speed and security concerns also play a role.

http://www.nytimes.com/2004/08/12/technology/circuits/12brow.html (free registration required)


Comments (0) | Filed under: Current Events, Technology

July 9, 2004, 4:42 am

Interaction Design for the Internet

By d/D

According to designer Philip van Allen, the Internet could and should be far more interactive. Users need new tools to act as producers of meaning, rather than passive consumers of information. The goal is what van Allen calls “productive interaction”:

“In contrast to traditional media, productive interaction’s strength is facilitating and provoking the dialog. It enables juxtaposition, and supports the remixing of the actual content.

“Productive interaction gives the reader a pair of scissors and permission to cut up the book.”

Beyond an examination of data structures, increased collaboration between designers and software programmers, and the rethinking of authoring systems, van Allen calls for a broad commitment to experimentation:

“Interaction designers should devote part of their practice to breaking the common constraints; designing for very large displays, moving away from the ‘mouse crouch,’ incorporating tangible interfaces, and experimenting with new delivery systems.”

http://ojr.org/ojr/technology/1088538463.php

A detailed research paper and demo are available from van Allen’s Web site:

http://productiveinteraction.com/


Comments (0) | Filed under: Technology, Web Interface Design

May 8, 2004, 8:26 am

Where to Go Wireless

By d/D

If you’re traveling — to Brighton Beach, say, or a San Francisco Giants baseball game — JiWire has a guide to wireless hot spots around the world:

http://www.jiwire.com/


Comments (0) | Filed under: Technology

April 8, 2004, 8:36 am

Rethinking Encyclopedias

By d/D

Over the past decade the expansion of electronic alternatives has dramatically undermined the encyclopedia market. In response, publishers are looking for ways to obtain more value from their content:

“The shrunken reference powers that survived the shakeout — namely Britannica, World Book, and Grolier, the maker of Encyclopedia Americana now owned by Scholastic Library Publishing — have now retooled to focus more on online products.

“Voluminous sets are still printed, but mostly only for institutions. The encyclopedia companies are also targeting consumers with more concise and less expensive reference books.”

Online, the possibilities are exciting. The same data that drives an encyclopedia Web site could be queried by many different kinds of customized informational tools. The success of such tools, however, depends upon equally customized information architectures, each tailored to help a specific audience extract meaningful information from a specific body of content.

http://www.cnn.com/2004/TECH/internet/03/11/disappearing.encyclopedia.ap/
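As a thought experiment, here is how one body of encyclopedia content might back two differently tailored tools. The article store and both views are entirely hypothetical:

```python
# One hypothetical article store...
articles = [
    {"title": "Photosynthesis", "level": "intro", "words": 800},
    {"title": "Calvin cycle", "level": "advanced", "words": 2400},
]

# ...queried by two audience-specific front ends, each an
# "information architecture" tuned to what its readers need.
def school_view(store):
    # Students get the short, introductory entries.
    return [a["title"] for a in store if a["level"] == "intro"]

def researcher_view(store):
    # Specialists get the long-form treatments.
    return [a["title"] for a in store if a["words"] > 2000]
```

The point is that the content is written once; the tailoring lives entirely in the query layer.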


Comments (0) | Filed under: Scholarly Publishing, Technology

March 9, 2004, 9:21 am

Google Under Fire

By d/D

Can a business model based on an algorithm succeed? As Yahoo ends its partnership with Google (see http://news.com.com/2100-1024_3-5160710.html ), the popular search engine faces aggressive new competition:

“‘When Google first launched, they had some new tricks that nobody else had thought about before,’ says Doug Cutting, an independent software consultant…. But plenty of other search engines now offer intriguing alternatives to Google’s techniques….

“For example, there’s Teoma, which ranks results according to their standing among recognized authorities on a topic, and Australian startup Mooter, which studies the behavior of users to better intuit exactly what they’re looking for. And then there’s the gorilla from Redmond: Microsoft is turning to search as one of its next big business opportunities.”

http://www.technologyreview.com/articles/roush0304.asp?p=0 (free registration required)


Comments (0) | Filed under: Business, Technology

January 8, 2004, 9:41 am

No More Unbinding and Rebinding

By d/D

In its understated way, Micropaleontology Press, featured in a recent NPR story (http://www.npr.org/features/feature.php?wfId=1572223), points out another advantage of electronic media:

“In 2003, the Foraminifera Catalogue reached 106 looseleaf volumes containing more than 87,000 pages … Since all the printed volumes must be unbound and rebound each year for the alphabetic insertion of 500 to 600 additional pages, the internet edition has quickly become popular.”

See the very bottom of http://micropress.org/history.html


Comments (0) | Filed under: Scholarly Publishing, Technology

December 8, 2003, 9:47 am

Dead Links and Scholarly Research

By d/D

When footnotes are URLs, footnotes disappear. Faster than you may think:

“In research described in the journal Science last month, the team looked at footnotes from scientific articles in three major journals — the New England Journal of Medicine, Science and Nature — at three months, 15 months and 27 months after publication. The prevalence of inactive Internet references grew during those intervals from 3.8 percent to 10 percent to 13 percent.”

http://www.washingtonpost.com/ac2/wp-dyn/A8730-2003Nov23

Mentioned in the article is the digital object identifier system known as DOI. This is a system we’ve seen used effectively in our work for scientific publishers. The DOI web site is http://www.doi.org/.
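A link-rot audit like the one described might begin with a purely structural triage of footnote URLs before any network checks. This Python sketch, with invented footnotes, flags references that cannot possibly resolve:

```python
from urllib.parse import urlparse

def looks_resolvable(url):
    """Cheap pre-check: a footnote URL needs a web scheme and a host
    before a network request is even worth attempting."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

# Invented footnote list mixing a good reference with broken ones.
footnotes = [
    "http://www.doi.org/",
    "htp://typo.example.org/paper",  # malformed scheme
    "/relative/path/only",           # no host at all
]
suspect = [u for u in footnotes if not looks_resolvable(u)]
```

Surviving URLs would still need periodic live checks, which is exactly the maintenance burden a stable identifier layer like DOI is designed to absorb.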


Comments (0) | Filed under: Scholarly Publishing, Technology

December 8, 2003, 9:45 am

PowerPoint: Love it or Loathe it?

By d/D

David Byrne has learned to love it:

“Although I began by making fun of the medium, I soon realized I could actually create things that were beautiful. I could bend the program to my own whim and use it as an artistic agent.”

http://www.wired.com/wired/archive/11.09/ppt1.html

Edward Tufte believes it is an evil program:

“Power corrupts, PowerPoint corrupts absolutely.”

http://www.wired.com/wired/archive/11.09/ppt2.html

Our take? By importing images and objects (including Flash movies) we can make our presentations as customized as we like. Even Tufte admits that PowerPoint “is a competent slide manager.” Furthermore, we have found PowerPoint quite useful for making wireframes (page schematics). It is sufficiently flexible, accommodates notes, and everyone has it (unlike Visio, for example), so clients can circulate documents easily within their organizations.


Comments (0) | Filed under: Technology