Information Design Watch
September 23, 2011, 3:00 pm
By Henry Woodbury
Even on mobile devices, a web app can beat out a platform-specific app. That’s the case for the Financial Times (FT). FT spokesman Rob Grimshaw reports that their HTML5 web app draws more readers and more page views than their now-discontinued Apple store app.
This is a nice success story for web developers, but there’s more going on than traffic:
…Apple takes a 30 percent cut of subscription revenue from users who sign up for apps in the store.
More problematic is that Apple wants to control subscriber data — valuable demographic information used by magazines and newspapers to sell advertising — from people who sign up for the app in the store.
For subscription-based publishers such as the FT, this is not a tenable position. One has to wonder whether other successful subscription-based sites are equally dissatisfied.
Of course, what makes the FT story unique is that its web app replaced its Apple store app. For many organizations the platform app will never get built, not when a comprehensive web development effort can leverage some common UI and code to target both desktop and mobile users.
“App stores are actually quite strange environments,” Grimshaw said. “They are cut off from most of the Web ecosystem.”
Update: Regarding my point in the paragraph before the last quote, Jason Grigsby’s Cloud Four critique of responsive web design is required reading. The mobile and desktop environments each deserve their own optimization.
(via a Tizra Facebook post)
January 14, 2011, 2:17 pm
By Henry Woodbury
One significant target for our Remodeling Dynamic Diagrams project is the redesign of this blog. The interface designs are close to final now and have us thinking about how we will import current content. Unlike our primary web site we will not recreate content or images for Information Design Watch. Instead we will create a WordPress theme and apply it to the existing posts.
The issue is this: our new blog design has a 640-pixel-wide content column, while the current design’s column is 690 pixels wide. Any image or object in our archives sized to the old 690-pixel maximum will not fit the new format.
We are approaching this issue in two different ways.
First, about a month ago, we set 640 pixels as the maximum image size in the current theme. This means that recent images are already optimized to work within the new design.
Second, the new design features a wide content margin. Using a negative margin CSS technique, images up to 690 pixels can extend into this margin without obscuring sidebar links or breaking the column.
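The negative-margin approach is easy to sketch. The selectors here are hypothetical (the actual theme’s class names will differ):

```css
/* The content column is 640px wide; a legacy 690px image
   overhangs the right-hand content margin by 50px instead of
   breaking the column or obscuring the sidebar. */
.post-content {
    width: 640px;
}
.post-content img.legacy-wide {
    margin-right: -50px; /* 690px - 640px */
}
```

Because the overhang falls within the widened content margin, the oversized image coexists with the sidebar rather than colliding with it.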
There is a third solution: manually edit each post that contains a 690-pixel-wide image and replace the image. That task awaits a design intern.
December 3, 2010, 10:18 am
By Henry Woodbury
In Tim’s last post on Remodeling Dynamic Diagrams he mentioned our decision to use web fonts. By maintaining font files on our server and referencing them via @font-face calls in our CSS files, we can bring to our web presence the Meta typeface we have long used in our diagrams, presentations, print collateral and Flash animations.
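The declaration itself is short. A sketch with hypothetical file paths (the licensed Meta Web files, and the exact formats supplied by the foundry, will differ):

```css
/* Serve EOT to Internet Explorer 6-8 and TTF to Firefox, Safari,
   and Chrome; browsers that understand the second src use it. */
@font-face {
    font-family: "Meta Web Normal";
    src: url("/fonts/MetaWebNormal.eot");                    /* IE 6-8 */
    src: url("/fonts/MetaWebNormal.ttf") format("truetype"); /* others */
}
body {
    font-family: "Meta Web Normal", Arial, Helvetica, sans-serif;
}
```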
This demo page shows the Meta Web version we have purchased for the site redesign. Internally we have tested it on Internet Explorer 6, 7, and 8, and current versions of Firefox, Safari, and Google Chrome (such incremental browser testing is part of our process). It also works on the iPhone’s Safari browser.
If the fonts on the demo page don’t resemble the image below on your browser, let us know!
UPDATE (December 9, 2010): As Andy mentions in the comments, the lower-case y in Meta Web Medium renders with a flaw. This appears on all Windows-based browsers. We’ve reprocessed the fonts and uploaded a new demo.
July 27, 2010, 3:05 pm
By Henry Woodbury
Say you’re a software engineer trying to explain asynchronous processing to people with a general interest in software. You might use Starbucks as an example. Over to you, Gregor Hohpe:
Starbucks, like most other businesses is primarily interested in maximizing throughput of orders. More orders equals more revenue. As a result they use asynchronous processing. When you place your order the cashier marks a coffee cup with your order and places it into the queue. The queue is quite literally a queue of coffee cups lined up on top of the espresso machine. This queue decouples cashier and barista and allows the cashier to keep taking orders even if the barista is backed up for a moment. It allows them to deploy multiple baristas in a Competing Consumer scenario if the store gets busy.
This is a quirky article that introduces a number of programming concepts in an accessible and entertaining way. Hohpe throws in the occasional deep dive — as with the “Competing Consumer” link in the quote — but even there the analogy helps you guess where such a link might take you.
Analogy speaks to shared experience. It provides a way — one way — to turn abstract concepts into visual explanation. I can almost see the coffee cups lined up in front of me.
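The analogy maps directly onto code. Here is a minimal sketch in Python, with threads standing in for baristas and the standard library’s queue for the line of cups (the names are illustrative, not from Hohpe’s article):

```python
import queue
import threading

# The cashier (producer) marks cups and lines them up on the queue;
# baristas (competing consumers) pull cups off as they become free.
orders = queue.Queue()
completed = []
lock = threading.Lock()

def cashier(names):
    for name in names:
        orders.put(name)  # taking an order never waits on a barista

def barista():
    while True:
        cup = orders.get()
        if cup is None:   # sentinel: the shift is over
            return
        with lock:
            completed.append(cup)  # "make" the drink

# Two baristas compete for the same queue of cups.
workers = [threading.Thread(target=barista) for _ in range(2)]
for w in workers:
    w.start()

cashier(["latte", "mocha", "espresso", "drip"])
for _ in workers:
    orders.put(None)  # one sentinel per barista
for w in workers:
    w.join()

print(sorted(completed))  # → ['drip', 'espresso', 'latte', 'mocha']
```

The queue decouples the two roles exactly as the coffee cups do: the cashier keeps taking orders even while both baristas are busy.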
January 12, 2010, 11:13 am
By Kirsten Robinson
Historic New England has launched a Centennial microsite to celebrate their 100th year of preserving New England’s history and to highlight centennial projects that they are creating in conjunction with community partners throughout the New England states. Key site features include an events calendar, photo galleries and slide shows, and video oral histories.
Historic New England selected Dynamic Diagrams to create the user experience for the site (research, information architecture, visual design, and XHTML and CSS coding). We worked with our development partners to implement a Plone content management system (CMS) that provides Historic New England — for the first time — with complete control to create their own pages.
The Centennial site is also a preview of things to come. Watch this space for a future announcement of Historic New England’s redesigned and enhanced main web site.
December 18, 2009, 11:11 am
By Lisa Agustin
Fresh from Google Labs: Google Browser Size, a nifty visualization tool for checking how much of a web page sits “above the fold,” i.e., what’s visible in the browser window without scrolling. Just type in any URL to see how the site looks. Color contours show different window sizes and the percentage of users that have this size or larger. (Presumably these percentages are based on Google’s own statistics.) For instance, in the example above, the “donate now” button falls within the 80% contour, meaning that 20% of users cannot see this button when they first visit the page. If getting donations is a priority of the site, the web design team now knows they ought to position the button higher on the page.
The tool works as an overlay, allowing you to interact normally with the page you’re examining, so you can easily review other pages on the site as well. This is great for sites that are about to be redesigned, or ones that you’re just curious about. I was also happy to discover that the tool works for designs still in development: I was able to view a .png on a project site, which gave me instant feedback on what will be visible on page load. Nice work, Google.
January 8, 2009, 2:56 pm
By Henry Woodbury
The upcoming CSS3 specification looks to codify some of today’s favorite interface design tricks, including rounded corners, drop shadows, alpha transparency, and custom fonts. Many of these features can already be accessed using the probable CSS3 style or a browser-targeted version of the same. For example, rounded corners have three test declarations:
-moz-border-radius (for Mozilla-based browsers such as Firefox)
-webkit-border-radius (for Webkit-based browsers such as Safari)
border-radius (the probable CSS style)
Here is a semi-transparent white box with rounded corners and a drop shadow. Two circles are overlaid to show transparency effects.
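The styles behind such a box look roughly like this, stacking all three declarations so each browser picks up the one it understands (the pixel values are illustrative):

```css
/* Unknown properties are simply ignored, so the prefixed and
   unprefixed declarations can coexist safely. */
.demo-box {
    background: rgba(255, 255, 255, 0.5); /* semi-transparent white */
    -moz-border-radius: 12px;             /* Mozilla-based browsers */
    -webkit-border-radius: 12px;          /* Webkit-based browsers */
    border-radius: 12px;                  /* the probable CSS3 style */
    -webkit-box-shadow: 4px 4px 8px rgba(0, 0, 0, 0.4);
    box-shadow: 4px 4px 8px rgba(0, 0, 0, 0.4);
}
```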
None of these effects show on the current version of MSIE 7 — so let that be your control. Some may not show on Firefox until the release of Firefox 3.1, but all work on Safari using the Webkit syntax.
April 1, 2008, 3:23 pm
By Henry Woodbury
Joel Spolsky offers a look ahead at Microsoft Internet Explorer 8. What he foresees is a web developer flamewar.
Headed by developer Dean Hachamovitch, the MSIE 8 team has decided to move its default mode away from MSIE 7 compatibility and closer to web standards. Spolsky offers a long quote from Hachamovitch’s announcement of this decision, but it boils down to this:
We’ve decided that IE8 will, by default, interpret web content in the most standards compliant way it can.
This means that some HTML pages coded to take advantage of MSIE 7’s quirks will break in MSIE 8.
This is a problem? It shouldn’t be.
Barring the introduction of any new quirks (say a new way to misinterpret the box model), there’s no reason any Web site HTML and CSS should break in MSIE 8. If a web site has been tested against MSIE 6, MSIE 7, Firefox, and Safari (as are all of our public-facing projects), and if its developers have used a robust HTML structure and the subset of mutually-supported CSS styles (rather than browser-sniffing to write specialty CSS), then the odds of that site rendering incorrectly in MSIE 8 should be very small.
November 27, 2007, 12:05 pm
By Lisa Agustin
The days of the web developer’s technical spec are long gone, writes columnist Richard Banfield: “In a world of intensely visual design, we have to ask why we still need to write massive documents to describe web products that real people will use.” According to Banfield, there was a time when it made sense to document everything before starting any software development, a practice driven largely by limited technology and lower design costs. These days, developing a web site or application demands a more agile approach, one in which visual tools play a key role:
“Once the priority of a project is established, the team should immediately move toward visualizing that idea. This can take many forms, but we have found that whiteboards and large pieces of paper work wonders to get everyone on the same page. Nothing slows down the creative process like a 60-page document, complete with spreadsheets and appendices.”
This has been our experience as well. While some engagements do require some type of written narrative — especially in cases where there needs to be a more detailed explanation of the application for a broader group outside of the development team — we’ve seen immense value in translating requirements into a visual form during all phases of a project. I would take Banfield’s comments a step further by suggesting that visuals are not just helpful tools, but can often replace specification documents as deliverables. Diagrams (for expressing high-level user experience), process flows (for explaining complex transactions), and heavily annotated wireframes (for describing functionality at the page-level) are “closer to reality” than a Word document that describes them. This makes the idea behind an application easier to understand and discuss, leading a group to consensus about direction much more quickly.
July 20, 2007, 10:37 am
By Lisa Agustin
Rich Internet Applications (RIAs) enable a user experience that’s more responsive and sophisticated than traditional HTML. But does crafting the RIA experience differ that much from architecting a traditional web site? Yes and no, says Adam Polansky in the latest ASIS&t Bulletin. Polansky, an information architect for an online travel company, was tasked with producing a trip planning application that had originally taken shape as an exciting proof-of-concept Flash demo, but which had not been scrutinized in terms of scalability, usability, or actual user needs.
Before moving forward, Polansky took a few steps back by employing traditional IA exercises such as wireframing (adapted to a more interactive experience) and usability testing to validate the direction and identify the holes. Besides pointing out the similarities and differences between building web sites and RIAs, he offers a good shortlist of pitfalls to avoid, including the potential for increased revision cycles and building interaction at the expense of content. I would tend to agree with him on both fronts. In our practice, we’ve found that constructing process flows and annotated wireframes are key to keeping everyone on the same page about the intended user experience and the possible trade-offs between vision and feasibility. These activities ease (if not eliminate) any worry of creating interaction for its own sake.
June 19, 2007, 10:56 am
By Lisa Agustin
Earlier this month, Fastcompany.com plugged the agile development approach that was used to redesign its home page. The approach in a nutshell, according to blogger Ed Sussman: “Vision, release, test, iterate. Repeat. Quickly.” Speaking metaphorically, think of design and development as a washing machine, not a waterfall. The organization initially planned to release the new design as part of a larger effort that encompassed new features and functionality. But in the end, they decided against it:
What if we had waited to get it all just right before we released FC Expert Bloggers? We’d still be in the dugout. We’d have been guessing instead of seeing what the market actually thinks. In an effort to make our product perfect, we probably would have been forced to spend loads of money fixing problems that might not have mattered to our readers.
The agile approach is one that certainly has its benefits — it’s flexible and means users get to see the latest features sooner, without waiting for an annual update. But in order to be successful, an agile approach still has to start with stakeholder and user requirements that are validated through an information architecture, design, and development process. Only then can an organization be sure its site’s “killer widgets” are truly meeting the needs of its audience.
February 26, 2007, 9:35 am
By Lisa Agustin
Reading a string of comments on a blog is not the most stimulating user experience. Moreover, if a blog post is riveting enough to start an online conversation via comments, following the exchanges between participants may require closer reading to see who said what. Enter the Identicon. Programmer Don Park developed the Identicon to give each commenter a visual identity: a privacy-protecting derivative of the commenter’s IP address is used to build a nine-block image identifying the writer. Referred to in its debut as “IP-ID,” the Identicon is written in Java and based on the first four bytes of a SHA-1 (Secure Hash Algorithm) hash. The visualization is a small quilt of nine blocks that uses 3 patch types, out of 16 available, in 9 positions. To try this yourself, visit Park’s blog and scroll down to the comment form, which will display your current Identicon. Mine at the time of this writing:
How it works: the Identicon code selects three patches, one for the center position, one for the four sides, and one for the four corners. Additional bits in the code determine the positioning, rotation, color, and inversion of the blocks.
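The derivation is easy to sketch in code. A hedged Python approximation (the field boundaries and salt handling here are illustrative assumptions; Park’s Java implementation defines its own bit layout):

```python
import hashlib

def identicon_params(ip: str, salt: str = "") -> dict:
    """Derive Identicon-style parameters from the first four bytes of a
    SHA-1 hash, as Park describes. The exact bit layout below is an
    illustrative assumption, not the real Java implementation."""
    digest = hashlib.sha1((ip + salt).encode()).digest()
    n = int.from_bytes(digest[:4], "big")
    return {
        "center_patch":  n & 0x03,          # center: one of the simpler patches
        "side_patch":    (n >> 2) & 0x0F,   # one of 16 types for the 4 sides
        "corner_patch":  (n >> 6) & 0x0F,   # one of 16 types for the 4 corners
        "side_rotate":   (n >> 10) & 0x03,  # rotation in 90-degree steps
        "corner_rotate": (n >> 12) & 0x03,
        "invert":        (n >> 14) & 0x01,  # swap foreground and background
        "color":         (n >> 15) & 0x1F,  # index into a color palette
    }

# The same address always yields the same quilt parameters.
print(identicon_params("192.0.2.1") == identicon_params("192.0.2.1"))  # → True
```

Because the hash is deterministic, the same IP address always renders the same quilt, which is what makes the image usable as an identity.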
Users with dynamic IP addresses will see their Identicons change over time. However, according to Park, an Identicon doesn’t appear to change often enough to affect identification beyond a “typical comment activity cluster” (presumably a single session during which a comment might be posted). Park adds:
I originally came up with this idea to be used as an easy means of visually distinguishing multiple units of information, anything that can be reduced to bits. It’s not just IPs but also people, places, and things. IMHO, too much of the web what we read are textual or numeric information which are not easy to distinguish at a glance when they are jumbled up together.
Besides the intended purpose of identifying individual users among a sea of many (e.g., wiki authors, customer tracking in CRM tools, etc.), there may be other uses as well, such as identification of individual computers within a large network. Plus the Identicon seems to be gaining in popularity: a PHP version is now available, as well as one that works for WordPress.
February 8, 2007, 11:28 am
By Henry Woodbury
Despite some dead links and the parody-inviting title, it’s a good resource. Most of the designers to whom the article links offer well-designed code; many provide thoughtful explanations of why you would even bother with it.
Herein lies the value of the list. Most of Smashing’s life-saving techniques are good only for spot use, but spending a little time with the code and commentary can give a web designer a lot of insight into how different CSS attributes interact.
January 23, 2007, 9:04 pm
By Lisa Agustin
Without fail, the start of the new year gets people thinking about What Will Be Big This Year. The latest issue of Digital Web Magazine features an interview with Doug Bowman, a Visual Design Lead with Google, in which DWM asked which apps from 2006 are most significant and what that means for 2007. Aside from the expected endorsements of Google’s Calendar and Spreadsheets, Bowman had some interesting comments touching upon the themes of selective content sharing (e.g., Six Apart’s Vox) and more consolidation (e.g., Yahoo! Mail).
But what piqued my interest the most were Bowman’s comments regarding “gesture user interfaces,” or UIs driven by the physical movements of the user. This is not a new thing, of course: dragging and dropping is something that most users accept (maybe even expect) in the latest applications. But recent offerings like the Nintendo Wii and the Reactrix interactive advertising display are giving us glimpses into what user experience may hold for the future. (Okay, so maybe the holographic screen in that Tom Cruise movie wasn’t completely off the mark?) What I find most interesting about gesture UIs is not so much what the final user experience will be for gesture-driven apps, but how one would architect and then document the desired experience. What kinds of description languages will need to be developed to describe the experience programmatically? What kinds of new user input paradigms will emerge? Stay tuned.
August 3, 2006, 10:23 am
By Henry Woodbury
Kevin Hale at Particletree has a pair of articles on prototyping and wireframing AJAX applications. These are excellent primers for web designers interested in working with AJAX developers. I learned some CSS details that will help me out. However, I think Hale misses one key point of prototyping and wireframing almost entirely.
That point is risk management. When a design is still under review, when process steps are still being determined, box wireframes and bitmap design comps allow an information architect and designer to develop ideas quickly and revise or even abandon them with minimal pain. At Dynamic Diagrams we prefer to avoid coding designs until we have agreement on all the major elements of the interface. If we can do at least some usability testing with bitmaps, that’s even better.
But what of the demands of the interactive AJAX-driven interfaces that Hale describes? One risk-free way to show AJAX interactivity is to present work you’ve already done, perhaps from your own development library. Once stakeholders agree about the types of interactivity they want, an actual interface can almost always be modeled as a sequence of static wireframes, convertible to static bitmaps: Here are the elements at point x; Here they are at point y. More useful than a working prototype at this stage may be a workflow diagram that shows an entire sequence of steps in one view:
This diagram shows the interaction of different user types with a help ticket, describes when that object changes status (open, under review, closed), and identifies which pages in the process have multiple functions — crucial information for an AJAX developer to understand.
All this said, there (usually) comes a time when a project advances beyond wireframes and design comps to coding and development. At this point, for the web designers working with AJAX developers, Hale’s advice makes a lot of sense.
July 21, 2006, 4:04 pm
By Henry Woodbury
It’s like the early days of Web design, but more so. This Design Interact article describes how Yahoo planned and delivered its mobile device site for the 2006 World Cup. The goal was to make a site that could work on as many browser-enabled phones as possible. The problem was the baffling idiosyncrasies of those devices:
“The Web browsers on phones vary from basic to super basic,” explains Keith Saft, senior interaction designer at Yahoo! Mobile. “They also have these eccentric bits of HTML and CSS that they don’t support, and there aren’t really any standards or consistency across phones.”
As they catalogued the technical limitations of mobile browsers, the Yahoo team created a design strategy that prioritized usability:
With production also came usability testing. And here, surprisingly enough, the team did not try to achieve perfect layout and content consistency on every phone. Instead, it wanted to make sure that users understood something it called “design intent.”
Do users navigate efficiently through the site? Do they understand how items are grouped on a screen? Can they retrieve the information they want? “Design intent” is design by information architecture.
July 11, 2006, 2:24 pm
By Lisa Agustin
Considered primarily an approach to programming, the “open source” method is now being applied to the unlikely area of font design, specifically for Linux.
Open source type design is not a completely new idea. In 2003, a font family called Vera was developed for open-source use. Under the license terms, anyone was permitted to make new fonts based on Vera, as long as the derivatives were given a different name. The latest effort in this movement is tied to DejaVu, a Vera derivative that has sparked the interest of different Linux players:
DejaVu has caught on widely enough for it to be the default font for Dapper Drake, the latest update to Ubuntu Linux. It may also become the default font for Red Hat’s Fedora version of Linux.
“DejaVu, from purely a user perspective, seems to be the one that has the momentum and benefits behind it,” said Rahul Sundaram, one of nine board members for the Fedora Project, which governs the Linux version.
Taking a collaborative approach to type design has been particularly helpful in addressing practical concerns for making fonts, such as the creation of special characters or glyphs for other languages:
In the software world, creating a new offshoot is called “forking.” The freedom to do so is one hallmark of an open-source project. Several designers launched their own Vera forks… The designers had initially created limited extensions to include Western languages such as Welsh or Catalan, then later took on larger and more ambitious extensions, such as Greek and Cyrillic.
The renewed interest in improving this aspect of Linux goes beyond improving typeface presentation for its own sake; it demonstrates that elements of the user interface are just as important as performance factors in offering Linux as an alternative to the Windows operating system.
May 15, 2006, 11:29 am
We recently attended the Success by Design Conference, an annual event sponsored by The Center for Design and Business in Providence, Rhode Island, USA (http://www.centerdesignbusiness.org/conf.html). The Center’s mission is to explore the intersection of design principles and business intelligence. This year’s conference focused on innovation in product design and service delivery. The following is a recap of our team notes and conclusions from several key sessions.
“Service Innovation: Design’s New Frontier” by Jeneanne Rae, Co-founder, Peer Insight, LLC
A nationally recognized thought leader for innovation management and design strategy, Jeneanne Rae of Peer Insight, LLC (http://www.peerinsight.com/), helps organizations recognize and take advantage of critical business opportunities. Services currently represent 80 percent of the U.S. economy, and that share is growing. As this market expands, companies need to think creatively about how to get a competitive edge. According to Rae, infusing service delivery with well-established design skills can lead to innovations in the customer experience. The designer’s skill set is a natural fit for improving service delivery because it encompasses the following:
Empathy. Designing a service experience requires understanding users — not just their goals, but also their emotional, social, and cultural needs.
Broader Thinking. Designers think about the possibilities: What if? What could be?
Visualizing and Prototyping. Designers are used to developing typical scenarios to better understand how and why a product might be used. Some service scenarios can be studied with a physical model (for example, a passenger train car, built to scale); others can benefit from getting user response to verbal, visual, or virtual scenarios.
Iterative Testing. Designers know that good products only become better by repeated testing and iterative improvement.
Integrated Solutions. Design takes into account the perspectives of both users and key stakeholders. Achieving a balance is key.
Rae acknowledged that innovation in service delivery is not without its challenges:
There is no product portfolio. A company that innovates a service will find it challenging to describe the offering in a way that has immediate appeal and can, at a glance, stand apart from the competition. This is where visualization can make a difference.
Services are fuzzy. Unlike products that can use a platform strategy and established pricing model, services require companies to think more conceptually about an offering that is intangible and perishable (can’t be inventoried).
It’s hard to go it alone. Innovating services delivery with design approaches requires some expert help; service companies need to recognize this and form professional partnerships as appropriate.
We found Rae’s talk to be both observant and insightful. Design isn’t (only) about making objects more attractive or fun to use. It’s about understanding what goes into the ideal customer experience, and working to achieve that through research, modeling and testing.
“Designing the Xbox 360 Experience” by Jonathan Hayes, Xbox Design Director, Microsoft
Jonathan Hayes was responsible for leading the development of the Xbox 360 (http://www.xbox.com/), the Microsoft entertainment system known not only for its powerful performance, but also its beautiful presentation. Microsoft’s goal of expanding the audience for Xbox beyond core gamers to a global market demanded a unique collaboration of artists, engineers and researchers. According to Hayes, “technology needs poetry.” But balancing the tension between technology and design required some ground rules:
Structure the process. Because the team was very large and distributed worldwide, establishing a process, milestones, and master timeline was essential to keeping the project on track. Groups worked on specific activities independently, but also had a clear idea of when to converge with the rest of the team to share results and feedback.
Structure the solution space. The subjective nature of design can lead to excessive iterations, sometimes without an end in sight (“The right design? I’ll know it when I see it.”) The Xbox team managed this risk by creating a visual framework for articulating possible solutions: a quadrant system that indexed “Mild to Wild” on one axis vs. “Organic to Architectural” on the other. Stakeholders and users were told to frame their feedback within the context of these terms. This allowed very different prototype designs to be evaluated at a thematic level with specifics deferred for later.
Predefine inclusive design values. Before beginning the design process, the team established the requirements that the new product had to meet. By doing this, the team eliminated the risk of personal preference steering the design solution.
Look at work in context and in person. Throughout the process, the team validated the proposed solution by testing the Xbox with potential users and eliciting their feedback.
Hayes’ session demonstrated that successful design solutions aren’t crafted in a vacuum and often require the input of other talented individuals such as researchers and technologists. To make such a collaboration work, there needs to be agreement on the criteria for success and how to get there.
“Innovate/Resonate: Tools for Change” by Stuart Karten, Principal, Stuart Karten Design
Stuart Karten Design (http://www.kartendesign.com/) is an industrial design consultancy that creates products using a user-centric approach. During his session, Karten outlined a specific process, “mode mapping,” that visually represents observational and ethnographic data. The mode mapping process for human activity typically involves the following key steps:
- Do the research, then determine personas for the research subjects and a common set of appropriate “modes.” For example, the modes for a person’s average day might include: family, friends, work, play, rest, transit, etc.
- Determine more specific modes for specific inquiries. People’s relationships with their cars might generate modes like: chauffeur, errand, commute, maintain, etc.
- Create sub-modes for personas that tie into a primary persona’s mode. A “parent” may link to a “child,” or a “patient” may be linked to a “caregiver.”
- Map the modes against appropriate axes, such as “State of Mind” and “Time” or “Active / Passive” and “Time.”
- Add pressure points: the fixed demands on individuals that do not change (e.g., a soccer practice schedule).
- Mark decision points — points where the subject has choices.
- Look for patterns across multiple subjects and label them with descriptive terms (e.g., “mad rush”).
- Look for ways to improve transitions and decisions within the key patterns.
Karten’s approach to solving product design challenges resonated with our own approach to discovering user goals and needs. Using visual methodologies to translate research into requirements is a powerful tool for creating successful design solutions.
December 9, 2005, 10:26 am
In an article on A List Apart, Nick Usborne comes close to defining information architecture without ever using the term. Addressing Web designers, he asks:
Now, just pause for a moment and think of all the design choices you have made over the last year, and the reasons why you made them. And think about the huge impact those choices might have had on the performance of the sites you worked on.
Usborne presents a scarily simple usability test to demonstrate his thesis. And counsels designers to act like information architects — that is, to talk to content owners and, even better, to actual users.
And Who Does Not?
Scott Jason Cohen presents the alternative view:
[T]o many, the information architect seems redundant. If the project involves heavy back-end implementation, the system and user flow will already be determined. Click here, go there – the tech people will have already figured this out. In terms of layout, a good visual designer will know not to make a page too damn cluttered. (It’s usually the client that insists on putting 3,000 links on the front page or making the logo spin.)
Cohen’s manifesto is entertaining but misses the mark in some fundamental ways. First, the “tech people” have not already figured everything out. Good back-end technologies are designed for customization; analyzing and diagramming process flows is an important part of our information architecture practice. Second, information architecture is creative. The challenge of organizing complex information for use by different types of users is never solved by prepackaged rules and good information architects know this.
Cohen characterizes information architects as outsiders who interfere with the design process. In fact, information architects and visual designers face the same challenges and work toward the same goals; the best designs come from collaborative practice.
November 10, 2005, 10:54 am
Clearly challenged by the success of Google’s Web applications, Microsoft is repackaging many of the features of MSN and Office into a pair of new online services:
“Windows Live and Office Live will give users much of the functionality of the company’s two most profitable products but without requiring them to install and maintain the software on a computer hard drive.”
Potentially of great interest in this move is the capability of online applications to leverage collaborative use:
“Office Live Collaboration provides 22 small business applications along with tools to let distant users together edit documents in Word, Excel and other Microsoft formats through the Internet.”
November 10, 2005, 10:47 am
The Web Standards Project has put together a one-page rendering that tests browser support for a variety of HTML and CSS standards, PNG transparency and Data URLs.
Here’s the test:
Here’s the explanation:
This focus on standards is admirable. While we code our designs to a less extensive but very stable set of standards, there are a number of CSS standards that, if implemented in Internet Explorer (to be specific), would streamline almost any coding project.
October 19, 2005, 12:27 pm
Gap, Inc., has relaunched its Web sites (Gap.com, BananaRepublic.com, and OldNavy.com) with a completely new, internally-built e-commerce system. Making extensive use of dynamic HTML, the new system is intended to help customers choose shirts, pants, and other items of apparel in a direct, intuitive way:
“Toby Lenk, president of Gap Inc. Direct, the company’s corporate catalog and online division, said the mouse-overs and pop-up windows eliminated the need to bounce the shopper off her browsing path each time she needed information.
“‘A lot of this was borrowing metaphors from the store experience,’ Mr. Lenk said. ‘When a woman walks into one of our stores, she can process things really quickly. Like when she’s browsing the racks, she takes a quick look at what the sizes and colors are, picks up something and keeps going. We’re trying to let her stay with the fashion.’”
http://www.nytimes.com/2005/09/12/technology/12ecom.html (free registration required)
Frankly, Gap’s new system was neither smooth nor fast when we tried it out. The DHTML shortcuts occasionally failed to respond, and the interactivity lauded in the New York Times article turns the interface into a rollover minefield.
August 11, 2005, 12:38 pm
One “remix” that didn’t make the BusinessWeek Online article mentioned here is a cool Google Maps Pedometer that allows you to overlay points on a Google Map and see the distance they mark. Developer Paul Degnan explains his inspiration for the idea:
“As a runner training for a marathon for the first time, I found myself wishing I had an easy way to know the exact distance a certain course is, without having to drag a GPS or pedometer around on my runs. Looking at Google Maps, and knowing there was a vibrant community of geeks hacking it, I knew there had to be a way. So here it is.”
August 11, 2005, 12:36 pm
A number of major Internet vendors and search engines have made their data and services available to outside programmers. As a result, innovative developers have begun creating new Web applications by adding customized functionality to data drawn from one or more other Web sites. BusinessWeek Online presents a “slide show” of such sites set to a (metaphorical) hip-hop soundtrack:
“…hip-hop culture’s mash-ups … combine two tunes to produce an entirely new song. Likewise, hackers are combining the data and features of two or more Web sites, creating entirely new, independent Web mash-ups…”
Page through the slide show using the links in the top right corner of the page (they are not immediately obvious).
July 11, 2005, 1:03 pm
“With most sites, when Web users click on words or a picture, the site’s software calls out to a server to pull data, perform a computation, or show an image. With sites developed using Ajax, the browser loads an engine that draws the user interface and performs the requests for information in the background. The result is software like Google Maps, which lets users pan and zoom around a map of the United States and Canada from continent down to street level.”
While Ajax may be excessively ad hoc for traditional application programmers, it could easily be embraced by Web designers who are already immersed in their own mixed-up world of markup languages, style sheets, and scripting languages.
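The pattern described above can be sketched in a few lines of JavaScript. This is an illustration of the general Ajax technique, not any particular site's code; the URL, parameters, and function names are invented for the example.

```javascript
// Minimal sketch of the Ajax pattern: the page stays put while a
// background request fetches data and a callback updates the interface.

// Pure helper: turn a parameters object into a URL query string.
function buildQuery(params) {
  return '?' + Object.keys(params)
    .map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    })
    .join('&');
}

// Browser-only wiring: issue the request asynchronously and hand the
// response text to a callback instead of reloading the page.
function fetchInBackground(url, params, onData) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url + buildQuery(params), true); // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onData(xhr.responseText);
    }
  };
  xhr.send(null);
}
```

In a page, a call like `fetchInBackground('/search', { q: 'maps' }, function (text) { document.getElementById('results').innerHTML = text; })` updates one region of the screen while the rest of the page stays untouched — the behavior that makes applications like Google Maps feel continuous.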
April 7, 2005, 1:37 pm
Alarmed by Google’s plan to index scholarly texts, Jean-Noël Jeanneney, president of France’s national library, has proposed (with the backing of President Jacques Chirac) a national search engine to handle the job for French universities:
“Why not let Google do the job? Its French version is used for 74% of internet searches in France. The answer is the vulgar criteria it uses to rank results. ‘I do not believe’, wrote Mr Donnedieu de Vabres in Le Monde, ‘that the only key to access our culture should be the automatic ranking by popularity, which has been behind Google’s success.’”
What Mr. Jeanneney proposes instead — rankings by a committee of experts — is not necessarily a bad idea, despite the fun The Economist has with the concept. The key, of course, is in the algorithm that would create such a “committee” out of citations, bibliographies, and other expert resources.
April 7, 2005, 1:34 pm
In this interesting article on Nicholas Negroponte’s concept of a cheap, WiFi-based laptop, the “how” is almost as thought-provoking as the “why”:
“By using 1 gigabyte of solid-state memory to store software and data, ‘We’re thinking maybe you won’t need a hard disk drive,’ he says. And instead of expensive batteries, the $100 laptop could come with less-capable batteries and a hand crank for juicing them back up, like a radio on M*A*S*H.”
February 11, 2005, 2:14 pm
An entertaining New York Times article discusses how computer researchers are trying to develop tools that help users avoid the distractions of email, instant messaging, and the Internet. This may be good news or bad news, depending on how you look at it. Microsoft, for example, is working on predictive software that will decide how busy you are and shield you from all but your most important email (and you thought the paper clip was annoying).
http://www.nytimes.com/2005/02/10/technology/circuits/10info.html (free registration required)
February 11, 2005, 2:12 pm
Google has just rolled out a roadmap feature (in beta). Appropriately, Google has taken a different approach from established map providers like Yahoo and Mapquest. Instead of querying for an address first, then refreshing the screen whenever a user changes the location, Google’s application “stitches” map images together, allowing users to pan by dragging the mouse.
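The stitching approach can be sketched with a little tile arithmetic. This is an assumed reconstruction of the general technique, not Google's actual code: the map is pre-cut into fixed-size image tiles, and the client computes which tiles cover the current viewport; dragging only shifts the origin and pulls in the few tiles that scroll into view.

```javascript
// Sketch of map tile stitching (illustrative, not Google's implementation).
var TILE_SIZE = 256; // pixels per square tile, a common convention

// Return the [column, row] indices of tiles covering a viewport whose
// top-left corner is at (x, y) in world-pixel coordinates.
function visibleTiles(x, y, viewportWidth, viewportHeight) {
  var firstCol = Math.floor(x / TILE_SIZE);
  var firstRow = Math.floor(y / TILE_SIZE);
  var lastCol = Math.floor((x + viewportWidth - 1) / TILE_SIZE);
  var lastRow = Math.floor((y + viewportHeight - 1) / TILE_SIZE);
  var tiles = [];
  for (var row = firstRow; row <= lastRow; row++) {
    for (var col = firstCol; col <= lastCol; col++) {
      tiles.push([col, row]);
    }
  }
  return tiles;
}
```

A 512×512 viewport at the origin needs a 2×2 grid of four tiles; dragging the map 100 pixels shifts the origin and adds only the next column of tiles, which is why panning needs no screen refresh.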
January 12, 2005, 2:35 pm
Here’s an interesting inside take on the design implications of the RSS news feed protocol by Wired editor Chris Anderson:
“[T]he Web for me has mostly turned into another text-and-minimal-graphics stream that automatically delivers content of interest, differing from my email only in that it’s not personal and doesn’t require my response. In other words, the age of curiosity or routine-driven surfing may be ending.”
RSS is still a niche technology. Its impact, if it does continue to spread, may not be to end “surfing” but to bifurcate the Internet into two different experiences: one informational and text-based; the other entertainment- and multimedia-based. This is hardly a new prediction; the spread of PDAs and mobile devices is a parallel case. RSS simply adds weight to the idea.
November 10, 2004, 3:12 pm
The W3C’s Semantic Web project is an attempt to define the attributes necessary to make Web data usable by database applications as well as people. Now, Sony Computer Science Laboratory is promoting its “emergent semantics” technology as an alternative. Instead of a markup-level tagging system, Sony’s system looks at how content is accessed and shared:
“In emergent semantics, a user’s agent bootstraps the information and categorization of content, such as the classification of music in genres. Through interactions among agents trading ‘favorite’ songs, genres emerge that are common to sets of users. Such emergent semantics as self-organizing genres are automatically tagged onto the content as an extra layer of information rather than depending on people to do the tagging.”
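A loose sketch conveys the idea in the quote. This illustrates the concept only, not Sony's algorithm: each user's agent labels a song locally, and a shared genre “emerges” as the label most agents converge on after trading favorites.

```javascript
// Conceptual sketch of emergent tagging (not Sony's actual system):
// given the genre labels that individual users' agents have assigned
// to one song, the emergent genre is the most common label.
function emergentGenre(localLabels) {
  var counts = {};
  localLabels.forEach(function (genre) {
    counts[genre] = (counts[genre] || 0) + 1;
  });
  var best = null;
  Object.keys(counts).forEach(function (genre) {
    if (best === null || counts[genre] > counts[best]) best = genre;
  });
  return best; // the tag attached as an "extra layer" of metadata
}
```

The point of contrast with the Semantic Web is that no one writes this tag by hand; it falls out of aggregate behavior.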
The W3C’s Semantic Web home page is at:
October 10, 2004, 3:27 pm
MoreGoogle is a small downloadable program that enhances Google search results with screen images, links and other bits of contextual information. This all happens at the browser level, after Google serves the page:
“…you get the exact, original Google search results. After your browser has loaded the search results, MoreGoogle adds features, but does not alter the results in any way.”
Since the beginning of the Web, designers and developers have debated whether or not content should be separated from design. Up to now, the debate has focused on the extent to which a client program should control design. With MoreGoogle and tools such as PurpleSlurple(TM), Web content is shown to be equally malleable.
June 17, 2004, 3:54 pm
The Pew Internet and American Life Project has redesigned its Web site, making it easier to browse its survey results, charts and tables. A good starting point is the site’s “Reports” page, which presents an interesting cross-section of research on Internet demographics, online activities, technology, and other topics.
April 8, 2004, 9:14 am
Google has posted two personalization features based on its bare-bones search engine. “Google Personalized Web Search” can “tailor search-engine results to a user’s specific interests,” while “Google Web Alerts” is an email alerting tool.
The personalized web search tool allows users to run a “normal” search, then filter the resulting hits dynamically based on a manually-entered profile. It’s easy to try out to see the effect:
Since most Internet users avoid advanced search options, the ability to adjust search results without re-running a search (e.g. with a “refine search” option) could be useful. While requiring users to enter a profile is unlikely to succeed, many Web sites could add baseline filters to their search results pages based on the site’s content and audience.
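The idea of adjusting results without re-running the search can be sketched as a client-side re-ranking pass. The scoring scheme and names here are invented for illustration; the point is only that a result list already in the browser can be reordered against profile terms with no new query.

```javascript
// Sketch of client-side result re-ranking (hypothetical scoring):
// results matching more profile terms float to the top of the list.
function rerankByProfile(results, profileTerms) {
  function score(result) {
    var hits = 0;
    profileTerms.forEach(function (term) {
      if (result.text.toLowerCase().indexOf(term.toLowerCase()) !== -1) {
        hits++;
      }
    });
    return hits;
  }
  // Sort a copy, highest score first; the original order is preserved
  // among results with equal scores.
  return results.slice().sort(function (a, b) {
    return score(b) - score(a);
  });
}
```

A site could seed `profileTerms` from its own knowledge of its content and audience, which is the “baseline filter” suggestion above: no profile form for the user to fill in, just a smarter default ordering.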
February 9, 2004, 9:38 am
A recent study of 8,000 subjects reveals that usability is the second highest rated factor in determining a Web site’s popularity — after good content. The issue, then, is whether usability is given sufficient priority in Web site development:
“…designing usability into a product involves first doing an analysis of the user’s needs, and then designing around those needs. If you haven’t done the analysis, you have to redo the design later on.”
http://www.technologyreview.com/articles/wo_pemberton121003.asp (free registration required)
The article points out that Web authoring standards themselves are increasingly based on usability concerns, with a specific example being the XForms module of the XHTML 2 markup language (see http://www.w3c.org/TR/xforms/).
Unfortunately, the major browsers are still playing catch-up with the XHTML 1 standard. In almost all of the pages we code, we use the “transitional” version of XHTML 1, and the transitional phase will likely continue for some time to come.