by Malcolm Garrett
Over the years, we have become accustomed to interacting with digital media only indirectly, via the spatially disconnected mediation of screen and keyboard. But now, as digital technology becomes invisibly embedded in everyday things, the “feeling” of everyday things is also increasingly becoming embedded in digital technology. “Real” objects are becoming more important and are set to redefine our relationships with digital information.
Bill Moggridge, the IDEO founder and laptop pioneer commonly regarded as the father of modern interaction design, has recently published Designing Interactions (MIT Press), a weighty tome in which he talks to pretty much everyone who has played either a direct role in, or influenced, the design of the ubiquitous desktop computers we are all now familiar with. Only twenty years ago such devices were largely unloved and somewhat unusable. Designing Interactions is packed with great interviews and astute observations, and is a comprehensive account of how we got to where we are. Yet in the time since the book’s publication, interaction environments have taken several leaps forward as technology becomes smaller and increasingly portable.
So I was keen to solicit Moggridge’s opinion on the advances being made to dissolve the separation between hard and soft interfaces. Recent developments include the multi–touch screens of Apple iPhone and Microsoft Surface; gestural interfaces with haptic feedback such as Nintendo Wii; the merger of real and digital information in augmented reality; and large spatial interactive systems, such as cyber/Explorer, a real–time virtual debating environment linking universities in Montreal and Paris, which I helped design while at I–mmersion in Toronto.
In conversation with Moggridge, I mentioned that, like all designers of my generation, I used to work with a large conventional drawing board. Following the purchase of my first Mac (1987) with its tiny screen, it occurred to me that it would be fantastic to have a screen at a much larger scale, comparable in size to the old drawing board, but with all the advantages of a computer, such as being able to layer and zoom what you’re working on.
“I think that big screen stuff still seems far away in terms of actually becoming realizable,” replied Moggridge, “although there is some nice work being done, including a Tangible Interface project called ClearBoard by Hiroshi Ishii at MIT.”
“We did a concept in 2000 for Business Week magazine where we looked at the future and one of the projects was for a horizontal large screen drawing board which you could use by writing on it or by touching it. As a concept that is absolutely valid, and always has been, but the realization of it into something that is credible in terms of price and performance is still some way away. I think ‘e–ink’ is a technology that might realize that board size. If you want legibility and clarity you have got to have more detail. I think the crossover point of human perception is something like 230 dots per inch, and if you multiply that for a big board size, the number of pixels needed is pretty demanding. We need a few Moore’s Law generations [of exponential miniaturization] before we can make that possible.”
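To put rough numbers on that reasoning, here is a quick back–of–the–envelope sketch; the A0 board size and the 1920 × 1200 reference display are my own assumptions, not figures from the interview:

```python
import math

# Hypothetical A0 drawing board, roughly 33.1 x 46.8 inches (assumed size).
BOARD_INCHES = (33.1, 46.8)
DPI = 230                       # Moggridge's perceptual crossover figure
REFERENCE_PIXELS = 1920 * 1200  # a high-end desktop display of the period (assumed)

# Total pixels needed to cover the board at the crossover resolution.
board_pixels = (BOARD_INCHES[0] * DPI) * (BOARD_INCHES[1] * DPI)

# How many doublings of pixel count separate that reference screen from the board.
doublings = math.log2(board_pixels / REFERENCE_PIXELS)

print(f"Board needs about {board_pixels / 1e6:.0f} megapixels")
print(f"Roughly {doublings:.1f} doublings beyond a {REFERENCE_PIXELS / 1e6:.1f} MP screen")
```

At around 80 megapixels, or five or so doublings of pixel count, the “few Moore’s Law generations” he mentions look about right.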
I wondered aloud whether young people, and future users, might need interaction design that spoke the language of their generation and reflected opportunities that they may have perceived but that the old guard is not yet aware of.
“I don’t believe in the generational thing being quite as important as you are implying,” said Moggridge. “Just looking at my own personal experience as a designer: I studied design using the techniques that I was taught, but they went out of date extraordinarily quickly. Drawing, for example, required gouache, watercolor and airbrush—along came the magic marker, and I stopped using those! And then along came the computer, and I stopped using magic markers. At least three times the simple skills I had were replaced. So although a generation will acquire the latest stuff intuitively as they grow up, they will still have to accommodate many more big changes in their lifetime.”
But isn’t the move from gouache to markers to drawing with a mouse a relatively logical step, simply doing the same sort of thing more efficiently? The move from 2D drawings to video, time-based or 3D environments feels like more of a conceptual shift. My immediate thought, when I bought my first computer, was that I would be able to do what I used to do better and faster. But I soon found that my new drawing board was not just a production tool: it was an authoring environment and a publishing platform, too. Even with that small step, there was a profound difference between what I expected and what I subsequently found.
“That sort of convergence continues to happen,” said Moggridge. “Take the relationship between computing and TV, for example. There are two important things there: one is the nature of the resolution, and the other is the nature of the interactivity. And if you look at the resolution, then HDTV does it by definition. If we get up to that level then that’s enough, that’s better than you have on a regular computer screen. As soon as the TV gets as many pixels as you have in computers then there won’t be a resolution conflict. So when you can see text on the TV as clearly as you can on the computer they become the same medium. The other main point though, is about interactivity. People think of TV, or they have done, as ‘read only’ while the computer is an interactive medium.”
And there’s a new generation who, through playing with a PlayStation or Xbox during their formative years, expect to be able to control what happens on the TV, rather than simply choose between predetermined alternatives.
“Yes exactly. And when the TV input is as fluid and as good as you find on a computer then that world collapses as well,” said Moggridge.
Not to mention that you can pick it up and move around with it as well, and that it facilitates user–creatable content in a variety of media.
“I would have thought the user interface principles are fairly simple. If they are young enough to have experienced that when they started that will become the norm. But then even old farts like me will be able to manage that convergence.”
But my point is that there are new social behaviors coming into play, and we’re seeing technology being put to uses that are fundamentally different from what was expected, along with some decidedly creative misuses. In delivering new options, technology also suggests that some problems are no longer so relevant. Why worry about the size of, or feedback from, tiny PDA buttons if you have direct access to the entire screen?
“Well the interesting thing there is that for me the PDA version of the mobile phone was always more likely to be easier to use than the cell phone. The big question about iPhone is whether the lack of tactile feedback will allow it to compete.”
Is it logical to expect that iPhone could become the laptop of tomorrow? This kind of direct screen input is what I have been waiting for since giving up my drawing board.
Moggridge is one of many designers who can show how patterns and archetypes from product design now frame new ways for people to deal with electronic information, and how new developments are emerging at the collision point between “real world” objects and digital interfaces. For example, in describing the Wii, Ross Phillips (Head of Interactive at SHOWstudio) observes that “the interaction feels right, but given the broader range of sensory information available, designers need to re–think and re–purpose existing feedback mechanisms. The slightest discrepancy between your physical movements and what you see/feel/hear can disrupt whatever experience you are engaged in.”
Design is now more concerned with human–computer interaction, where the issues are increasingly about behaviors and human factors rather than merely about technology, functionality or aesthetics. At Apple, now that its operating system is truly mobile, there is a renewed focus on interaction and connectivity rather than on the elegance of the hardware.
The significant distinction between the iPhone and other mobiles is that, like the iPod, it is not simply a gadget, but a seamless and thoughtful extension of the Mac Operating System that you can carry with you anywhere. It recognizes the need for functional limitations and does only those things that are desirable in a mobile environment. Any application that is not 100 percent necessary is confined to the computer at home, and, anticipating the ubiquity of Wi–Fi, it maintains your connection with a much wider network. With that one fundamental piece of logic, it succeeds over the competition.
But where it really excites is in that step towards my old ideal of a large–scale screen which can be addressed directly rather than via keyboard or mouse—which are somewhat lacking as tools for creativity. I can envisage a light, iPhone–like screen the size of a laptop, without a separate keyboard, that is instantly responsive and portable. Like many innovations, it starts with what appears to be just one small step.