Initially we only had a keyboard for the command line and text entry. Then the mouse appeared for navigating two-dimensional planes of UI. Now the field of computing has a new input toy to play with: our hands. Touch, multi-touch, and gestural computing, also known as Natural User Interface (NUI), has become the newest input craze. Excitement around this has even spurred comments predicting the demise of the mouse in the next 3-5 years. Computer designers (and engineers) have become engrossed with the ability to touch the screen with multiple fingers and control software by waving their arms. In this excitement, however, have designers overlooked how to properly engage users and use multi-touch to create useful, innovative, and interesting experiences? Perhaps touch and gesture are simply the new shiny objects in the room, soon to be discarded for the next new thing. In my next few articles for Johnny Holland Magazine I’ll look at some of the details of touch and gesture computing and what I’ve learned as a practitioner in the field.
Before I dig in, I want to plug Designing Gestural Interfaces, by Dan Saffer. The book is a great starting guide and reference for anyone looking to get engaged in this field. I’d suggest grabbing a copy if you’re new to the ranks of touch and gesture design.
Touch is but one slice of the pie
Let’s start the journey here. As a designer on Microsoft Surface, I’m part of a team that is uncovering and discovering things as we go. In my work I’ve quickly learned that touch, gesture, and NUI are not right for everything. As obvious as this sounds, it’s often overlooked. They should be considered part of an input ecosystem. Each type of input below has unique attributes that make it good for certain kinds of interactions between users and systems. This is not a comprehensive list, but here are some of the most common input and interaction methods.
• Keyboard
• Mouse
• Single-point touch
• Multi-point touch
• Gesture
Each of these methods has pros and cons associated with it. Text input is a perfect example of a task for which touch is rather inadequate: an on-screen keyboard gives no haptic feedback when you press a key, and without tactile feedback there is no way to touch type. Touch also falls short in applications that require precision, such as Adobe Photoshop or Microsoft Office Excel. A mouse covers ground across the screen more quickly, doesn’t force the user to reach back and forth, and is more precise in its actions. Yet when people begin designing for touch, they forget all this, and seemingly everything else.
A belief I’ve heard is that touch can be so compelling that people will forget its inadequacies; in reality, this thinking only serves to shine a light on the downfalls of touch. When not done properly, touch and gesture can appear as a step backwards. The (design) problem takes a back seat to the “innovation” of touch. My advice for any designer approached by a client in need of a touch system (holding pictures of Tom Cruise in Minority Report) is to evaluate the problem first. Make sure the interaction fits the needs. Again, the key point is to consider touch as part of an input ecosystem, not always as the sole method of device interaction. Not all input methods are equal.
This early thinking has led me to squarely declare that tap is not the new click, despite what I’ve heard thrown around. Anyone who believes it is lacks an understanding of, and respect for, how to approach different problems and search for the best method of interaction between a user and a system.
A systematic approach to gesture integration
Most systems utilizing touch are purely touch based, with no additional methods of interaction. This sequesters touch from other interactions, making it more of a burden for users to learn. When a new behavior is introduced into a working knowledge system, it can be easier to absorb. In their recent laptops, Apple has taken the approach of incorporating touch into their behavior and input systems through the trackpad. In doing so they have managed to introduce and teach touch and gesture behaviors through a method users already accept (the trackpad). In addition, they are beginning to train people to move between input modes, from trackpad pointing, to gesture, to keyboard, depending on the task. These kinds of associations allow for a better learning and input experience. On the flip side, the gesture actions are secondary to the main system, so they can be ignored fairly easily. It will be interesting to see whether this makes gesture and touch easier to adopt, or whether people will disregard them.
Top image by pinksherbet