Touch and Gesture systems: what you haven’t heard

When not done properly, touch and gesture can appear as a step backwards.

Initially we only had a keyboard, for the command line and text entry. Then the mouse appeared for navigating two-dimensional planes of UI. Now the field of computing has a new input toy to play with: our hands. Touch, multi-touch, and gestural computing, also known as Natural User Interface (NUI), have become the newest input craze. Excitement around this has even spurred comments predicting the demise of the mouse in the next 3-5 years. Computer designers (and engineers) have become engrossed with the ability to touch the screen with multiple fingers and control software by waving their arms. However, in this excitement, have designers overlooked how to properly engage users and use multi-touch to create useful, innovative, and interesting experiences? Perhaps touch and gesture are simply the new shiny objects in the room, soon to be discarded for the next new thing. In my next few articles for Johnny Holland Magazine I’ll look at some of the details of touch and gesture computing and what I’ve learned as a practitioner in the field.

Before I dig in, I want to plug Designing Gestural Interfaces, by Dan Saffer. The book is a great starting guide and reference for anyone looking to get engaged in this field. I’d suggest grabbing a copy if you’re new to the ranks of touch and gesture design.

Touch is but one slice of the pie

Let’s start the journey here. As a designer on Microsoft Surface, I’m uncovering and discovering things as we go. In my work I’ve quickly learned that touch, gesture, and NUI are not right for everything. As obvious as this sounds, it’s often overlooked. They should be considered part of an input ecosystem. Each type of input below has unique attributes that make it good for certain kinds of interactions between users and systems. This is not a comprehensive list, but here are some of the most common input and interaction methods:
• Keyboard
• Mouse
• Stylus
• Voice
• Single-point touch
• Multi-point touch
• Gesture

Each of these methods has pros and cons. Text input is a perfect example of a task that touch handles rather inadequately: there is no haptic feedback upon pressing the keys, and no tactile reference for touch typing. Touch also falls short in applications that require precision, such as Adobe Photoshop or Microsoft Office Excel. A mouse can cover ground across the screen more quickly, without making the user reach back and forth, and it is more precise in its actions. However, when people begin designing for touch, they tend to forget all this, and seemingly everything else.


A belief I’ve heard is that touch can be so compelling that people will forget its inadequacies. In reality, a poor design only shines a light on the downfalls of touch. When not done properly, touch and gesture can appear as a step backwards. The (design) problem takes a back seat to the “innovation” of touch. My advice for any designer approached by a client in need of a touch system (holding pictures of Tom Cruise in Minority Report) is to evaluate the problem first. Make sure the interaction fits the needs. Again, the key point is to consider touch as part of an input ecosystem, and not always view it as the sole method of device interaction. Not all input methods are equal.

This early thinking has led me to squarely declare that tap is not the new click, a claim I’ve heard thrown around. Anyone who believes it lacks an understanding of, and respect for, how to approach different problems and search for the best method of interaction between a user and a system.

A systematic approach to gesture integration

Most systems utilizing touch are purely touch based, with no additional methods of interaction. This sequesters touch from other interactions, making it more of a burden for users to learn. When a new behavior is introduced into a working knowledge system, it can be easier to absorb. In their recent laptops, Apple has incorporated touch into their behavior and input systems through the track pad. In doing so they have managed to introduce and teach touch and gesture behaviors through a method users already accept (the track pad). In addition, they are beginning to train people to move between input modes, from track pad to gesture to keyboard, depending on the task. These types of associations allow for a better learning and input experience. On the flip side, the gesture actions are secondary to the main system, so they can be ignored fairly easily. It will be interesting to see whether this makes gesture and touch easier to adopt, or whether people will disregard them.

Top image by pinksherbet

Joe Fletcher

Joe Fletcher is currently an associate creative director at frog, and previously a design lead at Microsoft. After graduating college in 2001 with a degree in Communication Design, he taught school before moving back into the design field.

6 comments on this article

  1. Interesting article Joe. It looks at touch input from a different perspective than most of the things I’ve read about it so far.

    The statement that something can be so compelling that people will forget its inadequacies is applicable to a lot of new technologies. I believe it’s partially true. When something new emerges that is perceived as being ‘really cool’, people forget about inadequacies. Though this is only temporary: once it’s a bit more established, people start viewing it from a more critical perspective.

    I’m looking forward to your future posts.

  2. Joe Fletcher on

    Thanks Dennis, I’m striving to write what others seem not to mention 🙂 Working in the field has given me a lot of content, so there will be more posts soon.

    Totally agree with the comment that people *can* forget inadequacies at first with new technology, but as it settles in, they become more critical. My issue is that I don’t want touch to be discarded because of its flaws as the technology becomes more pervasive. It’s a great method of interaction; it just needs to be treated in the proper way.

  3. @ Joe,

    I completely agree with you about the limitations of multitouch. It has been bandied about as the saviour of everything… yet there are intense fatigue issues related to tabletop and wall-mounted solutions.

    I do question Photoshop as an example of a poor application, though, especially when used with a mirrored monitor.

    Most interaction in Photoshop isn’t gross manipulation. It is pretty precise and there are a number of ways to handle these issues, ranging from a virtual “banjo pick” to a magnifier window.

    There are a number of tasks that involve moving one’s arm across a screen, such as drawing long lines. For the most part, these issues can be solved through an efficient paradigm for window sizing, a bimanual interface and a strong contextual menu system that is designed to provide palm support. It’s a shame that these components didn’t make it into the first rev. of the Surface SDK. Showing the potential for productivity apps would have significantly affected market adoption.

    Today, concept illustrators often will shrink windows in Photoshop to increase the accuracy of their linework when they make strokes.

    One aspect of NUI that could make Photoshop really shine is the manipulation of curves. Recently, my team has had great success with a demo that manipulates and animates volatility surfaces.

    Hope all is well.

  4. Pingback: Bookmarks 18th-22nd February « Love to learn

  5. Erik on

    Thanks for the perspective Joe. I entirely agree that a shiny, new interaction method when applied incorrectly will quickly lose that shine. It’s quite similar to when Hollywood studios try to use computer graphics or animation on a bad script. The foundation is faulty, and no amount of decoration can hide it… at least for long.

    To that end, have you or anyone else run across a matrix of sorts that clearly illustrates the known strengths and weaknesses of the various input mechanisms you mentioned? This could come in very handy for interaction designers to be able to quickly show other stakeholders the more or less promising approaches to take.

  6. Erik: Thanks for the comments! I actually do have a slide I use in my talks that illustrates some of the pros and cons of each input system. The reason I didn’t include it here is that it’s not exhaustive. I use it to illustrate a point during my talks, but it doesn’t cover every pro and con, so I decided to leave it out. I’ll be writing a few more posts on touch, so I’ll see what to include going forward.

    Jonathan: Hey, what’s going on, haven’t talked to you in a while. Interesting take on Photoshop. What grabs my attention, based on what you’ve said, is that people love to paint on Surface… and other platforms with touch. The freeform aspect makes it a lot of fun. However, in Photoshop, as you mention, it’s not about gross manipulation, so you would have to solve the fat-finger issue depending on the task (as you point out some methods). It’s all about the situation & context. For some tasks in Photoshop a stylus would be best, for others a mouse, for others a finger. It’s a fascinating problem to think about how you move between those in a single app.