In my first article I laid some groundwork, discussing touch as part of an ecosystem and the matrix of inputs available for people to interact with systems. In this article we’ll make things a little more complicated by adding in different technologies, form factors, and the cardio effect of touch.
Even when narrowing the input method solely to touch, designers are still faced with a complex matrix of issues that they may not yet be aware of. Let’s consider orientation as the first variable in this equation. Are you designing for a horizontal, vertical, or tilted system? In the media today, such as on CNN, we mostly see vertical touch walls. A few years ago you could barely find companies developing these; now the field is fast filling up with companies using different technologies, sizes, price points, and interaction techniques to build their systems. Until touch walls are truly commoditized, I don’t see them moving much beyond novelty and “cool” factor. Not that I’m complaining; anything that gets touch more into the public eye is good business for me. Still, as designers we must not become jaded to the fact that most people have not used touch walls, or touch computers beyond an airport kiosk or ATM, despite what we may think.
Continuing on vertical systems, I’m not sure how many readers here have tried one, but while they are cool for show-off factor, there is one key piece of information people forget: holding your arms up to use a vertical touch system makes you tired. It’s not easy, and it’s not much fun after a few minutes. That’s why none of these technologies are made for serious personal use at this point. Besides the cardio workout, the other issue is accuracy. When holding their arms out, people tend not to be very accurate because they must fight gravity. A horizontal system can produce better accuracy in that respect [although there is still the fat-finger concern], but may suffer from wrists or elbows acting as accidental inputs. Extrapolating some basic starting points: apply Fitts’s law, make targets large, plan for accidental contacts, and favor transient tasks on vertical orientations.
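To make the Fitts’s law point concrete, here is a minimal sketch of the Shannon formulation, MT = a + b · log2(D/W + 1). The constants a and b below are illustrative placeholders, not measured values; in practice they must be fit empirically for a given device, orientation, and user population.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target of a given
    width at a given distance, per the Shannon formulation of
    Fitts's law. The a/b constants here are made-up placeholders."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Widening a target lowers its index of difficulty, so larger
# touch targets are faster (and more forgiving) to hit -- which
# matters even more when users are fighting gravity on a wall.
small_target = fitts_movement_time(distance=300, width=20)
large_target = fitts_movement_time(distance=300, width=80)
assert large_target < small_target
```

The takeaway for vertical systems: since arm fatigue and gravity degrade pointing precision, budget for a larger effective width than the same design would need on a desk.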
All that from the simple matter of vertical versus horizontal, and we haven’t even begun yet. In addition to orientation, designers must start to think about technology and form. With Windows 7 introducing touch, designers will now see touch becoming a more common interaction commodity on personal computers. The mouse and keyboard have the advantage of being standardized technologies. A mouse is always represented by a cursor, the same way every time. The keyboard has the same keys and the same layout [within a given culture]. Despite different technologies and hardware manufacturers, there are constants in these tools. Unfortunately, touch, manipulation, and gestures do not enjoy the same consistency. Consider the chart below.
This is a small portion of what is available on the market for touch PCs. Since there isn’t a standard way to provide touch input, different companies are presenting their methods in separate, proprietary ways. This means tough problems for anyone designing touch and gesture interactions across various systems; in other words, anyone designing a touch application for Windows 7. Beyond the basics, like the number of simultaneous touch inputs a system can accept, the difference in technologies means all touch inputs are not created equal. A capacitive system, like the iPhone, relies on the conductive properties of a person’s body; it needs a person to make a touch. An infrared camera system, like the HP TouchSmart, relies on an object breaking a “blanket” of infrared beams, which means an object (not necessarily a person) can create a touch contact point. Additionally, the varying number of simultaneous touch inputs means gestural interfaces can be more difficult to design and develop. Adding to these issues, people who design cross-platform applications must account for direct touch on Windows 7 and indirect trackpad gesture interactions on an Apple MacBook.
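One pragmatic way to cope with this fragmentation is to model each device’s capabilities explicitly and let the application degrade gracefully, rather than assuming every screen behaves like an iPhone. This is a hypothetical sketch (all class and attribute names are my own invention, not any real Windows 7 or vendor API):

```python
from dataclasses import dataclass

@dataclass
class TouchDevice:
    """Hypothetical abstraction over dissimilar touch hardware."""
    name: str
    max_touches: int              # simultaneous contacts supported
    requires_human_contact: bool  # capacitive vs. IR-camera sensing

    def supports_gesture(self, fingers_needed: int) -> bool:
        # A pinch needs two simultaneous contacts; a single-touch
        # screen should fall back to buttons or on-screen sliders.
        return fingers_needed <= self.max_touches

# Illustrative capability profiles, not measured specifications.
capacitive = TouchDevice("capacitive phone", max_touches=5,
                         requires_human_contact=True)
ir_camera = TouchDevice("IR-camera all-in-one", max_touches=2,
                        requires_human_contact=False)

assert capacitive.supports_gesture(2)      # pinch-to-zoom is fine
assert not ir_camera.supports_gesture(3)   # three-finger gesture: fall back
```

The design choice here is the same one the chart implies: treat touch capability as a queryable property of the device, and design every gesture with a lower-fidelity fallback.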
I wish I could offer great suggestions or solutions for how to tackle these problems, but we’re searching for the answers as well. For now I’ll just leave you with the thought that designing systems for touch will get harder before it gets easier, and I look forward to it. New challenges, new interactions, and adapting the world’s behavior to a new type of input. We have some of the toughest problems around. This isn’t the mouse, this isn’t the keyboard, this isn’t controlled. It’s design in the wild, sometimes out of our control… how will we all work to solve it?
For additional discussion and insight on these topics you can view my interview for the MSDN Channel 9 site.