Motion and The Clay of Interaction Design


I am in constant pursuit of the “clay” of interaction design (IxD). Even if that clay is intangible, if we are to consider ourselves a true design discipline there must be something that we are manipulating. Once we understand what that something is, we will be better able to communicate to all our stakeholders what it is the interaction designer designs. One possible property of this “clay” may be motion, or movement.

For almost all interactions we place our body in motion; even speaking requires muscles to move. A great deal of work has been done on motion as an aesthetic quality directed toward an audience, even if that audience is only perceived. What I’m interested in is motion as an aesthetic regardless of any audience, perceived or real. The question I ask is whether certain movements simply feel better than others at an aesthetic level, and further, whether that perception is shaped by other interacting factors.

The Foundations: A Recap

A couple of years ago I started positing that there are foundational elements to IxD. If we are to discuss material and medium in IxD, there must be properties we can use to describe, differentiate, and even qualify what it is.

Why I pursue foundations as a concept is strongly influenced by Rowena Reed Kostellow (founder of the Pratt Institute’s Industrial Design Department) and her six foundations for three-dimensional design: line, luminance & color, space, volume, negative space, and texture. These foundations don’t exist for their own sake. They serve two important requirements for the education and practice of design—teaching craft and providing a basis for criticism.

Another growing influence is the work of Bill Verplank. His three areas of concern for the interaction designer are articulated beautifully in the videotaped lecture he gives in Designing Interactions for his former colleague Bill Moggridge. In it he suggests that the interaction designer is concerned with three questions that all start with “How do you …”: how do you do, how do you feel, and how do you know. It’s worth the watch.

What is unclear to me from Bill’s explanation is how I manipulate things to achieve the outcomes he describes. Further, he seems to discuss only the end result, the point of interface that people interact with. This does not map to my idea of what interaction design is. For me, interaction design supports the interface by defining both the desired behavior of a product or service and the desired behavior of the people who will interact with that system.

So, with this in mind I’ve been working out a collection of foundations that I believe make up the “clay” of how to do just that. I have three original elements: Time, Metaphor, and Abstraction.

  • Time is in many ways the most multifaceted of the three. It breaks down into the following attributes: pacing, rest, duration, frequency, and attention. These properties combine to create a relative sense of time among the people using the system, the same way one experiences anything.
  • Metaphor is related to what Richard Buchanan calls the “Poetics of Design”. It is the way we use analogy as a bridge between the intangible complexities forged through digital technologies (and other complex, abstracted systems such as services) and the tangible world that our senses and cognitive abilities evolved to inhabit.
  • Abstraction is really a value property. It relates to the combined physical and cognitive activities that take place to initiate an activity and to perceive that it has occurred.

The rest of this article, though, is about a new type of foundation that I alluded to when I presented at Interaction 09: motion, or movement.

Background on Motion

We are using a larger variety of motions with our primary computing devices than ever before. Sometimes the devices themselves are in motion, as when we shake an iPhone to initiate an undo; sometimes we are in motion and our devices sense the movements we make. The tap, which previously mapped almost exclusively to a mouse-click, has been extended with new gestures like pinch, flick, and swipe. Like the ubiquitous mouse-click, these gestures are used in a variety of contexts that change their meaning and emotional register. Mouse down, move, mouse up is commonly called “drag & drop”. How we combine movements within specific contexts can affect how we interpret their interpersonal meaning and the feelings we associate with them.

One aspect of motion and movement comes from dance and martial arts. I love to dance and I used to practice both Tae Kwon Do and capoeira (two fairly different martial arts). Dance and martial arts require a practitioner to be keenly aware of how they move in the world. Yes, you can say this is about balance and agility, but it is also about understanding what brings about balance and agility. It also forces you to understand your physical place in the world relative to everything around you. To me, spatial awareness is to motion what attention is to time.

spatial awareness is to motion what attention is to time

I spent more focused attention on my practice of capoeira as an adult. In doing so I quickly realized that how I felt emotionally while doing a movement directly correlated with whether the movement itself was successful. Watching capoeira as an audience member, I noticed similarly that beauty occurred within the success of those playing. (You play capoeira rather than fight it because of its history as a covert means of learning self-defense under slavery in Brazil.)

Compare the act of moving a file from one container to another with the act of panning a map. The motion is almost the same, but there is a clear difference that affects the aesthetic quality. The level of precision required for panning a map is substantially less than that of file-folder management, depending on the level of graphic resolution and other factors related to Fitts’s Law. The motion of panning can in fact have a comparable flick-like quality to it, especially when the user knows they are several lengths of motion away from their desired target. The targets themselves are usually approximations as well. Applying Fitts’s Law to this activity, an approximate target is the cognitive equivalent of a much larger absolute target.
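Fitts’s Law makes this intuition concrete: predicted movement time grows with the log of distance over target width, so an approximate target with a large effective width is reached far faster than a small precise one. A minimal sketch in Python, using the Shannon formulation (the constants `a` and `b` here are hypothetical, not measured values):

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.1):
    """Predicted movement time (seconds), Shannon formulation of Fitts's Law.

    a and b are device/task constants; the values here are assumed
    purely for illustration."""
    return a + b * math.log2(distance / width + 1)

# Dragging a file onto a small folder icon: a precise 1 cm target, 20 cm away.
file_drop = fitts_mt(distance=20, width=1)

# Panning a map: the "target" is approximate, behaving like a much larger one.
map_pan = fitts_mt(distance=20, width=10)

print(file_drop > map_pan)  # the precise drop takes longer
```

The only thing the sketch claims is the relationship: widen the effective target and the index of difficulty, and hence the required precision, falls.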

The Case for both good & bad motion design: Twitter for iOS

What got me to return to thinking about motion almost two years later was my own impressions using the newly released Twitter for iPad app, compared to the other iPad and iPhone apps I use. Specifically, there are new gestures introduced by the designer of both Tweetie for iPhone (now Twitter for iPhone) and Twitter for iPad, Loren Brichter, formerly of Atebits.

With Tweetie, Loren brought to the iPhone world a whole new gesture. Playing on the existing metaphors of gravity & friction in other iPhone gestural interfaces, he used the playful springiness at the end of a list as a spring-loaded trigger to refresh the results of that same list.

The now-standard spring refresh on iPhone

This first gestural innovation was so successful that a host of other applications have adopted it as their primary means of refreshing a result list. For me, the new gesture so permeated my standard use of my iPhone that I now expect it to be available in every app I use. That is a remarkably successful independently designed UI paradigm.
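The spring-loaded character of the trigger can be sketched in a few lines. This is not Loren’s implementation—the class, threshold value, and event names are all assumed for illustration—but it captures what makes the gesture feel mechanical: arming past a threshold, disarming if you pull back, and firing only on release.

```python
# Hypothetical sketch of spring-refresh logic: the list's existing overscroll
# "springiness" doubles as a trigger once the pull passes a threshold.

REFRESH_THRESHOLD = 60.0  # points of overscroll; an assumed value

class SpringRefresh:
    def __init__(self):
        self.armed = False

    def on_drag(self, overscroll):
        # While the finger is down, arm the trigger past the threshold;
        # pulling back below it disarms, so the user can still cancel.
        self.armed = overscroll >= REFRESH_THRESHOLD

    def on_release(self):
        # The refresh fires only on release, like a spring-loaded switch.
        fired, self.armed = self.armed, False
        return fired

r = SpringRefresh()
r.on_drag(20.0)        # small pull: not armed
r.on_drag(75.0)        # pulled past the threshold: armed
print(r.on_release())  # True -> refresh fires
```

The design choice worth noting is that release, not the pull itself, commits the action—the user keeps an escape hatch the whole way down.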

When I opened the new Twitter for iPad app, I was ready for some goodness because of all the hype I had read before downloading. It is very well designed and completely different from its iPhone sibling, taking advantage of the unique properties of the iPad. (For those not familiar with the app, the motions are all shown below.)

The new iPad app puts the details of a single tweet in a right column, but instead of an “X” icon or other “button” to close or collapse the detail view, Loren invented a new gesture/action combo: the user swipes (a common gesture for deletion) to literally push aside the right column, which disappears in portrait view and is squeezed and clipped in landscape view. In doing so he both creates a new motion gesture and uses it to reduce abstraction through what feels like a tangible equivalent of pushing aside a pile of paper on your desk. (Yes, it is also an abstract metaphor, and it has attributes of time associated with it as well.)

Clipping Columns

Understanding Aesthetics of Motion

Using all these apps, I began to develop new critical thinking that I could apply to the foundations mentioned above. Whether it is the original flick-scroll that Apple designed at the launch of the iPhone, the spring-refresh, or the swipe-dismiss, there is a commonality in how the gestures are engaged. The movements share a lack of control and/or precision. This has as much to do with the size of the targets as with the complete lack of a target for ending. These free-ending gestures work because of their ease, but also because their extended range of motion creates an aesthetic quality that more precise and controlled gestures lack. In turn they add to the overall aesthetic quality of the interface through feelings of play & personal satisfaction.
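What makes a flick “free-ending” is that the system, not the finger, finishes the motion: after release, simulated momentum and friction carry the content on. A toy deceleration loop shows the idea (the friction constant, timestep, and stopping speed are all assumed values, not Apple’s):

```python
# A free-ending flick has no end target: the system coasts the content
# onward with simulated momentum and friction after the finger lifts.

def flick_travel(velocity, friction=0.95, dt=1/60, min_speed=5.0):
    """Distance (in points) a flick coasts after release.

    Each frame the velocity decays exponentially, which reads to the
    user as friction; motion stops below a minimum speed."""
    distance = 0.0
    while abs(velocity) > min_speed:
        distance += velocity * dt
        velocity *= friction
    return distance

# A faster flick coasts disproportionately farther -- the "extended range
# of motion" that gives the gesture its aesthetic looseness.
print(flick_travel(2000.0) > 2 * flick_travel(500.0))
```

The user supplies only an initial velocity and lets go; the satisfying arc of deceleration is the interface’s contribution, which is precisely why no ending precision is demanded.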

I’ve also noticed some other key areas when using my iPad that have, compared to my iPhone, triggered similar emotional responses due to gestural differences. In general, scale of motion adds a lot aesthetically. As in dancing, extensions are just more beautiful.

The area I find really different is typing and general tapping. When I compare the typing experience on my iPhone to my iPad, I notice the difference greatly. To really feel it, open an iPhone app that requires data entry. Normally we type on an iPhone with a single-finger peck or by thumbing. I’m a big thumber. Even in the correct form factor (and I’m pretty good at thumbing on my iPhone), there is a feeling of being constrained & swaddled compared to the openness & bounce you feel when typing on an iPad.

I’ve equated this feeling to the scene in Star Trek: Generations when Data, with his new emotion chip, sings while tapping away on his glass console screen. I’ve felt this so strongly that I’ve been searching for a Star Trek console wallpaper, and I’m constantly singing Data’s refrain when using my iPad: “Life forms. Tiny little life forms. Where are you? Da da da Da!” (Star Trek: Generations, 1994).

The added scale of space allows one to almost feel like they are dancing with their two hands on a glass dance floor.

When looking at any system of evaluation it is important not only to look at what works but also to understand what doesn’t. My example here also comes from the iPad Twitter app. It has two other gestures applied to new outcomes, both related to revealing something in a new context without any visual cues that it is there, much like the swipe that reveals actions in the iPhone app.

The first is a two-finger gesture: with two fingers, touch and hold a tweet, then swipe down. If there is a conversation related to the targeted tweet, it will reveal itself.

iPad Twitter Replies—two finger swipe

The other also requires two fingers: a reverse pinch that reveals the detailed view of the tweet.

Pinch Open

Without going into why we need these gestures (I kinda feel they are “easter eggs” more than really usable functionality), they both have properties that lead to their poorer performance in use and in evaluation.

First, because they are two-fingered gestures, it is less likely that a person will discover these behaviors accidentally. People do not regularly use two fingers except in specific, well-understood contexts like zooming. For example, I was recently struggling to figure out how to scroll an inset frame without scrolling the surrounding container. It never occurred to me to use two fingers to scroll. When I finally heard the answer, I thought, “that’s messed up,” and I tested it on five avid iPad users, all of whom failed to figure it out as well and complained that they were having the same problem.

The second problem is more about the reverse pinch than about the downward two-fingered swipe. With the reverse pinch, the fidelity required is just too high. While the ending point is unimportant, starting the gesture may require more precision and higher resolution than the system can handle consistently. For the two-finger swipe down that reveals the conversation, the difficulty is that you need to remember to keep your fingers on the glass or the view will disappear. This leads to constant repetition of the task, lowering its utility. It is just easier to tap once on the tweet and have it reveal itself that way.
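The fragility of that hold-to-reveal rule can be made explicit in a small state sketch. None of this is the app’s actual code—the class and event names are invented—but it shows why the gesture costs so much: the reveal exists only while both fingers stay down, so any lift throws the work away.

```python
# Sketch of why the two-finger conversation swipe is fragile: the revealed
# view only persists while the fingers stay on the glass (rules assumed).

class TwoFingerReveal:
    def __init__(self):
        self.revealed = False
        self.fingers_down = 0

    def touch_down(self):
        self.fingers_down += 1

    def swipe_down(self):
        # The conversation reveals only during a two-finger drag.
        if self.fingers_down == 2:
            self.revealed = True

    def touch_up(self):
        self.fingers_down -= 1
        # Lift a finger and the reveal collapses, forcing the user to
        # repeat the entire gesture just to look at it again.
        self.revealed = False

g = TwoFingerReveal()
g.touch_down(); g.touch_down(); g.swipe_down()
print(g.revealed)   # True while both fingers are down
g.touch_up()
print(g.revealed)   # False: the view has disappeared
```

Contrast this with the single tap, which commits the reveal and then lets go of the hand entirely—no sustained contact, no repetition.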

How to Design for Motion

So what does all this mean for me?

First it means there is a huge opportunity. Loren made a huge name for himself as an accomplished iPhone designer/developer by innovating a new gestural paradigm. It catapulted his app into the limelight and eventually got him “acquired,” in this case by Twitter itself.

I don’t know what methods Loren used to come up with his spring-refresh design, but I can look at the work of Kicker Studio and the case study they published for the gestural TV remote control they designed. What is clear is that sketching & prototyping now require a new methodology. We all need to learn to become solid actors if we are going to design interfaces that require people to move in new ways beyond buttons, pointing devices & keyboards—especially for mobile devices and touch screens.

I’m reminded of the case study in Bill Buxton’s amazing book Sketching User Experiences on how the Palm Pilot was designed: a block of wood & a cut-off pencil, used to play with various forms & feel how various gestures would play out. Bringing our prototypes into the physical is going to be key as we design for mobile gestural platforms. We are going to have to act out scenarios of use and dance out gestures to complete new choreographies. We need to see gestures both as dancer and as audience.

One reason these gestures work is the visual cues for all the states of availability, direction, activity, and completion. Rehearsing the gestures in front of others will prompt questions like: how do you know it will do something? And how do you know when it is complete?

Gestural interface design is still very new. We can deeply appreciate the leadership of Apple, Microsoft, and Google, but there are still many opportunities to innovate in this area. Understanding all four of the foundational elements of interaction design will help you design more solid interfaces & interactions for better overall experiences.

Concluding Thoughts

I am cautious about adding this as a foundation of interaction design because it feels like it might fit within the context of “interactive design” or “interface design”. For now, though, I believe there is a behavioral property that moves beyond the point of interaction itself, instilling behaviors in human beings that become culturally embedded. The motions then become akin to affordances of their own, even though they do not connect to any visually perceived markers. They become expected on one hand, and they imbue an emotional aesthetic all their own.

Related posts:

  • Foundations of Interaction Design article on Boxes & Arrows, and related podcast
  • Revised article on Johnny Holland
  • Interaction09 Motion and Movement video and slides

David Malouf

Professor of Interaction Design at the Savannah College of Art and Design