Phenomenology: invisible interfaces are a myth

Start changing your way of thinking.

Do you know about phenomenology? If you’re an interaction designer, you should. It’s a branch of philosophy that will change the way you work, especially if you’re used to the idea of ‘invisible interfaces’. But it’s highly likely you don’t, because until now phenomenology has been one of academia’s best-kept secrets. I hope to change that by giving you a quick guide to this thought-provoking field and its relevance to interaction design.

The basics
Phenomenology is, as you might guess from the name, the study of phenomena. To be exact, according to the Stanford Encyclopedia of Philosophy, phenomenology is the study of structures of consciousness as experienced from the first-person point of view.

Though it’s a branch of philosophy, it also owes a lot to psychology. In particular, if you know about Gestalt psychology you’ll see a lot of parallels. However, the most important difference is that phenomenology does not treat a person and the objects they perceive as completely separate (for those who know Descartes: the subject-object paradigm). Instead, the two are fundamentally linked: we are never just conscious, we are always “conscious of” something.

Phenomenology is the study of structures of consciousness as experienced from the first-person point of view.

This “conscious of” becomes more relevant when Martin Heidegger explores tool use. In an example from “Being and Time” known as “Heidegger’s hammer”, he describes the hammer shifting between being “present-at-hand” and “ready-to-hand”. When we pick up the hammer, it’s “present-at-hand”: we can feel its weight and texture, and perceive it as something separate from us. Once we start using it to hammer a nail, it becomes “ready-to-hand”: we act through it, and in a way forget that it’s there. Once we stop, it’s “present-at-hand” again. What’s important is that the tool disappears through use but can always come back.

The body comes to the fore in the writings of Maurice Merleau-Ponty. In his book “The Phenomenology of Perception”, he argues that we perceive the world as we do because of our bodies (two eyes facing forwards, standing upright, etc.). What’s more, our perception of our body isn’t necessarily the same as our body itself: when we use an object, it becomes part of our body:

To get used to a hat, a car or a stick is to be transplanted into them, or conversely, to incorporate them into the bulk of our own body. Habit expresses our power of dilating our being in the world… (p. 143)

This may seem pretty much the same as Heidegger, but there’s an important difference: the object does not disappear. Instead it becomes part of us. Take the example of using a car. No one who drives would ever say the car disappears. Instead, through sitting in the seat, putting your hands on the steering wheel and your feet on the pedals and starting the motor, it becomes an extension of you. The car doesn’t become invisible, your bodily awareness expands to include the car.

The final aspect of phenomenology worth touching on concerns learning. Heidegger and Merleau-Ponty both agree that there is no such thing as a priori knowledge, knowledge that exists prior to experience (it may sound strange now, but a lot of philosophy assumed this!), and that instead we learn through doing, and in doing so create flexible ways of carrying out actions. Hubert Dreyfus describes these as ‘purposeful without purpose’.

How it is relevant
Over the last few years, the way people relate to digital devices has completely changed. Thanks to the popularity of laptops and smartphones, people are no longer limited to one eye, one finger and their ears when interacting with their computers. (See my previous post for more about that paradigm.)

How do we design for this new way of using devices? In the desktop era, many designers used semiotics (the study of signs) to inform their work. But now, as we use more of our bodies to manipulate computing devices, we need another framework: phenomenology. It helps us get away from the idea of the “invisible interface”, and instead look at how our interactions are “embodied” (Paul Dourish) or “coupled”.

Taking a phenomenological approach, it’s easy to see why the Wii has been such a runaway success: the controller is pretty much Heidegger’s hammer gone digital. (However, it’s worth noting that it is almost a gestural device, which is something different altogether.) Portable devices require a bit more thought: unlike Heidegger’s hammer or Merleau-Ponty’s blind man’s stick, they have the extra layer of the virtual domain. In these cases, phenomenology can consider how the physical interactions relate to the virtual ones.

Schultze and Webber explain that with the BlackBerry, it’s the action of taking it out of its holster:

When you unholster a Blackberry, you don’t need to turn on or unlock the keypad. Perhaps this makes it easier to “act through” the physical device to directly manipulate the data of emails and appointments.

From a more computational perspective, phenomenology also helps us understand how we learn through doing. Hubert Dreyfus, arguably the world’s best interpreter of Heidegger and Merleau-Ponty, showed this in the 1970s when his book “What Computers Can’t Do” correctly predicted the failure of symbolic artificial intelligence. He used phenomenology to show that it was wrong to assume people learn through rigid systems of knowledge.

More information
For those who want a quick way into phenomenology, Hubert Dreyfus has a number of readable articles (his paper “The Current Relevance of Merleau-Ponty’s Phenomenology of Embodiment” is particularly useful). For more of a design bent, Paul Dourish’s (albeit academic) book “Where The Action Is” looks at social computing and tangible interaction alongside phenomenology. Those interested in mobile communications should look at Myerson’s “Heidegger, Habermas and the Mobile Phone” (though the book is beginning to date as mobile phones become more like computers).

For those who want to dive deep into phenomenology, Heidegger and Merleau-Ponty are where to go: Heidegger’s “Being and Time” lays the groundwork, while Merleau-Ponty’s “The Phenomenology of Perception” deals with the main area Heidegger did not cover, namely the body. However, be warned that neither is easy going (Dreyfus suggests that Heidegger is dense to the point of being cryptic and that Merleau-Ponty is badly written!). If you choose to go that far, for Heidegger at least there is a great resource to help you: Hubert Dreyfus’s Berkeley lectures are freely available from Berkeley or iTunes.

Vicky Teinaki

An England-based Kiwi, Vicky is doing a PhD at Northumbria University into how designers can better talk about touch and products. When not researching or keeping Johnny Holland running, she does the odd bit of web development, pretends her TV licence money goes only to Steven Moffat shows, and tweets prolifically about all of the above as @vickytnz.

10 comments on this article

  2. Luke on

    Eh, semantics. The “the object does not disappear” line of reasoning was a bit embarrassing, even. That’s not what anybody means by “invisible design”, and if that’s what the author of the article took from it… well… not a very bright person.

  3. Robert on

    I’m not sure if I got this right. But isn’t it just about mapping, affordance and feedback? If the mappings and affordances are correct, you don’t need an interface with labels or signs. And you can easily make the object part of yourself.

    I mean, if you use a hammer, it can only become part of you if it affords smashing (the movement you would make with your hand if your hand were hard enough). If a ‘hammer’ required you to say ‘smash’ to push a nail into wood, it might work – but it would be irritating and you would probably not develop a sense of being one with the tool.

    The Wii controller, on the other hand, is an example with a flaw: there are a lot of games that really fail at mapping the motions correctly. For instance, in the first-person shooter Red Steel you have to shake the Nunchuk controller up and down to reload your weapon – that’s OK. But you have to do the same motion to open a door – instead of pushing or pulling, which would be a better mapping. So if you are standing in front of a door, but not close enough, you reload your weapon instead of opening the door. Or if you want to pick something up from the ground, you have to move the Nunchuk down. If you bring it up again too fast, you reload your weapon instead of picking things up. So even though you start to use the Nunchuk as an extension of your body, it doesn’t work, and you always have to think about how you have to move your hand in the current circumstances.

    So I think what you describe is just the result of good mapping, affordance and the right feedback. But as long as we cling to visual interfaces, displaced input devices (mouse/keyboard) or touch screens with no haptic feedback, that’s hard to achieve.

  4. Vicky Teinaki on

    Some interesting feedback.

    @Luke: while I agree that no one ‘means’ for an interface to be invisible, I’d argue that for designers it has been the default way of thinking for a while, firstly because of the few and rigid methods of interfacing available to us (one eye, two ears, one finger, as discussed earlier), and secondly because of the ubiquitous technology movement (microscopic computers everywhere!). I think phenomenology can remind us that that isn’t the case, in the same way semiotics reminds us of the difference between sign, signifier and signified.

    @Robert: thanks for your observations of using the Wii – I wasn’t aware of the difficulties in using it, though thinking about it that makes sense. One of the problems with digital devices is, as you’ve said, that they have to be able to incorporate a wide range of actions. In fact, the Wii has pros and cons – on the one hand, it offers a far richer set of actions and thus mastery, but on the other hand, it may unsuccessfully mimic existing metaphors, thus making it ‘unready-to-hand’ (a broken tool).
    Onto your main question. You’re spot on about it being about affordance and feedback. (A lot of people think ‘affordance’ comes from Don Norman, but it’s actually from perceptual psychologist James J. Gibson.) The only thing to watch out for is ‘mental mapping’: there’s a tendency for that term to become visual, metaphorical and generally cerebral. Instead, phenomenology says to focus on what you do rather than what you think. Your example about the door is actually one that Dreyfus talks about (though not with a Wii) – your hand forms the shape of the door handle without you consciously thinking about it.
    Hope that makes sense – your comments have given me a bit to think about.

  5. Hi Vicky,

    I like the article and especially agree with your characterization of Merleau-Ponty. I always considered him the optimistic version of Sartre. I also think you are on target in focusing on the failure of computing to disappear, going seamless and invisible. Gestural interfaces like the Wii offer intriguing potential in controlling the seams involved in interfaces. For an overall view of gestural interfaces, particularly in their relation to ubiquitous computing, I’d suggest reading Dan Saffer’s “Designing Gestural Interfaces” (2009).




