From now on we’ll be sharing with you some of the videos we’re collecting on Johnny TV. For this first installment we would like to show you Don Norman’s lecture from the Stanford HCI Seminar lecture series. In it he talks about some of the ideas he covers in his book, ‘The Design of Future Things’.
Norman observes that ‘intelligence’ is increasingly being built into our cars, our appliances, and many other things we interact with. But he points out that this ‘intelligence’ is not very good at handling the unexpected. As designers, we try to anticipate every possible eventuality, but we can’t anticipate everything. The result is that the very behaviors designed to save us from ourselves lull us into a usually-true-but-sometimes-false sense of security: things go wrong less often, but when they do, neither the system nor the user is prepared to handle the failure.
I greatly enjoyed this lecture and got a lot out of it. Norman is an entertaining speaker who illustrates his ideas with lots of compelling examples. He doesn’t offer any hard-and-fast rules for when automation should be used and when it should be avoided, and rightly so, in my opinion. These matters are far too complex for simple rules.
The most important thing I took away from the lecture was this: automation is great when it is self-contained (think Roomba). But when control is shared between the user and the system, we must be very careful, thinking hard about the appropriate level of automation and feedback (as well as the apparent precision of the information the system provides).