Designing alarms and alerts

Warning sign for a road-cleaning machine
Is your design resistant to failure? If a worst case occurs, can the user recover and regain trust in your solution?

This article explores the case of warnings, alerts and alarms, and provides an introduction to the important factors in gaining user attention for failures or critical events – and how to deal with them. As designers, we would all like to focus on the “happy path” through our system; but as many users will tell you, one of the biggest annoyances and obstacles to a pleasurable user experience is how a system handles errors and important events out of the ordinary.

Alerts—an interaction design issue

Alerts are used to give the user feedback about important events that need attention for some reason. This may mean errors, failures, breakdowns, or important changes that need action—or interaction. The term “alert” is used here to include different types of significant event feedback, in descending order of criticality:

  • Alarms
  • Warnings
  • Cautions
  • Advisory messages

The goal of the alert is to give users the ability to recover from important events. This is basically an interaction design issue. When we study different cases of system breakdowns, an important commonality occurs. How users respond is often the make-or-break part of the event. Interaction design is a critical factor in empowering the user to do the right thing. Through design, we can facilitate users responding correctly to critical events.
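The ordering of the four alert types above can be made explicit in code. Here is a minimal TypeScript sketch; the type names mirror the list above, while the numeric ranks are an illustrative assumption rather than any standard scale:

```typescript
// Criticality scale for the four alert types, from most to least critical.
// The numeric ranks are illustrative; only the ordering matters.
enum Criticality {
  Alarm = 4,    // immediate action required
  Warning = 3,  // action required soon
  Caution = 2,  // awareness required
  Advisory = 1, // informational
}

// When alerts compete for the user's attention, the higher criticality wins.
function moreCritical(a: Criticality, b: Criticality): Criticality {
  return a >= b ? a : b;
}
```

Encoding the scale once, rather than comparing ad-hoc strings per alert, keeps every part of the system agreeing on which events outrank which.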

For instance, the Chernobyl nuclear accident in 1986 was characterized by three important flaws in the safety system that occurred simultaneously. Functionally, the reactor did not live up to minimum heat resistance requirements. In terms of personnel, the staff lacked sufficient training. But just as importantly, the staff ignored system warnings.

Chernobyl Reactor Accident

The 1986 Chernobyl reactor accident - a case of ignoring warnings

Ignoring system alerts is an interaction design issue—proving an important point: the interaction design can cause the chain of events to occur—or, through the right design, break that chain and help users recover from failure.

Most of us will never work on systems with such a high degree of “criticality.” But we still need to design alerts as feedback to important events, and regardless of the level of criticality, the alerts will determine users’ ability to recover. Thoughtful alert design helps that process; it supports the flow of work. This is why alert design matters.

Make users act, not panic

The first rule of alert design is to allow the display to trigger action, correction or recovery—as opposed to causing panic.

Compare it to a trip down the highway in your car. The kids are in the backseat. Suddenly, the engine compartment catches fire and starts to smoke heavily, with flames coming out the side. You brake quickly and pull over. How do you instruct your kids? What tone of voice would you use? You would probably use a firm voice, right? A raised voice, but controlled. “We need to get out now. Take it easy. Only through the right-hand side door. Don’t run. It’s under control.” Your voice should not cause the kids to panic, only to act, and to act in the right way.

The above analogy is actually used when designing cockpit voice alerts. It is also the reason a female voice is used in most voice alert settings: users are somehow more prone to respond calmly to a female voice than to a male one.

Matching criticality to obtrusiveness

When designing the actual feedback element, we can start by looking at visibility. With different levels of importance, how should the obtrusiveness be scaled? This leads us to another important rule in alert design:

Match the obtrusiveness of the alert to the criticality of the event.

Do not make the alert too obtrusive for trivial caution displays. On the other hand, avoid making the design of important alarms too “silent” to be noticed.

Obtrusiveness can be manipulated through specific design elements. Here are some of the major design aspects to manipulate:

  • Size and placement determine visibility.
  • Colors signal criticality. Need I say more? Use red sparingly, and save it for high criticality alerts and to raise obtrusiveness.
  • Static or dynamic. The eye catches movement. Raise obtrusiveness with animation and movement in general.
  • Sound/voice. Sound is crucial. If the user is not looking at your display device, they will not notice the alert. This is why most alerts are also audible.
  • Repetition: the user might not catch the first alert.
  • Permanence: consider the case of missed alerts. The user might not have noticed the alert the moment it occurred. By raising the amount of time the alert is displayed, obtrusiveness can be raised.
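One way to enforce the matching rule in software is to derive every presentation attribute directly from the alert’s criticality, so an individual alert can never be styled inconsistently. A TypeScript sketch of this idea; the four levels echo the list earlier in the article, and all attribute values are illustrative assumptions:

```typescript
type Level = "alarm" | "warning" | "caution" | "advisory";

interface Presentation {
  color: string;      // red reserved for the highest criticality
  animated: boolean;  // movement raises obtrusiveness
  sound: boolean;     // audible alerts reach users not watching the display
  repeats: boolean;   // repeat only when missing the alert is unacceptable
}

// Single source of truth: presentation is derived from criticality,
// never set per-alert, so obtrusiveness always matches importance.
const PRESENTATION: Record<Level, Presentation> = {
  alarm:    { color: "red",    animated: true,  sound: true,  repeats: true  },
  warning:  { color: "amber",  animated: true,  sound: true,  repeats: false },
  caution:  { color: "yellow", animated: false, sound: false, repeats: false },
  advisory: { color: "gray",   animated: false, sound: false, repeats: false },
};

function presentationFor(level: Level): Presentation {
  return PRESENTATION[level];
}
```

With this shape, adding a new alert anywhere in the system cannot accidentally give a trivial advisory an alarm-red, repeating, audible treatment.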

An example of intelligent alert priority design is the cockpit warning and caution unit. Alert criticality is distinguished by sound, placement, color and size, yet integrated into a single sorted “inbox” concept, displaying all alerts in one location.

A380 Cockpit

Cockpit of an A380.

Dismiss and act—or interact

Depending on which system provides feedback, the user might have options to interact, or just options to “dismiss and act accordingly.” In less critical alerts, provide options for interaction: “Fix”, “Retry”, “Close valve”, “Show list”. This keeps the locus of control with the user and builds trust.

When an alert can only be dismissed, there is no particular action to feed back to the system. In this case, combine the alert with a suggested action: “Engine is on fire. Exit car to the right.”
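The distinction between interactive alerts and dismiss-only alerts with a suggested action can be sketched in TypeScript as follows. The interface and function names are hypothetical, not from any particular UI toolkit:

```typescript
// An alert either offers interactive responses that feed back into the
// system, or—when it is dismiss-only—carries a suggested action for the
// user to perform themselves.
interface AlertAction {
  label: string;    // e.g. "Retry", "Close valve"
  run: () => void;  // feedback into the system
}

interface Alert {
  message: string;
  actions: AlertAction[];    // empty when the alert is dismiss-only
  suggestedAction?: string;  // e.g. "Exit car to the right."
}

// Dismiss-only alerts append the suggested action to the message.
function displayText(alert: Alert): string {
  return alert.suggestedAction
    ? `${alert.message} ${alert.suggestedAction}`
    : alert.message;
}
```

For example, `displayText({ message: "Engine is on fire.", actions: [], suggestedAction: "Exit car to the right." })` yields the combined message from the paragraph above.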

Cry Wolf Syndrome

The most common pitfall in alert design is to handle obtrusiveness and repetition incorrectly. If the alert is displayed too frequently, the user becomes blind to it and ignores it. It also lowers trust in other types of alerts. “It does this all the time. I never listen to it anymore.”

The best way to avoid Cry Wolf Syndrome is to make sure alerts are easily dismissible, occur and repeat only when necessary, repeat at the lowest frequency that still ensures they are noticed, and are no more obtrusive than their criticality warrants.

One design solution to Cry Wolf Syndrome is to create display permanence with less obtrusiveness for the same alert. This can be done by automatically “docking” the alert as opposed to repeating it. The user will not be forced to interact with the alert—and possibly dismiss something of importance. Another strategy is to simply avoid the alert altogether. Consider whether the action or alert is necessary. Can the response or interaction be automated—or avoided? This will typically prove to be the best solution to fight user blindness. Make every alert count.
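The docking strategy can be sketched as a small alert manager that shows an alert obtrusively the first time and docks subsequent occurrences instead of repeating them. A TypeScript sketch; the class and method names are illustrative assumptions:

```typescript
// Cry Wolf mitigation: instead of re-raising a repeated alert, "dock" it
// into a persistent, low-obtrusiveness area after its first occurrence.
class AlertManager {
  private occurrences = new Map<string, number>(); // alert id -> count

  // Decides how the alert should be shown this time: an obtrusive popup
  // on first occurrence, then quietly docked on repeats.
  raise(id: string): "popup" | "docked" {
    const count = (this.occurrences.get(id) ?? 0) + 1;
    this.occurrences.set(id, count);
    return count === 1 ? "popup" : "docked";
  }

  // Once dismissed, the next occurrence is treated as a new event again.
  dismiss(id: string): void {
    this.occurrences.delete(id);
  }
}
```

Because the repeat is docked rather than modal, the alert stays visible (permanence) without forcing the user to interact with it—and without training them to dismiss alerts reflexively.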

A380 photo by Adam CC BY-SA 2.0

Mikkel Michelsen

Mikkel Michelsen is an interaction designer with 10 years of experience in designing digital user interfaces. He has worked on projects in both the US and Europe, designing systems for industrial use, aviation and financial trading. He is currently employed at www.systematic.com, a CMMI5 rated company building mission critical software for healthcare and defence, where he designs user experience in the area of digital mobile defence applications for troops in the field. He has pioneered the design of a tablet-based touch screen battle management system for mobile ground and naval units, extending it into soldier-worn systems.
