Debunking the Myths of Online Usability Testing

I love the TV show MythBusters because it challenges what I think is true. In the show, popularly held myths are tested in an entertaining and somewhat scientific way. My favorite part of the show, other than the explosions of course, is when my beliefs turn out not to be true. This keeps me open-minded and focused on reality. I also enjoy being on the other end – exposing myths as unfounded. That is the perspective I am taking for this article. Specifically, I will highlight five common myths about online (unmoderated) usability testing, and explain why I think each one is untrue.

The motivation for this article is to help UX researchers keep an open mind about online usability testing. Some researchers have been using this approach for years and find it useful (in certain situations). Others are new to it, and want to learn more about its strengths and limitations. Finally, some UX researchers have already formed an opinion about online usability testing and deemed it not useful, for a variety of (unfounded) reasons. I hope that by exposing these myths, we (as a UX community) can evaluate this tool on its actual merits.

1.    There is too much noise in the data to be trustworthy

Perhaps the most common myth about online usability testing is that the data are not reliable. Some people say that participants don’t try very hard, get distracted in the middle of the study, race through the study as fast as they can to get the incentive, or even misrepresent themselves to qualify for the study (again, to get the incentive). These behaviors do happen, generally in 5–10% of all cases. But there is good news!

There are some very useful techniques for cleaning up the data. Identifying (and removing) “mental cheaters” is not very hard, because these folks behave in highly predictable ways. “Flat-lining” (answering all 1’s or all 5’s on a 5-point rating scale) is one giveaway. “Speed traps” (see question 6 in the figure below) catch them another way, by checking that they are actually reading each question. Consistency checks can be built in by re-wording a question and comparing the two responses. Removing unrealistic task times is also easy to do, and handles the participant who went home for the weekend in the middle of a task. Finally, screening questions can be written to minimize the number of fraudulent participants (those who misrepresent themselves) who take part in the study.
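To make the cleanup concrete, here is a minimal sketch in Python of how flat-liners and unrealistic task times might be screened out. The participant records, field names, and time thresholds are all invented for illustration; they are not from any particular tool.

```python
# Hypothetical participant records: ratings on a 5-point scale,
# plus task completion time in seconds.
participants = [
    {"id": "p1", "ratings": [4, 2, 5, 3, 4], "task_time": 95},
    {"id": "p2", "ratings": [5, 5, 5, 5, 5], "task_time": 88},     # flat-liner
    {"id": "p3", "ratings": [3, 4, 2, 4, 3], "task_time": 7},      # implausibly fast
    {"id": "p4", "ratings": [2, 3, 4, 3, 2], "task_time": 86400},  # left for the weekend
]

def is_flat_liner(ratings):
    # Gave the identical rating to every single question.
    return len(set(ratings)) == 1

def unrealistic_time(seconds, low=15, high=1800):
    # Outside a plausible window for this (hypothetical) task.
    return not (low <= seconds <= high)

clean = [p for p in participants
         if not is_flat_liner(p["ratings"])
         and not unrealistic_time(p["task_time"])]

print([p["id"] for p in clean])  # ['p1']
```

Real studies would tune the thresholds to the tasks at hand, but the point stands: the screening logic is mechanical and takes minutes to apply.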

Bottom line: the data from an online usability test can be just as reliable as data from a traditional lab test. The only difference is that a little cleanup needs to happen before jumping into the analysis phase.

2.    You can only collect quantitative data

I am always surprised when people tell me that online usability testing is only useful for collecting basic metrics such as task success, completion times, or satisfaction. For some reason, people assume that just because you have a large sample size, you can only collect quantitative data; that online usability studies simply don’t make sense if you want qualitative data about the user experience. This could not be further from the truth.

One of the beauties of online usability testing is that researchers can collect a rich set of both quantitative and qualitative data about the overall user experience. For example, if users have difficulty with a specific task, they can provide comments on why they had difficulty, or on what they were expecting to happen at different points along a transaction. Qualitative data, usually in the form of verbatim comments, can be collected at any point in the experience. Users can even be prompted to provide comments when they exhibit certain behavior, such as abandoning a transaction or deviating from a desired navigation path.

Verbatim comments are not only easy to collect, but they are becoming much easier to analyze. There are many tools available now that pick out patterns in verbatim responses. Word clouds (see below) are one way to get a quick sense of the key patterns in verbatim responses.
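For a sense of how simple this kind of pattern-finding can be, here is a rough Python sketch of the word-frequency counting that underlies a word cloud. The comments and the stop-word list are made up for illustration:

```python
import re
from collections import Counter

# Hypothetical verbatim comments from an online study.
comments = [
    "The checkout button was hard to find",
    "I could not find the checkout link",
    "Checkout worked fine but shipping options were confusing",
]

# Lowercase, tokenize, and drop a few common stop words before counting.
stop_words = {"the", "i", "to", "was", "but", "were", "not", "could", "a"}
words = [word
         for comment in comments
         for word in re.findall(r"[a-z']+", comment.lower())
         if word not in stop_words]

freq = Counter(words)
print(freq.most_common(3))  # "checkout" and "find" top the list
```

A word cloud is essentially this frequency table rendered with font size proportional to count, which is why even a quick script like this surfaces the same headline patterns.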

3.    Online usability testing does not work well during the design phase

Most people think of running an online usability study only after the product has been built. The assumption is that it only makes sense as part of a summative evaluation and doesn’t really fit into the actual design phase; that online usability testing can’t inform the design, takes too much time, or is too expensive to conduct while the design is still in flux. These are all untrue.

Online usability studies can be set up within a few hours, and the data collected just as quickly. We have set up, launched, and analyzed data from an online study within the same day. Not only can a study be set up quickly, but it can help answer questions that typically come up in the design phase, such as preferences around navigation method, labeling, or look and feel. These and many other design preference questions cannot be reliably answered with a small sample size. Sometimes we need to gather data quickly from hundreds or thousands of users in order to validate significant design decisions.
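To see why sample size matters for these preference questions, consider a made-up study in which 58% of participants prefer design A over design B. A quick normal-approximation test against an even 50/50 split shows the result is inconclusive with 30 participants but decisive with 1,000:

```python
from math import sqrt, erf

def preference_p_value(prefer_a, n):
    # Two-sided normal-approximation test of whether a preference
    # share differs from an even 50/50 split.
    p_hat = prefer_a / n
    se = sqrt(0.25 / n)  # standard error under the null p = 0.5
    z = (p_hat - 0.5) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical result: 58% of participants prefer design A.
for n in (30, 1000):
    p = preference_p_value(round(0.58 * n), n)
    print(n, round(p, 4))
```

With 30 participants the p-value lands well above 0.05, so the 58/42 split could easily be noise; with 1,000 it is effectively zero, which is the kind of confidence a significant design decision may need.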

4.    It only works with websites

One of the things that researchers quickly point out is that online usability testing can only be used for evaluating websites: it simply doesn’t work, they say, for software, voice response systems, mobile devices, consumer electronics, or toaster ovens. While the vast majority of online usability testing does focus on websites, it doesn’t have to.

Over the years, we have used an online usability testing approach to evaluate non-web interfaces. Conceptually, it is still the same method: the participant is asked some questions, given some tasks, and provides feedback about their actual experience. The only real difference is that the participant’s behavior is not being tracked automatically. Still, a lot of useful data can be collected about the experience, such as whether participants were successful in completing their tasks, how long it took them, how they felt about the experience, and where they had difficulty. So, even though you might give up a little behavioral data, there is still a lot to be learned about the user experience.

5.    Online usability testing costs too much

Some people say that they would love to do more online usability testing, but that it is simply too expensive to use on a regular basis. It certainly can be pricey, but fortunately, in the last few years a variety of tools have emerged that let you run online usability studies for not a lot of money. There are various self-service providers that allow you to set up and run your own online usability study; you only pay to access their technology. A complete listing of vendors is available online.

If budgets are really tight, there is a way to run your own “discounted” flavor of online usability testing for free, or practically free. By taking advantage of an online survey tool (such as SurveyGizmo or SurveyMonkey), plus a little HTML and JavaScript, you can create your own online usability study for close to nothing (see the figure below for an example of the “homegrown” approach). While you will give up a little data and functionality, it can be useful in those situations when you have no budget for an online usability study. More details on creating a discounted online usability study are available as well.

Try it for yourself

I may or may not have convinced you that these myths are untrue. Regardless, I encourage you to consider online usability testing as part of your user experience design and research efforts. In doing so, you will discover its strengths and limitations for yourself. After all, every user research method has its own strengths and limitations, and online usability testing is no exception.

Editorial note: Interested in learning more about this subject? Bill recently co-authored Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies.

Top image: Brad Montgomery / cc

Bill Albert

Bill Albert is Director of the Design and Usability Center at Bentley University, a user experience design and research consultancy supporting clients worldwide. He recently co-authored (with Tom Tullis and Donna Tedesco) the first-ever book about online usability testing (Beyond the Usability Lab: Conducting Large-Scale Online User Experience Studies, Morgan Kaufmann, January 2010). Tweets under @uxmetrics.

Comments on this article


  6. Jacob

    Great article. Great breakdown.

    I have to agree that a lot of usability services are overpriced, but I think this will change in due course. We’ve recently introduced our own usability testing service at $9 a test, which we think is really reasonable and works well for freelancers or smaller design studios.

    A lot of our job is educating clients about usability testing and the value that can be gained from it. In that respect, articles like this are invaluable. I think people are starting to have more awareness of what the web could be like if usability and the user experience had more of a focus, and I’m really hopeful this trend will continue long term.

    Thanks again for the great read.

  7. Thanks for bringing up speedbumps in surveys. I do something similar, but in a slightly more subtle way: I ask the same question twice, but introduce a negative in one of the questions so that in order to answer both honestly, the respondent will have to answer these at opposite ends of the scale:

    “Was it easy to find the information you needed?”

    “Was it difficult to find the information you needed?”

    Now I’m wondering which of these two ways is better – or do they complement each other? Perhaps your straightforward question actually provides more accurate feedback. Opinion?

  8. Hi Eric,

    Good point. We actually call these consistency checks in our book. They are quite effective as well. In fact, one of the side benefits of SUS is that it can be used as a consistency check given the alternating positive/negative worded statements.

  9. Jane Mula

    Good article. But here is a block I hit while reading it: the use of the word “verbatim”, as in “…Verbatim comments… verbatim responses….” I could not figure out what word was actually meant. No user can give you feedback “verbatim” unless they are quoting someone/something else and doing it “verbatim” – a word-for-word identical quote.

    verbatim |vərˈbātəm| adverb & adjective
    in exactly the same words as were used originally : [as adv. ] subjects were instructed to recall the passage verbatim | [as adj. ] your quotations must be verbatim.
    ORIGIN late 15th cent.: from medieval Latin, from Latin verbum ‘word.’ Compare with literatim.
