I really don’t like grocery shopping. There are a lot of things I don’t like about it, but especially trying to choose the “right” product. There are just too many choices. Do we really need this many types of mustard? Unmoderated usability testing tools used to fit nicely into the old general store: there were just a few tools, and the differences were obvious. Recently, unmoderated usability tools have begun to fill an entire American-sized grocery store. In this article I would like to help you walk down some of the aisles of this store and provide a little guidance on how to shop for the right unmoderated usability testing tool. Think of me as your “personal shopper”!
Important note: In the first version of this article we mistakenly shared incorrect information regarding the services that Webnographer provides. Webnographer is a service that carries out studies ranging from 100 to over 1,000 people, resulting in both qualitative and quantitative research data. It provides many different services.
Before we enter the grocery store, there are a few ground rules I want to lay down. First, we are only shopping for unmoderated usability testing tools. This does not include remote moderated testing tools such as Morae, WebEx, or GoToMeeting that can be used to run 1:1 usability sessions. We are also not shopping for online survey tools such as SurveyMonkey and SurveyGizmo. These are certainly helpful in collecting feedback from large numbers of participants, but there is typically no interaction with a product or design. We will be focusing on tools that researchers and designers can use to collect usability data from end users in an unmoderated way. These tools essentially involve setting up a study, launching it to a small or large number of participants, and then drawing conclusions about the usability of a specific product or design. Before we enter the store, I apologize in advance.
There are some tools that I will miss or may misrepresent. The tools are changing every day in terms of functionality and pricing structure, so this is only a fuzzy snapshot in time. With this in mind, let’s enter the store. As you can see from the “floor plan” below, there are two main sections of the store: quantitative and qualitative-based tools. Quantitative-based tools are designed to collect data from a large number of participants, with a focus on UX metrics such as task success, task completion times, click paths, and abandonment rate. These are the tools you need to run a benchmark usability study or compare subtle design treatments. The other side of the store holds the qualitative-based tools. These tools are essentially a substitute for lab or remote-based usability testing; they emphasize collecting feedback from a small number of end users in a quick-and-dirty fashion. You may or may not be able to derive any metrics, but at the very least you will gain some insight into the most significant usability issues and, hopefully, make the right design decisions.
There are only two full-service tools I am aware of: Keynote’s WebEffective and Imperium’s Relevant View. Both of these tools provide the flexibility you need in setting up an online usability test and in the data they can provide. You will have tremendous flexibility in the study design, including the use of conditional logic, blocking, and randomization. But what sets these two tools apart from the other quantitative options is the support you receive: experienced researchers will assist in designing the study, piloting/launching it, and analyzing the data. Of course, this support does not come cheap; an individual study can easily run $10K and up. There is no set price; it all depends on the individual characteristics of your study and how much help you need. But you will get peace of mind that the study was conducted in a highly professional way. You can think of this as the “prepared food” section of the store.
These self-service tools range in their degree of flexibility and analytical capabilities. UserZoom offers much of the same functionality as the full-service options, but at a fraction of the cost: a typical usability test is about $3K, or $9K for an annual license. Loop11 is much more streamlined, in that it does not support functionality such as conditional logic or randomization. Loop11 is easy to use and very inexpensive ($350 per study), but it is very basic. If you need to collect various UX metrics, want to visualize abandonment or click paths, or really want to do anything beyond tasks and simple questions, you should consider UserZoom. Bottom line, though: any of these tools will help you collect solid usability metrics at a reasonable cost. They also offer technical support as well as access to customer panels.
These online tools allow researchers to create or validate the information architecture for any website. The folks at Optimal Workshop have created two valuable tools. The first, OptimalSort, is a card-sorting application: it allows participants to sort items into groups and provide labels for those groups, then analyzes the data from all the participants and determines the best groupings for the items. This is sometimes referred to as an open card sort. The Treejack application allows the researcher to validate an information architecture. Participants are given a set of tasks and then asked to select the correct location for each task; task success and time are measured as a way to evaluate and validate the IA. This is sometimes referred to as a closed card sort. WebSort offers very similar functionality to OptimalSort. The pricing for all three products is very reasonable, with single studies running about $100.
Researchers on a very tight budget, or with very simple needs, might want to consider a DIY approach. All the details can be found in our book, Beyond the Usability Lab: Conducting Large-scale Online User Experience Studies (Morgan Kaufmann/Elsevier, 2010). The basic idea is that the researcher uses a little bit of JavaScript and HTML to create a simple web page that introduces the study and launches two separate browser windows: one window contains the website being evaluated, and the other contains the survey itself (usually very small, above or below the website window). The participant moves between the two windows as they go through the study. While data are not passed across the two windows, you can still capture basic data such as task success, completion times, and self-reported metrics.
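As a rough illustration of this DIY setup, here is a minimal sketch of such a launch page. The URLs, window names, and sizes below are hypothetical placeholders, and the survey page itself (with its tasks, questions, and timing code) would be a separate page you build:

```html
<!-- launch.html: a minimal DIY launch page (illustrative sketch only).
     The URLs and window dimensions below are placeholders, not a
     prescription from the book. -->
<!DOCTYPE html>
<html>
<head>
  <title>Usability Study</title>
  <script>
    function startStudy() {
      // Window 1: the live website being evaluated (placeholder URL).
      window.open("https://www.example.com/",
                  "siteWindow",
                  "width=1024,height=600,left=0,top=160");
      // Window 2: a small survey window positioned above the site window,
      // containing the task instructions and questions.
      window.open("survey.html",
                  "surveyWindow",
                  "width=1024,height=150,left=0,top=0");
    }
  </script>
</head>
<body>
  <h1>Welcome to our usability study</h1>
  <p>Clicking the button below will open two windows: the website you
     will be evaluating, and a small survey window with your tasks.</p>
  <button onclick="startStudy()">Start the study</button>
</body>
</html>
```

Because no data pass between the two windows, the survey page would record task timing and self-reported success on its own (for example, with `Date.now()` timestamps taken when a participant starts and ends each task) and submit those values through whatever form handler or survey back end you already use.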
Video tools are roughly equivalent to having a lab study on autopilot. They are quick and easy to set up, and inexpensive. The researcher simply provides a set of tasks and a URL and identifies the targeted participants. Participants who fit the profile are automatically recruited. Each participant carries out the tasks while their behaviors and verbal comments are captured by a webcam. Once the study is finished, the researcher can view a video of all the sessions and distill some of the high-level issues. There is little, if any, opportunity to collect quantitative data from these sessions. However, they may work well for the researcher who needs a quick sanity check of a design. The cost is very reasonable, usually less than $20 per participant.
These tools provide researchers with a detailed report from each user session. As with the video tools, the researcher sets up a study by identifying tasks, but here the study also includes a set of open-ended questions that participants answer as they interact with the website. Participants who meet the selection criteria are recruited and complete the study. The output is a detailed report of each answer from each participant. This is a very easy way to get some high-level feedback on a design. While it might take some work to identify the significant usability issues, it is very easy to use the quotes to complement an in-person study. The cost is also very reasonable, typically less than $20 per participant.
Click and Mouse Tools
Click and mouse tools provide the researcher with click and mouse-movement data on a website. Unlike the other tools that capture comments (video or written), these tools focus on how users move throughout a website. There are some very nice visualizations that come out of these tools, such as click maps, attention maps, mouse movements, and scroll maps. These tools are particularly effective at quickly capturing data about what is drawing users’ attention. For example, fivesecondtest.com is very simple: you show static images of web pages to participants for five seconds, and participants click on the features that initially grab their attention. Of these tools, ClickTale seems to offer the most functionality, including metrics such as abandonment rates as well as video recordings. The cost for these tools is very reasonable, usually less than $20 per participant, or a monthly license for a few hundred dollars or less.
Combination tools collect a lot of data about the user experience. Some of the data are qualitative such as videos of the sessions and verbatims. Other data are quantitative such as clicks, task success, keystrokes, pages visited, and completion times. Unlike the self-service tools, these tools are typically used for smaller sample sizes (n<20). They provide a nice solution to the researcher who wants to gain a more complete picture of the user experience with a small number of participants. Pricing is quite affordable, typically a few hundred dollars or less for a single study.