Beyond the Usability Lab:

Conducting Large-Scale User Experience Studies

by Bill Albert, Tom Tullis, and Donna Tedesco

To be published January 29, 2010 by Elsevier/Morgan Kaufmann Publishers

This book is written for usability specialists, user experience researchers, market researchers, information architects, interaction designers, business analysts, and managers who are looking to learn about the capabilities of, and gain experience with, large-scale online usability studies. It is a practical how-to guide for conducting large-scale online usability studies to improve the user experience of web sites and software.

Outline

1. Introduction

2. Planning Your Study

3. Designing Your Study

Chapter 3 is devoted to developing the study design. The first half of the chapter covers the sections that are typically included in an automated usability study. For each section, we review best practices and common pitfalls, with the goal of giving the reader the confidence to put together an effective automated study. The last part of the chapter deals with common techniques that are used in various parts of a study, including branching, navigation, speed traps, and question types.

4. Launching Your Study

Chapter 4 deals with issues around launching an automated study, covering all the activities that happen after a study has been developed until the final data are available. The chapter discusses how to set up a pilot test and validate the study, how to time a launch to maximize participation and the quality of the results, and how to run a phased launch. It concludes with a discussion of how to monitor results, including both participation rates and data quality.

5. Data Preparation

Chapter 5 helps the reader prepare their data for the analysis stage. Some very important activities need to take place prior to data analysis to ensure valid results. Topics in this chapter include how to identify fraudulent participants, run consistency checks on participant responses, and identify outliers that may need to be removed from the analysis. The chapter concludes with a brief discussion of how to recode variables into forms that will be most useful in the analysis stage.
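As a simple illustration of the kind of outlier screening discussed above, the JavaScript sketch below flags task completion times that fall more than three standard deviations from the mean. The data, the cutoff, and the function name are purely illustrative; a fixed standard-deviation cutoff is just one common rule of thumb, not the only approach.

// Minimal sketch: flag completion times more than three standard
// deviations from the mean. Data and cutoff are illustrative only.
function flagOutliers(times, sdCutoff) {
  var mean = times.reduce(function (sum, t) { return sum + t; }, 0) / times.length;
  var variance = times.reduce(function (sum, t) {
    return sum + Math.pow(t - mean, 2);
  }, 0) / times.length;
  var sd = Math.sqrt(variance);
  return times.map(function (t) {
    return { time: t, outlier: Math.abs(t - mean) > sdCutoff * sd };
  });
}

// Hypothetical completion times in seconds; only the 400-second
// time exceeds the cutoff here.
console.log(flagOutliers([32, 45, 38, 41, 36, 44, 39, 35, 47, 40, 42, 400], 3));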

6. Data Analysis and Presentation

Chapter 6 covers everything the reader needs to know about analyzing and presenting data derived from an automated study. Each section of this chapter covers one type of data typically captured in an automated study. Verbatim analysis focuses on how to derive meaningful and reliable findings from open-ended responses. Task-based metrics include success, completion times, and ease-of-use ratings. Segmentation analysis includes ways to identify how distinct groups performed and reacted differently. Post-session analysis involves looking at metrics such as SUS scores, overall satisfaction and expectations, and ease-of-use ratings. Behavioral data analysis includes metrics such as click paths, page views, and time spent on each page. Combining data from more than one metric is a very important step in the analysis, and methods for identifying usability issues from all of the data are described and illustrated with examples. This chapter is very practically oriented, giving step-by-step direction on how to perform each type of analysis, and many examples demonstrate different ways to present the results.
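As one concrete example of the post-session metrics, a SUS score is computed from ten ratings on a 1-to-5 scale: each odd-numbered item contributes its rating minus 1, each even-numbered item contributes 5 minus its rating, and the sum is multiplied by 2.5 to give a score from 0 to 100. A minimal JavaScript sketch, using hypothetical ratings for a single participant, looks like this:

// Standard SUS scoring for one participant's ten ratings (1-5 scale).
// Odd-numbered items contribute (rating - 1); even-numbered items
// contribute (5 - rating); the sum is multiplied by 2.5.
function susScore(ratings) {
  if (ratings.length !== 10) {
    throw new Error("SUS requires exactly 10 item ratings");
  }
  var sum = ratings.reduce(function (total, rating, index) {
    var oddItem = index % 2 === 0; // index 0 corresponds to item 1
    return total + (oddItem ? rating - 1 : 5 - rating);
  }, 0);
  return sum * 2.5;
}

// Hypothetical ratings for one participant; yields a score of 85.
console.log(susScore([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]));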

7. Building Your Own Online Study

Chapter 7 shows readers how to create relatively simple online studies themselves. Approaches to presenting tasks and prototypes are described, as are techniques for collecting task success, times, and various kinds of self-reported data, including rating scales, open-ended questions, and the System Usability Scale (SUS). While some examples of HTML and JavaScript are shown, we describe them in such a way that even someone new to those technologies can understand and use them. Complete examples are shown that readers can easily adapt, and the code samples will also be provided on a companion website.
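To give a flavor of this kind of example, the sketch below times a task and records self-reported success. It is only a rough illustration of the approach; the element IDs, the placeholder task text, and the recordResult function are hypothetical rather than taken from the chapter.

<!-- Minimal sketch: time a task and record self-reported success. -->
<p>Task instructions would appear here.</p>
<button id="start-task">Start task</button>
<button id="task-success">I found the answer</button>
<button id="task-give-up">I give up</button>

<script>
  var taskStartTime;

  // Placeholder: in a real study this might write to a hidden form
  // field or send the result to a server; here it just logs it.
  function recordResult(result) {
    console.log(result);
  }

  document.getElementById("start-task").onclick = function () {
    taskStartTime = Date.now();
  };

  document.getElementById("task-success").onclick = function () {
    recordResult({ success: true, seconds: (Date.now() - taskStartTime) / 1000 });
  };

  document.getElementById("task-give-up").onclick = function () {
    recordResult({ success: false, seconds: (Date.now() - taskStartTime) / 1000 });
  };
</script>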

8. Commercial Online Solutions

Chapter 8 reviews the common online tools that can be used for running automated tests. While the "Do-It-Yourself" reader may want to use the techniques described in Chapter 7, others may want to use a commercial tool like those described in this chapter. Most of the chapter is devoted to the tools most often used to collect behavioral data, such as Keynote, RelevantView, and UserZoom. There is also a discussion of online tools that do not collect performance data, such as SurveyMonkey, ForeSee Results, and OpinionLab. Comparisons of the tools, including what kinds of data can be collected with each, are included. The chapter concludes with a brief discussion of other possible solutions, such as agencies that specialize in automated testing. Readers will also be referred to our companion website to keep up with updates and emerging software solutions.

9. Case Studies

Chapter 9 includes 5-7 short case studies of different types of online usability studies. These are being written by other contributors.

10. Ten Tips for a Successful Automated Study

Chapter 10 provides a summary of some of the key points made throughout the book. This summary is in the form of the top ten tips that someone should know when conducting their own automated study. These tips are very practical in nature.

