Handbook of Usability Testing - Jeffrey Rubin - E-Book

Jeffrey Rubin

Description

Whether it's software, a cell phone, or a refrigerator, your customer wants - no, expects - your product to be easy to use. This fully revised handbook provides clear, step-by-step guidelines to help you test your product for usability. Completely updated with current industry best practices, it can give you that all-important marketplace advantage: products that perform the way users expect. You'll learn to recognize factors that limit usability, decide where testing should occur, set up a test plan to assess goals for your product's usability, and more.

Page count: 608

Publication year: 2011




Table of Contents

Title Page

Copyright

Dedication

About the Authors

Credits

Acknowledgments

Foreword

Preface to the Second Edition

Part I: Usability Testing: An Overview

Chapter 1: What Makes Something Usable?

What Do We Mean by “Usable”?

What Makes Something Less Usable?

What Makes Products More Usable?

What Are Techniques for Building in Usability?

Chapter 2: What Is Usability Testing?

Why Test? Goals of Testing

Basics of the Methodology

Chapter 3: When Should You Test?

Four Types of Tests: An Overview

Exploratory or Formative Study

Assessment or Summative Test

Validation or Verification Test

Comparison Test

Iterative Testing: Test Types through the Lifecycle

Chapter 4: Skills for Test Moderators

Characteristics of a Good Test Moderator

Getting the Most out of Your Participants

Troubleshooting Typical Moderating Problems

How to Improve Your Session-Moderating Skills

Part II: The Process for Conducting a Test

Chapter 5: Develop the Test Plan

Why Create a Test Plan?

The Parts of a Test Plan

Sample Test Plan

Chapter 6: Set Up a Testing Environment

Decide on a Location and Space

Recommended Testing Environment: Minimalist Portable Lab

Gather and Check Equipment, Artifacts, and Tools

Identify Co-Researchers, Assistants, and Observers

Chapter 7: Find and Select Participants

Characterize Users

Define the Criteria for Each User Group

Determine the Number of Participants to Test

Write the Screening Questionnaire

Find Sources of Participants

Screen and Select Participants

Schedule and Confirm Participants

Compensate Participants

Protect Participants' Privacy and Personal Information

Chapter 8: Prepare Test Materials

Guidelines for Observers

Orientation Script

Background Questionnaire

Data Collection Tools

Nondisclosures, Consent Forms, and Recording Waivers

Pre-Test Questionnaires and Interviews

Prototypes or Products to Test

Task Scenarios

Optional Training Materials

Post-Test Questionnaire

Common Question Formats

Debriefing Guide

Chapter 9: Conduct the Test Sessions

Guidelines for Moderating Test Sessions

Checklists for Getting Ready

When to Intervene

What Not to Say to Participants

Chapter 10: Debrief the Participant and Observers

Why Review with Participants and Observers?

Techniques for Reviewing with Participants

Where to Hold the Participant Debriefing Session

Basic Debriefing Guidelines

Advanced Debriefing Guidelines and Techniques

Reviewing and Reaching Consensus with Observers

Chapter 11: Analyze Data and Observations

Compile Data

Summarize Data

Analyze Data

Chapter 12: Report Findings and Recommendations

What Is a Finding?

Shape the Findings

Draft the Report

Develop Recommendations

Refine the Report Format

Create a Highlights Video or Presentation

Part III: Advanced Techniques

Chapter 13: Variations on the Basic Method

Who? Testing with Special Populations

What? Prototypes versus Real Products

How? Techniques for Monitored Tests

Where? Testing Outside a Lab

Self-Reporting (Surveys, Diary Studies)

Chapter 14: Expanding from Usability Testing to Designing the User Experience

Stealth Mode: Establish Value

Build on Successes

Formalize Processes and Practices

Expand UCD throughout the Organization

Afterword

Index

Handbook of Usability Testing, Second Edition: How to Plan, Design, and Conduct Effective Tests

Published by

Wiley Publishing, Inc.

10475 Crosspoint Boulevard

Indianapolis, IN 46256

Copyright © 2008 by Wiley Publishing, Inc., Indianapolis, Indiana

Published simultaneously in Canada

ISBN: 978-0-470-18548-3

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Legal Department, Wiley Publishing, Inc., 10475 Crosspoint Blvd., Indianapolis, IN 46256, (317) 572-3447, fax (317) 572-4355, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services or to obtain technical support, please contact our Customer Care Department within the U.S. at (800) 762-2974, outside the U.S. at (317) 572-3993 or fax (317) 572-4002.

Library of Congress Cataloging-in-Publication Data is available from the publisher.

Trademarks: Wiley, the Wiley logo, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. Wiley Publishing, Inc. is not associated with any product or vendor mentioned in this book.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Dedicated to those for whom usability and user-centered design is a way of life and their work a joyful expression of their genuine concern for others.

—Jeff

To my parents, Jan and Duane Chisnell, who believe me when I tell them that I am working for world peace through user research and usability testing.

—Dana

About the Authors

Jeff Rubin has more than 30 years' experience as a human factors/usability specialist in the technology arena. While at the Bell Laboratories' Human Performance Technology Center, he developed and refined testing methodologies, and conducted research on the usability criteria of software, documentation, and training materials.

During his career, Jeff has provided consulting services and workshops on the planning, design, and evaluation of computer-based products and services for hundreds of companies including Hewlett Packard, Citigroup, Texas Instruments, AT&T, the Ford Motor Company, FedEx, Arbitron, Sprint, and State Farm. He was cofounder and managing partner of The Usability Group from 1999–2005, a leading usability consulting firm that offered user-centered design and technology adoption strategies. Jeff served on the Board of the Usability Professionals Association from 1999–2001.

Jeff holds a degree in Experimental Psychology from Lehigh University. His extensive experience in the application of user-centered design principles to customer research, along with his ability to communicate complex principles and techniques in nontechnical language, make him especially qualified to write on the subject of usability testing.

He is currently retired from usability consulting and pursuing other passionate interests in the nonprofit sector.

Dana Chisnell is an independent usability consultant and user researcher operating UsabilityWorks in San Francisco, CA. She has been doing usability research, user interface design, and technical communications consulting and development since 1982.

Dana took part in her first usability test in 1983, while she was working as a research assistant at the Document Design Center. It was on a mainframe office system developed by IBM. She was still very wet behind the ears. Since then, she has worked with hundreds of study participants for dozens of clients to learn about design issues in software, hardware, web sites, online services, games, and ballots (and probably other things that are better forgotten about). She has helped companies like Yahoo!, Intuit, AARP, Wells Fargo, E*TRADE, Sun Microsystems, and RLG (now OCLC) perform usability tests and other user research to inform and improve the designs of their products and services.

Dana's colleagues consider her an expert in usability issues for older adults and plain language. (She says she's still learning.) Lately, she has been working on issues related to ballot design and usability and accessibility in voting.

She has a bachelor's degree in English from Michigan State University. She lives in the best neighborhood in the best city in the world.

Credits

Executive Editor

Bob Elliott

Development Editor

Maureen Spears

Technical Editor

Janice James

Production Editor

Eric Charbonneau

Copy Editor

Foxxe Editorial Services

Editorial Manager

Mary Beth Wakefield

Production Manager

Tim Tate

Vice President and Executive Group Publisher

Richard Swadley

Vice President and Executive Publisher

Joseph B. Wikert

Project Coordinator, Cover

Lynsey Stanford

Proofreader

Nancy Bell

Indexer

Jack Lewis

Cover Image

Getty Images/Photodisc/McMillan Digital Art

Acknowledgments

From Jeff Rubin

From the first edition, I would like to acknowledge:

Dean Vitello and Roberta Cross, who edited the entire first manuscript.

Michele Baliestero, administrative assistant extraordinaire.

John Wilkinson, who reviewed the original outline and several chapters of the manuscript.

Pamela Adams, who reviewed the original outline and most of the manuscript, and with whom I worked on several usability projects.

Terri Hudson from Wiley, who initially suggested I write a book on this topic.

Ellen Mason, who brought me into Hewlett Packard to implement a user-centered design initiative and allowed me to try out new research protocols.

For this second edition, I would like to acknowledge:

Dave Rinehart, my partner in crime at The Usability Group, and co-developer of many user research strategies.

The staff of The Usability Group, especially Ann Wanschura, who was always loyal and kind, and who never met a screener questionnaire she could not master.

Last, thanks to all the clients down through the years who showed confidence and trust in me and my colleagues to do the right thing for their customers.

From Dana Chisnell

The obvious person to thank first is Jeff Rubin. Jeff wrote Handbook of Usability Testing, one of the seminal books about usability testing, at a time when it was very unusual for companies to invest resources in performing a reality check on the usability of their products. The first edition had staying power. It became such a classic that apparently people want more. For better or worse, the world still needs books about usability testing. So, a thousand thank-yous to Jeff for writing the first edition, which helped many of us get started with usability testing over the last 14 years. Thanks, too, Jeff, for inviting me to work with you on the second edition. I am truly honored. And thank you for offering your patience, diligence, humor, and great wisdom to me and to the project of updating the Handbook.

Ginny Redish and Joe Dumas deserve great thanks as well. Their book, A Practical Guide to Usability Testing, which came out at the same time as Jeff's book, formed my approach to usability testing. Ginny has been my mentor for several years. In some weird twist of fate, it was Ginny who suggested me to Jeff. The circle is complete.

A lot of people will be thankful that this edition is done, none of them more than I. But Janice James probably comes a close second. Her excellent technical review of every last word of the second edition kept Jeff and me honest on the methodology and the modern realities of conducting usability tests. She inspired dozens of important updates and expansions in this edition.

So did friends and colleagues who gave us feedback on the first edition to inform the new one. JoAnn Hackos, Linda Urban, and Susan Becker all gave detailed comments about where they felt the usability world had changed, what their students had said would be more helpful, and insights about what they might do differently if it were their book.

Arnold Arcolio, who also gave extensive, specific comments before the revising started, generously spot-checked and re-reviewed drafts as the new edition took form.

Sandra Olson deserves thanks for helping me to develop a basic philosophy about how to recruit participants for user research and usability studies. Her excellent work as a recruiting consultant and her close review informed much that is new about recruiting in this book.

Ken Kellogg, Neil Fitzgerald, Christy Wells, and Tim Kiernan helped me understand what it takes to implement programs within companies that include usability testing and that attend closely to their users' experiences.

Other colleagues have been generous with stories, sources, answers to random questions, and examples (which you will see sprinkled throughout the book), as well. Chief among them are my former workmates at Tec-Ed, especially Stephanie Rosenbaum, Laurie Kantner, and Lori Anschuetz.

Jared Spool of UIE has also been encouraging and supportive throughout, starting with thorough, thoughtful feedback about the first edition and continuing through liberal permissions to include techniques and examples from his company's research practice in the second edition.

Thanks also go to those I've learned from over the years who are part of the larger user experience and usability community, including some I have never met face to face but know through online discussions, papers, articles, reports, and books.

To the clients and companies I have worked with over 25 years, as well as the hundreds of study participants, I also owe thanks. Some of the examples and stories here reflect composites of my experiences with all of those important people.

Thanks also go to Bob Elliott at Wiley for contacting Jeff about reviving the Handbook in the first place, and Maureen Spears for managing the “developmental” edit of a time-tested resource with humor, flexibility, and understanding.

Finally, I thank my friends and family for nodding politely and pouring me a drink when I might have gone over the top on some point of usability esoterica (to them) at the dinner table. My parents, Jan and Duane Chisnell, and Doris Ditner deserve special thanks for giving me time and space so I could hole up and write.

Foreword

Hey! I know you!

Well, I don't know you personally, but I know the type of person you are. After all, I'm a trained observer and I've already observed a few things.

First off, I observed that you're the type of person who likes to read a quality book. And, while you might appreciate a book about a dashing anthropology professor who discovers a mysterious code in the back of an ancient script that leads him on a globetrotting adventure that endangers his family and starts to topple the world's secret power brokers, you've chosen to pick up a book called Handbook of Usability Testing, Second Edition. I'm betting you're going to enjoy it just as much. (Sorry, there is no secret code hidden in these pages—that I've found—and I've read it four times so far.)

You're also the type of person who wonders how products become so frustrating and hard to use. I'm also betting that you're a person who would really like to help your organization produce designs that delight its customers and users.

How do I know all these things? Because, well, I'm just like you; and I have been for almost 30 years. I conducted my first usability test in 1981. I was testing one of the world's first word processors, which my team had developed. We'd been working on the design for a while, growing increasingly uncomfortable with how complex it had become. Our fear was that we'd created a design that nobody would figure out.

In one of the first tests of its kind, we sat a handful of users down in front of our prototype and asked each to create new documents, make changes, save the files, and print them out. While we had our hunches about the design confirmed (even the simplest commands were hard to use), we felt exhilarated by the amazing feedback we'd gotten directly from the folks who would be using our design. We returned to our offices, changed the design, and couldn't wait to put the revised versions in front of the next batch of folks.

Since those early days, I've conducted hundreds of similar tests. (Actually, it's been more than a thousand, but who's counting?) I still find each test as fascinating and exhilarating as those first word processor evaluations. I still learn something new every time, something I could never have predicted, that will greatly improve the design now that we know it. That's the beauty of usability tests—they're never boring.

Many test sessions stand out in my mind. There was the one where the VP of finance jumped out of his chair, having come across a system prompt asking him to “Hit Enter to Default”, shouting “I've never defaulted on anything before, I'm not going to start now.” There was the session where each of the users looked quizzically at the icon depicting a blood-dripping hatchet, exclaiming how cool it looked but not guessing it meant “Execute Program”. There was the one where the CEO of one of the world's largest consumer products companies, while evaluating an information system created specifically for him, turned and apologized to me, the session moderator, for ruining my test—because he couldn't figure out the design for even the simplest tasks. I could go on for hours. (Buy me a drink and I just might!)

Why are usability tests so fascinating? I think it's because you get to see the design through the user's eyes. Users bring something into the foreground that no amount of discussion or debate would ever discover. And even more exciting is when a participant turns to you and says, “I love this—can I buy it right now?”

Years ago, the research company I work for, User Interface Engineering, conducted a study to understand where usability problems originate. We looked at dozens of large projects, traipsing through the myriad binders of internal documentation, looking to identify at what point usability problems we'd discovered had been introduced into the design. We were looking to see if we could catalogue the different ways teams create problems, so maybe they could create internal processes and mechanisms to avoid them going forward.

Despite our attempts, we realized such a catalogue would be impossible, not because there were too many causes, but because there were too few. In fact, there was only one cause. Every one of the hundreds of usability problems we were tracking was caused by the same exact problem: someone on the design team was missing a key piece of information when they were faced with an important design decision. Because they didn't have what they needed, they'd taken a guess and the usability problem was born. Had they had the info, they would've made a different, more informed choice, likely preventing the issue.

So, as fun and entertaining as usability testing is, we can't forget its core purpose: to help the design team make informed decisions. That's why the amazing work that Jeff and Dana have put into this book is so important. They've done a great job of collecting and organizing the essential techniques and tricks for conducting effective tests.

When the first edition of this book came out in 1994, I was thrilled. It was the first time anyone had gathered the techniques into one place, giving all of us a single resource to learn from and share with our colleagues. At UIE, it was our bible and we gave hundreds of copies to our clients, so they'd have the resource at their fingertips.

I'm even more thrilled with this new edition. We've learned a ton since ’94 on how to help teams improve their designs and Dana and Jeff have captured all of it nicely. You'll probably get tired of hearing me recommend this book all the time.

So, read on. Learn how to conduct great usability tests that will inform your team and provide what they need to create a delightful design. And, look forward to the excitement you'll experience when a participant turns to you and tells you just how much they love your design.

—Jared M. Spool, Founding Principal, User Interface Engineering

P.S. I think there's a hint to the secret code on page 114. It's down toward the bottom. Don't tell anyone else.

Preface to the Second Edition

Welcome to the revised, improved second edition of Handbook of Usability Testing. It has been 14 long years since this book first went to press, and I'd like to thank all the readers who have made the Handbook so successful, and especially those who communicated their congratulations with kind words.

In the time since the first edition went to press, much in the world of usability testing has changed dramatically. For example, “usability,” “user experience,” and “customer experience,” arcane terms at best back then, have become rather commonplace in reviews and marketing literature for new products. Other notable changes include the Internet explosion (in its infancy in ’94), the transportability and miniaturization of testing equipment (lab in a bag, anyone?), the myriad new methods of data collection (remote, automated, and digitized), and the ever-shrinking life cycle for introducing new technological products and services. Suffice it to say, usability testing has gone mainstream and is no longer just the province of specialists. For all these reasons and more, a second edition was necessary and, dare I say, long overdue.

The most significant change in this edition is that there are now two authors, where previously, I was the sole author. Let me explain why. I have essentially retired from usability consulting for health reasons after 30 plus years. When our publisher, Wiley, indicated an interest in updating the book, I knew it was beyond my capabilities alone, yet I did want the book to continue its legacy of helping readers improve the usability of their products and services. So I suggested to Wiley that I recruit a skilled coauthor (if it was possible to find one who was interested and shared my sensibilities for the discipline) to do the heavy lifting on the second edition. It was my good fortune to connect with Dana Chisnell, and she has done a superlative job, beyond my considerable expectations, of researching, writing, updating, refreshing, and improving the Handbook. She has been a joy to work with, and I couldn't have asked for a better partner and usability professional to pass the torch to, and to carry the Handbook forward for the next generation of readers.

In this edition, Dana and I have endeavored to retain the timeless principles of usability testing, while revising those elements of the book that are clearly dated, or that can benefit from improved methods and techniques. You will find hundreds of additions and revisions such as:

Reordering of the main sections (see below).

Reorganization of many chapters to align them more closely to the flow of conducting a test.

Improved layout, format, and typography.

Updating of many of the examples and samples that preceded the ascendancy of the Internet.

Improved drawings.

The creation of an ancillary web site, www.wiley.com/go/usabilitytesting, which contains supplemental materials such as:

  Updated references.

  Books, blogs, podcasts, and other resources.

  Electronic versions of the deliverables used as examples in the book.

  More examples of test designs and, over time, other deliverables contributed by the authors and others who aspire to share their work.

Regarding the reordering of the main sections, we have simplified into three parts the material that previously was spread among four sections. We now have:

Part 1: Overview of Testing, which covers the definition of key terms, presents an expanded discussion of user-centered design and other usability techniques, and explains the basics of moderating a test.

Part 2: Basic Process of Testing, which covers the how-to of testing in step-by-step fashion.

Part 3: Advanced Techniques, which covers the who?, what?, where?, and how? of variations on the basic method, and also discusses how to extend one's influence on the whole of product development strategy.

What hasn't changed is the rationale for this book altogether. With the demand for usable products far outpacing the number of trained professionals available to provide assistance, many product developers, engineers, system designers, technical communicators, and marketing and training specialists have had to assume primary responsibility for usability within their organizations. With little formal training in usability engineering or user-centered design, many are being asked to perform tasks for which they are unprepared.

This book is intended to help bridge this gap in knowledge and training by providing a straightforward, step-by-step approach for evaluating and improving the usability of technology-based products, systems, and their accompanying support materials. It is a “how-to” book, filled with practical guidelines, realistic examples, and many samples of test materials.

But it is also intended for a secondary audience of the more experienced human factors or usability specialist who may be new to the discipline of usability testing, including:

Human factors specialists

Managers of product and system development teams

Product marketing specialists

Software and hardware engineers

System designers and programmers

Technical communicators

Training specialists

A third audience is college and university students in the disciplines of computer science, technical communication, industrial engineering, experimental and cognitive psychology, and human factors engineering, who wish to learn a pragmatic, no-nonsense approach to designing usable products.

In order to communicate clearly with these audiences, we have used plain language, and have kept the references to formulas and statistics to a bare minimum. While many of the principles and guidelines are based on theoretical and practitioner research, the vast majority have been drawn from Dana's and my combined 55 years of experience as usability specialists designing, evaluating, and testing all manner of software, hardware, and written materials. Wherever possible, we have tried to offer explanations for the methods presented herein, so that you, the reader, might avoid the pitfalls and political landmines that we have discovered only through substantial trial and error. For those readers who would like to dig deeper, we have included references to other publications and articles that influenced our thinking at www.wiley.com/go/usabilitytesting.

Caveat

In writing this book, we have placed tremendous trust in the reader to acknowledge his or her own capabilities and limitations as they pertain to user-centered design and to stay within them. Be realistic about your own level of knowledge and expertise, even if management anoints you as the resident usability expert. Start slowly with small, simple studies, allowing yourself time to acquire the necessary experience and confidence to expand further. Above all, remember that the essence of user-centered design is clear (unbiased) seeing, appreciation of detail, and trust in the ability of your future customers to guide your hand, if you will only let them.

—Jeff Rubin

Part I

Usability Testing: An Overview

Chapter 1: What Makes Something Usable?

Chapter 2: What Is Usability Testing?

Chapter 3: When Should You Test?

Chapter 4: Skills for Test Moderators

Chapter 1

What Makes Something Usable?

What makes a product or service usable?

Usability is a quality that many products possess, but many, many more lack. There are historical, cultural, organizational, monetary, and other reasons for this, which are beyond the scope of this book. Fortunately, however, there are customary and reliable methods for assessing where design contributes to usability and where it does not, and for judging what changes to make to designs so a product can be usable enough to survive or even thrive in the marketplace.

It can seem hard to know what makes something usable because unless you have a breakthrough usability paradigm that actually drives sales (Apple's iPod comes to mind), usability is only an issue when it is lacking or absent. Imagine a customer trying to buy something from your company's e-commerce web site. The inner dialogue they may be having with the site might sound like this: I can't find what I'm looking for. Okay, I have found what I'm looking for, but I can't tell how much it costs. Is it in stock? Can it be shipped to where I need it to go? Is shipping free if I spend this much? Nearly everyone who has ever tried to purchase something on a web site has encountered issues like these.

It is easy to pick on web sites (after all there are so very many of them), but there are myriad other situations where people encounter products and services that are difficult to use every day. Do you know how to use all of the features on your alarm clock, phone, or DVR? When you contact a vendor, how easy is it to know what to choose in their voice-based menu of options?

What Do We Mean by “Usable”?

In large part, what makes something usable is the absence of frustration in using it. As we lay out the process and method for conducting usability testing in this book, we will rely on this definition of “usability”: when a product or service is truly usable, the user can do what he or she wants to do the way he or she expects to be able to do it, without hindrance, hesitation, or questions.

But before we get into defining and exploring usability testing, let's talk a bit more about the concept of usability and its attributes. To be usable, a product or service should be useful, efficient, effective, satisfying, learnable, and accessible.

Usefulness concerns the degree to which a product enables a user to achieve his or her goals, and is an assessment of the user's willingness to use the product at all. Without that motivation, other measures make no sense, because the product will just sit on the shelf. If a system is easy to use, easy to learn, and even satisfying to use, but does not achieve the specific goals of a specific user, it will not be used even if it is given away for free. Interestingly enough, usefulness is probably the element that is most often overlooked during experiments and studies in the lab.

In the early stages of product development, it is up to the marketing team to ascertain what product or system features are desirable and necessary before other elements of usability are even considered. Lacking that, the development team is hard-pressed to take the user's point of view and will simply guess or, even worse, use themselves as the user model. This is very often where a system-oriented design takes hold.

Efficiency is the quickness with which the user's goal can be accomplished accurately and completely and is usually a measure of time. For example, you might set a usability testing benchmark that says “95 percent of all users will be able to load the software within 10 minutes.”

Effectiveness refers to the extent to which the product behaves in the way that users expect it to and the ease with which users can use it to do what they intend. This is usually measured quantitatively with error rate. Your usability testing measure for effectiveness, like that for efficiency, should be tied to some percentage of total users. Extending the example from efficiency, the benchmark might be expressed as “95 percent of all users will be able to load the software correctly on the first attempt.”

Learnability is a part of effectiveness and has to do with the user's ability to operate the system to some defined level of competence after some predetermined amount and period of training (which may be no time at all). It can also refer to the ability of infrequent users to relearn the system after periods of inactivity.

Satisfaction refers to the user's perceptions, feelings, and opinions of the product, usually captured through both written and oral questioning. Users are more likely to perform well on a product that meets their needs and provides satisfaction than one that does not. Typically, users are asked to rate and rank products that they try, and this can often reveal causes and reasons for problems that occur.
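Benchmarks like the efficiency and effectiveness examples above lend themselves to simple pass/fail computation once test sessions are complete. The following is a hypothetical sketch; the task times and first-attempt flags are invented example data, not figures from any study:

```python
# Hypothetical sketch: checking usability benchmarks against test-session data.
# The load times and success flags below are invented examples.

def meets_benchmark(results, threshold_pct):
    """Return True if at least threshold_pct of results are successes."""
    passed = sum(1 for r in results if r)
    return passed / len(results) * 100 >= threshold_pct

# Efficiency: "95 percent of all users will be able to load the software
# within 10 minutes." Times are in minutes.
load_times = [4.2, 6.1, 8.0, 9.5, 7.3, 5.8, 6.6, 8.9, 9.9, 7.1]
efficiency_ok = meets_benchmark([t <= 10 for t in load_times], 95)

# Effectiveness: "95 percent of all users will be able to load the software
# correctly on the first attempt."
first_attempt = [True, True, True, True, True, True, True, True, True, False]
effectiveness_ok = meets_benchmark(first_attempt, 95)

print(efficiency_ok, effectiveness_ok)  # → True False
```

In this invented data set all ten participants load within 10 minutes (100 percent, so the efficiency benchmark passes), but only nine of ten succeed on the first attempt (90 percent, so the effectiveness benchmark fails).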

Usability goals and objectives are typically defined in measurable terms of one or more of these attributes. However, let us caution that making a product usable is never simply a matter of generating numbers about usage and satisfaction. While the numbers can tell us whether a product “works” or not, there is a distinctive qualitative element to how usable something is, one that is hard to capture with numbers and difficult to pin down. The quantitative data tell you that there is a problem; interpreting the behavioral data tells you why the problem occurs and how to fix it. Any doctor can measure a patient's vital signs, such as blood pressure and pulse rate. But interpreting those numbers and recommending the appropriate course of action for a specific patient is the true value of the physician. Judging the several possible alternative causes of a design problem, and knowing which are especially likely in a particular case, often means looking beyond individual data points in order to design an effective treatment. These are the subtleties that evade the untrained eye.

Accessibility and usability are siblings. In the broadest sense, accessibility is about having access to the products needed to accomplish a goal. But in this book when we talk about accessibility, we are looking at what makes products usable by people who have disabilities. Making a product usable for people with disabilities—or who are in special contexts, or both—almost always benefits people who do not have disabilities. Considering accessibility for people with disabilities can clarify and simplify design for people who face temporary limitations (for example, injury) or situational ones (such as divided attention, or environmental conditions like bright light or not enough light). There are many tools and sets of guidelines available to assist you in making accessible designs. (We include pointers to accessibility resources on the web site that accompanies this book; see www.wiley.com/go/usabilitytesting for more information.) You should acquaint yourself with accessibility best practices so that you can implement them in your organization's user-centered design process along with usability testing and other methods.

Making things more usable and accessible is part of the larger discipline of user-centered design (UCD), which encompasses a number of methods and techniques that we will talk about later in this chapter. In turn, user-centered design rolls up into an even larger, more holistic concept called experience design. Customers may be able to complete the purchase process on your web site, but how does that mesh with what happens when the product is delivered, maintained, serviced, and possibly returned? What does your organization do to support the research and decision-making process leading up to the purchase? All of these figure into experience design.

Which brings us back to usability.

True usability is invisible. If something is going well, you don't notice it. If the temperature in a room is comfortable, no one complains. But usability in products happens along a continuum. How usable is your product? Could it be more usable even though users can accomplish their goals? Is it worth improving?

Most usability professionals spend most of their time working on eliminating design problems, trying to minimize frustration for users. This is a laudable goal! But know that it is a difficult one to attain for every user of your product, and it affects only a small part of the user's experience of accomplishing a goal. Moreover, though there are quantitative approaches to testing the usability of products, it is impossible to measure the usability of something directly. You can only measure how unusable it is: how many problems people have using something, what those problems are, and why they occur.

By incorporating evaluation methods such as usability testing throughout an iterative design process, it is possible to make products and services that are useful and usable, and possibly even delightful.

What Makes Something Less Usable?

Why are so many high-tech products so hard to use?

In this section, we explore this question, discuss why the situation exists, and examine the overall antidote to this problem. Many of the examples in this book involve not only consumer hardware, software, and web sites but also documentation such as user's guides and embedded assistance such as on-screen instructions and error messages. The methods in this book also work for appliances such as music players, cell phones, and game consoles. Even products such as the control panel for an ultrasound machine or the user manual for a digital camera fall within the scope of this book.

Five Reasons Why Products Are Hard to Use

For those of you who currently work in the product development arena, as engineers, user-interface designers, technical communicators, training specialists, or managers in these disciplines, it seems likely that several of the reasons for the development of hard-to-use products and systems will sound painfully familiar.

Development focuses on the machine or system.
Target audiences expand and adapt.
Designing usable products is difficult.
Team specialists don't always work in integrated ways.
Design and implementation don't always match.

Reason 1: Development Focuses on the Machine or System

During design and development of the product, the emphasis and focus may have been on the machine or system, not on the person who is the ultimate end user. The general model of human performance shown in Figure 1.1 helps to clarify this point.

Figure 1.1 Bailey's Human Performance Model

There are three major components to consider in any type of human performance situation, as shown in Bailey's Human Performance Model.

The human
The context
The activity

Because the development of a system or product is an attempt to improve human performance in some area, designers should consider these three components during the design process. All three affect the final outcome of how well humans ultimately perform. Unfortunately, of these three components, designers, engineers, and programmers have traditionally placed the greatest emphasis on the activity component, and much less emphasis on the human and the context components. The relationship of the three components to each other has also been neglected. There are several explanations for this unbalanced approach:

There has been an underlying assumption that because humans are so inherently flexible and adaptable, it is easier to let them adapt themselves to the machine, rather than vice versa.
Developers traditionally have been more comfortable working with the seemingly “black and white,” scientific, concrete issues associated with systems, than with the more gray, muddled, ambiguous issues associated with human beings.
Developers have historically been hired and rewarded not for their interpersonal, “people” skills but for their ability to solve technical problems.
The most important factor leading to the neglect of human needs has been that in the past, designers were developing products for end users who were much like themselves. There was simply no reason to study such a familiar colleague. That leads us to the next point.

Reason 2: Target Audiences Expand and Adapt

As technology has penetrated the mainstream consumer market, the target audience has expanded and continues to change dramatically. Development organizations have been slow to react to this evolution.

The original users of computer-based products were enthusiasts (also known as early adopters) possessing expert knowledge of computers and mechanical devices, a love of technology, the desire to tinker, and pride in their ability to troubleshoot and repair any problem. Developers of these products shared similar characteristics. In essence, users and developers of these systems were one and the same. Because of this similarity, the developers practiced “next-bench” design, a method of designing for the user who is literally sitting one bench away in the development lab. Not surprisingly, this approach met with relative success, and users rarely if ever complained about difficulties.

Why would they complain? Much of their joy in using the product was the amount of tinkering and fiddling required to make it work, and enthusiast users took immense pride in their abilities to make these complicated products function. Consequently, a “machine-oriented” or “system-oriented” approach met with little resistance and became the development norm.

Today, however, all that has changed dramatically. Users are apt to have little technical knowledge of computers and mechanical devices, little patience for tinkering with the product just purchased, and completely different expectations from those of the designer. More important, today's user is not even remotely comparable to the designer in skill set, aptitude, expectation, or almost any attribute that is relevant to the design process. Where in the past, companies might have found Ph.D. chemists using their products, today they will find high-school graduates performing similar functions. Obviously, “next-bench” design simply falls apart as a workable design strategy when there is a great discrepancy between user and designer, and companies employing such a strategy, even inadvertently, will continue to produce hard-to-use products.

Designers aren't hobbyist enthusiasts (necessarily) anymore; most are trained professionals educated in human-computer interaction, industrial design, human factors engineering, or computer science, or a combination of these. Whereas before it was unusual for a nontechnical person to use electronic or computer-based equipment, today it is almost impossible for the average person not to use such a product in either the workplace or private life. The overwhelming majority of products, whether in the workplace or the home, be they cell phones, DVRs, web sites, or sophisticated testing equipment, are intended for this less technical user. Today's user wants a tool, not another hobby.

Reason 3: Designing Usable Products Is Difficult

The design of usable systems is a difficult, unpredictable endeavor, yet many organizations treat it as if it were just “common sense.”

While much has been written about what makes something usable, the concept remains maddeningly elusive, especially for those without a background in the behavioral or social sciences. Part art and part science, usability is something about which everyone seems to have an opinion, and everyone knows how to achieve it—that is, until it is time to evaluate the usability of a product, which requires an operational definition and precise measurement.

This trivializing of usability creates a more dangerous situation than if product designers freely admitted that designing for usability was not their area of expertise and began to look for alternative ways of developing products. Or, as Will Rogers so aptly stated, “It's not the things that we don't know that gets us into trouble; it's the things we do know that ain't so.”

When this book was first published in 1994, few systems designers and developers had knowledge of the basic principles of user-centered design. Today, most designers have some knowledge of—or at least exposure to—user-centered design practices, whether they are aware of them or not. However, there are still gaps between awareness and execution. Usability principles are still not obvious, and there is still a great need for education, assistance, and a systematic approach in applying so-called “common sense” to the design process.

Reason 4: Team Specialists Don't Always Work in Integrated Ways

Organizations employ very specialized teams and approaches to product and system development, yet fail to integrate them with each other.

To improve efficiency, many organizations have broken down the product development process into separate system components developed independently. For example, components of a software product include the user interface, the help system, and the written materials. Typically, these components are developed by separate individuals or teams. Now, there is nothing inherently wrong with specialization. The difficulty arises when there is little integration of these separate components and poor communication among the different development teams.

Often the product development proceeds in separate, compartmentalized sections. To an outsider looking on, the development would be seen as depicted in Figure 1.2.

Figure 1.2 Nonintegrated approach to product development

Each development group functions independently, almost as a silo, and the final product often reflects this approach. The help system will not adequately support the user interface, or it will be organized very differently from the interface. Or user documentation and help will be redundant, with little cross-referencing. Or the documentation will not reflect the latest version of the user interface. You get the picture.

The problem occurs when the product is released. The end user, upon receiving this new product, views it and expects it to work as a single, integrated product, as shown in Figure 1.3. He or she makes no particular distinction among the three components, and each one is expected to support and work seamlessly with the others. When the product does not work in this way, it clashes with the user's expectations, and whatever advantages accrue through specialization are lost.

Figure 1.3 Integrated approach to product development

Even more interesting is how often organizations unknowingly exacerbate this lack of integration by usability testing each of the components separately. Documentation is tested separately from the interface, and the interface separately from the help. Ultimately, this approach is futile, because it matters little if each component is usable within itself. Only if the components work well together will the product be viewed as usable and meeting the user's needs.

Fortunately, there have been advances in application development methodologies in recent years that emphasize iterated design and interdisciplinary teams. Plus there are great examples of cutting-edge products and services built around usability advantages that are dominating their markets, such as Netflix, eBay, Yahoo!, and the iPod and iPhone, as well as Whirlpool's latest line of home appliances. Their integration of components is a key contributor to their success.

Reason 5: Design and Implementation Don't Always Match

The design of the user interface and the technical implementation of the user interface are different activities, requiring very different skills. Today, the emphasis and need are on design skills, while many engineers possess the mind-set and skill set for technical implementation.

Design, in this case, relates to how the product communicates, whereas implementation refers to how it works. Previously, this dichotomy between design and implementation was rarely even acknowledged. Engineers and designers were hired for their technical expertise (e.g., programming and machine-oriented analysis) rather than for their design expertise (e.g., communication and human-oriented analysis). This is understandable, because with early generation computer languages the great challenge lay in simply getting the product to work. If it communicated elegantly as well, so much the better, but that was not the prime directive.

With the advent of new-generation programming languages and tools to automatically develop program code, the challenge of technical implementation has diminished. The challenge of design, however, has increased dramatically due to the need to reach a broader, less sophisticated user population and the rising expectations for ease of use. To use a computer analogy, the focus has moved from the inside of the machine (how it works) to the outside where the end user resides (how it communicates).

This change in focus has altered the skills required of designers. This evolution toward design and away from implementation will continue. Someday, perhaps skills such as programming will be completely unnecessary when designing a user interface.

These five reasons merely scratch the surface of how and why unusable products and systems continue to flourish. More important is the common theme among these problems and misperceptions; namely, that too much emphasis has been placed on the product itself and too little on the desired effects the product needs to achieve. Especially in the heat of a development process that grows shorter and more frenetic all the time, it is not surprising that the user continues to receive too little attention and consideration.

It is easy for designers to lose touch with the fact that they are not designing products per se, but rather they are designing the relationship of product and human. Furthermore, in designing this relationship, designers must allow the human to focus on the task at hand—help the human attain a goal—not on the means with which to do that task. They are also designing the relationship of the various product components to each other. This implies excellent communication among the different entities designing the total product and those involved in the larger experience of using the product in a life or work context. What has been done in the past simply will not work for today's user and today's technologies.

What is needed are methods and techniques to help designers change the way they view and design products—methods that work from the outside in, from the end user's needs and abilities to the eventual implementation of the product. That approach is user-centered design (UCD). Because it is only within the context of UCD that usability testing makes sense and thrives, let's explore this notion of user-centered design in more detail.

What Makes Products More Usable?

User-centered design (UCD) describes an approach that has been around for decades under different names, such as human factors engineering, ergonomics, and usability engineering. (The terms human factors engineering and ergonomics are almost interchangeable, the major difference between the two having more to do with geography than with real differences in approach and implementation. In the United States, human factors engineering is the more widely used term, and in other countries, most notably in Europe, ergonomics is more widely used.) UCD represents the techniques, processes, methods, and procedures for designing usable products and systems, but just as important, it is the philosophy that places the user at the center of the process.

Although the design team must think about the technology of the product first (can we build what we have in mind?), and then what the features will be (will it do what we want it to do?), they must also think about what the user's experience will be like when he or she uses the product. In user-centered design, development starts with the user as the focus, taking into account the abilities and limitations of the underlying technology and the features the company has in mind to offer.

As a design process, UCD seeks to support how target users actually work, rather than forcing users to change what they do to use something. The International Organization for Standardization (ISO) in standard 13407 says that UCD is “characterized by: the active involvement of users and a clear understanding of user and task requirements; an appropriate allocation of function between users and technology; the iteration of design solutions; multidisciplinary design.”

Going beyond user-centered design of a product, we should be paying attention to the whole user experience in the entire cycle of user ownership of a product. Ideally, the entire process of interacting with potential customers, from the initial sales and marketing contact through the entire duration of ownership through the point at which another product is purchased or the current one upgraded, should also be included in a user-centered approach. In such a scenario, companies would extend their concern to include all prepurchase and postpurchase contacts and interactions. However, let's take one step at a time, and stick to the design process.

Numerous articles and books have been written on the subject of user-centered design (for a list of our favorites, see the web site that accompanies this book, www.wiley.com/go/usabilitytesting). However, it is important for the reader to understand the basic principles of UCD in order to understand the context for performing usability testing. Usability testing is not UCD itself; it is merely one of several techniques for helping ensure a good, user-centered design.

We want to emphasize these basic principles of user-centered design:

An early focus on users and their tasks
Evaluation and measurement of product usage
Iterative design and testing

An Early Focus on Users and Tasks

More than just identifying and categorizing users, we recommend direct contact between users and the design team throughout the development lifecycle. Of course, your team needs training and coaching in how to manage these interactions. This is a responsibility that you can take on yourself as you become more educated and practiced.

Though a goal should be to institutionalize customer contact, be wary of doing it merely to complete a check-off box on one's performance appraisal form. What is required is a systematic, structured approach to the collection of information from and about users. Designers require training from expert interviewers before conducting a data collection session. Otherwise, the results can be very misleading.

Evaluation and Measurement of Product Usage

Here, emphasis is placed on behavioral measurements of ease of learning and ease of use very early in the design process, through the development and testing of prototypes with actual users.

Iterative Design and Testing

Much has been made of the importance of design iteration. However, iteration is not just fine-tuning late in the development cycle. Rather, true iterative design allows for the complete overhaul and rethinking of a design, through early testing of conceptual models and design ideas. If designers are not prepared for such a major step, then the influence of iterative design becomes minimal and cosmetic. In essence, true iterative design allows one to “shape the product” through a process of design, test, redesign, and retest activities.

Attributes of Organizations That Practice UCD

User-centered design demands a rethinking of the way in which most companies do business, develop products, and think about their customers. While currently there exists no cookie-cutter formula for success, there are common attributes that companies practicing UCD share. For example:

Phases that include user input
Multidisciplinary teams
Concerned, enlightened management
A “learn as you go” perspective
Defined usability goals and objectives

Phases That Include User Input

Unlike the typical phases we have all seen in traditional development methodologies, a user-centered approach is based on receiving user feedback or input during each phase, prior to moving to the next phase. This can involve a variety of techniques, usability testing being only one of these.

Today, most major companies that develop technology-based products or systems have product lifecycles that include some type of usability engineering/human factors process. In that process, questions arise. These questions and some suggested methods for answering them appear in Figure 1.4.

Figure 1.4 Questions and methods for answering them

Within each phase, there will be a variety of usability engineering activities. Note that, although this particular lifecycle is written from the viewpoint of the human factors specialist's activities, there are multiple places where collaboration is required among various team members. This leads to our next attribute of organizations practicing UCD.

A Multidisciplinary Team Approach

No longer can design be the province of one person or even of one specialty. While one designer may take ultimate responsibility for a product's design, he or she is not all-knowing about how to proceed. There are simply too many factors to consider when designing very complex products for less technical end users. User-centered design requires a variety of skills, knowledge, and, most importantly, information about the intended user and usage. Today, teams composed of specialists from many fields, such as engineering, marketing, training, user-interface design, human factors, and multimedia, are becoming the norm. In turn, many of these specialists have training in complementary areas, so cross-discipline work is easier and more dynamic than ever before.

Concerned, Enlightened Management

Typically, the degree to which usability is a true corporate concern is the degree to which a company's management is committed to following its own lifecycle and giving its guidelines teeth by holding the design team accountable. Management understands that there are financial benefits to usability and market share to be won.

A “Learn as You Go” Perspective

UCD is an evolutionary process whereby the final product is shaped over time. It requires designers to take the attitude that the optimum design is acquired through a process of trial and error, discovery, and refinement. Assumptions about how to proceed remain assumptions and are not cast in concrete until evaluated with the end user. The end user's performance and preferences are the final arbiters of design decisions.

Defined Usability Goals and Objectives

Designing a product to be usable must be a structured and systematic process, beginning with high-level goals and moving to specific objectives. You cannot achieve a goal—usability or otherwise—if it remains nebulous and ill-conceived. Even the term usability itself must be defined within your organization. An operational definition of what makes your product usable (tied to successful completion criteria, as we will discuss in Chapter 5) may include:

Usefulness
Efficiency
Effectiveness
Satisfaction
Accessibility
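One way to keep such a definition operational is to pair each attribute with a measurable criterion and a target threshold, then compare observed results against the targets. The sketch below is purely illustrative; the attribute names follow this chapter, but the metrics, threshold values, and observed figures are invented examples, not criteria prescribed by this book:

```python
# Hypothetical sketch: an operational definition of usability expressed as
# measurable objectives. Metrics and thresholds are invented examples.

usability_goals = {
    "usefulness":    {"metric": "users rating the product useful (%)",        "target": 80},
    "efficiency":    {"metric": "tasks completed within time budget (%)",     "target": 95},
    "effectiveness": {"metric": "first-attempt success rate (%)",             "target": 95},
    "satisfaction":  {"metric": "ratings of 4 or higher on a 5-point scale (%)", "target": 75},
    "accessibility": {"metric": "tasks completable with a screen reader (%)", "target": 100},
}

def evaluate(goals, observed):
    """Compare observed percentages against each goal's target."""
    return {name: observed.get(name, 0) >= spec["target"]
            for name, spec in goals.items()}

# Invented results from a hypothetical test round:
observed = {"usefulness": 85, "efficiency": 97, "effectiveness": 90,
            "satisfaction": 78, "accessibility": 100}
print(evaluate(usability_goals, observed))
```

With these invented numbers, every objective passes except effectiveness (90 percent against a 95 percent target), which points the team at where redesign effort should go next.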

This brings us full circle to our original description of what makes a product usable. Now let's review some of the major techniques and methods a usability specialist uses to ensure a user-centered design.

What Are Techniques for Building in Usability?

UCD comprises a variety of techniques, methods, and practices, each applied at different points in the product development lifecycle. Reviewing the major methods will help to provide some context for usability testing, which itself is one of these techniques. Please note that the order in which the techniques are described is more or less the order in which they would be employed during a product's development lifecycle.

Ethnographic Research

Ethnographic research borrows techniques from anthropology. It involves observing users in the place where they would normally use the product (e.g., work, home, coffee bar, etc.) to gather data about who your target users are, what tasks and goals they have related to your planned product (or enhancements), and the context in which they work to accomplish their goals. From this qualitative research, you can develop user profiles, personas (archetype users), scenarios, and task descriptions on which you and the design team can base design decisions throughout the development lifecycle.

Participatory Design

Less a technique and more an embodiment of UCD, participatory design employs one or more representative users on the design team itself. Often used for the development of in-house systems, this approach thrusts the end user into the heart of the design process from the very commencement of the project by tapping the user's knowledge, skill set, and even emotional reactions to the design. The potential danger is that the representative users can become too close to the design team. They begin to react and think like the others, or by virtue of their desire to avoid admonishing their colleagues, withhold important concerns or criticism.

A variation on this technique is to arrange short, individual workshops where users, designers, and developers work together on an aspect of design. For example, users, designers, and engineers might use workable models to determine together the best size and shape for the product.

Focus Group Research

Use focus group research at the very early stages of a project to evaluate preliminary concepts with representative users. It can be considered part of “proof of concept” review. In some cases it is used simply to identify and confirm the characteristics of the representative user. All focus group research employs the simultaneous involvement of more than one participant, a key factor in differentiating this approach from many other techniques.

The concepts that participants evaluate in these group sessions can be presented in the most preliminary form, such as paper-and-pencil drawings, storyboards, and/or more elaborate screen-based prototypes or plastic models. The objective is to identify how acceptable the concepts are, in what ways they are unacceptable or unsatisfactory, and how they might be made more acceptable and useful. The beauty of the focus group is its ability to explore a few people's judgments and feelings in great depth, and in so doing learn something about how end users think and feel. In this way, focus groups are very different from—and no substitute for—usability tests. A focus group is good for general, qualitative information but not for learning about performance issues and real behaviors. Remember, people in focus groups are reporting what they feel like telling you, which is almost always different from what they actually do. Usability tests are best for observing behaviors and measuring performance issues, while perhaps gathering some qualitative information along the way.

Surveys