Summary: It's more difficult to conduct usability studies with experienced users than with novices, and the improvements are usually smaller. Still, improving expert performance is often worth the effort.
Alice Davey sent me this question:
I've been to a couple of the courses including "Application design", which were very good. I am currently designing an XXX application which users will quickly become "expert" in as they will use it frequently.
I want to do usability testing from an early stage, but what's stumped me is how to test whether the application will serve the "expert" user well. By definition, all my test participants will be novice….
This issue is becoming increasingly important as the balance between novice and expert users tilts in the direction of the experts.
However, before we discuss usability for expert users, let's remember that nobody becomes an expert without having been a novice first. You can't forget about usability for new users, except in those rare cases where your user population is fixed, and you don't expect to sell any more copies of your product or hire any new staff to replace or supplement your existing people.
On the Web, the initial user experience is especially important: people — with ten years' experience using other sites — will be novices with respect to your site the first time they click through from a search engine. Unless your site meets their expectations and can be understood immediately, they'll beat a fast retreat back to the sites they already know.
Expert User Testing: The Basics
The basics of testing experts are the same as any other user testing:
- Recruit representative users
- Give them realistic tasks
- Ask them to think out loud (while you shut up and avoid biasing their behavior with untimely hints)
Also, it's usually best to first test with a handful of users and then iterate the design before the next round of testing. You should conduct these small studies as early as possible in the design process, using low-fidelity design prototypes. Paper prototyping might work even better with expert users than with novices, because the experts are used to performing the test tasks. They can thus focus even more on the problem at hand, as opposed to, say, whether a dialog box is presented on an index card or as a rectangle on the screen.
One difference from testing with novices is that the "realistic tasks" are obviously more advanced for expert users. You can have them dig deeper and solve larger and more difficult problems.
A second difference might not be as obvious: We almost always ask users to verbalize their thoughts in a running monologue as they use a design. This think-aloud process tells you how people interpret the design elements, whether any are confusing, and which ones are compelling or repelling. Although the test situation is a bit artificial, a good facilitator and engaging tasks can make users suspend disbelief and thus tell you the unvarnished truth.
Expert users, however, cause difficulties for think-aloud studies:
- Skilled behavior is often automated behavior (as discussed further in our seminar on the human mind and usability). When people are unaware of how they think about a certain behavior, they can't verbalize the reasoning behind their actions. For example, consider how you drive a car: If a facilitator asked you to think aloud during your drive to work, you typically wouldn't describe the steps you take to make the car go faster or slower. But, if you usually drive an automatic and were given a stick shift for the study, you'd probably verbalize your thoughts about using the clutch. In contrast, if you drive a car with manual transmission every day, odds are that you wouldn't mention the clutch during the think-aloud session. To get around this problem with expert users, you can use more elaborate usability methods to slowly (and tediously) analyze slow-motion replays of the interaction and thereby deduce what was going on in those cases where users didn't tell you.
- Expert users can turn into design critics and bend your ear with their opinions on the product (since they know it so well), as opposed to staying in the user role and engaging with the actual features. Gracefully accept their comments, while remembering that what people say and what they do can be very different. The reason for usability studies is to collect behavioral data, so guide participants back to the role of using the design as fast as possible.
If you're redesigning an existing product, you're in luck: there will already be expert users in the wild. All you need is to recruit them to come visit for an hour or two. You can follow traditional methods for recruiting test participants, with a few twists. Often, a list of registered customers can short-circuit the tedious process of cold-calling study prospects. Sometimes, you can also work with managers of key accounts to contact some of their users of your product. (If so, emphasize that you want average users — not the "star performers" that managers often like to show off.)
For a website, you might be able to recruit existing users by posting a request on the site or by asking for volunteers in your email newsletter (which has the added benefit of reaching your most loyal users).
Other options for recruiting expert users include industry tradeshows (particularly if your company has a booth), user group meetings, and social media (especially your own site's social features).
Growing Experts Quickly
The question that prompted this column concerned a new product. In this case, expert users don't exist, because the product doesn't have any users yet.
You can't recruit people who don't exist, so you'll have to create your own experienced users. First, you can violate the rule against testing internal staff. Usually, we don't want to test anyone involved with a design project or the company itself (except for intranet studies, of course). These people know too much, so we won't discover usability problems that stem from the misconceptions of outside users.
Knowing too much becomes a partial virtue in expert studies. Internal users are still not ideal test participants because their mental model of the system will have a much better structure than anything grown organically through mere exposure to the surface manifestation of the UI.
As an example, consider a website's structure: outside users must deduce the structure from the navigation design. Good navigation does give users a clue about the IA, but that's not their primary concern. Users are on a site to get things done, so they don't tend to pay close attention to the site structure, nor do they retain much of this knowledge from one visit to the next. In contrast, internal users' existing knowledge about the product line and the company's way of doing business forms a conceptual model that helps them grow a better mental model of the site structure.
Thus, when you test internal experts, remember that they'll know more and behave differently than external experts.
A second, and better, approach is to grow fresh experts by fast-tracked training of new users. You can assign a personal tutor to take test participants through the design and answer all of their questions. (Usually, we don't answer users' questions, because we don't want to bias their behavior, but if we're testing expert use and not initial use, we care less about how people overcome early difficulties. Thus, coaching is allowed.)
You can give test participants plenty of time to practice with your design before you start the study. Doing so can be cost-effective because you don't have to monitor them closely during practice sessions. Of course, if users need a week's practice to gain expertise, you have to pay for a week of their time, but at least you don't have to sit next to them all week. Just give them a cubicle and check in on them periodically (and give them a hotline number so they can call the tutor if they have questions).
Training and Manuals
Providing extra-supportive training is one way to quickly produce new experts. You can also use the regular training courses or instructional material if you're testing a product that offers such user support. For a new product, you can even make this part of the test plan and get empirical data on the usability of the training materials themselves.
The master guideline for all user research is to approximate the real world as closely as possible. So, if you have a manual or offer a training course, it's fine to let your test participants access these resources.
Unfortunately, in the real world, not all users will take the training course, even if you offer it upon initial application rollout. New hires who come onboard the following year are lucky if their colleagues take the time to fill them in on whatever they remember from the training sessions. (Sadly, most people remember very little from a course they attended a year ago.)
Similarly, you might offer a manual or users' guide at time of purchase, but this documentation might be long lost by the time new employees are asked to operate the machinery or application.
Thus, it's usually best to run some usability sessions with the docs and training, and some without this info.
Expect Smaller Improvements
The rule of thumb for usability's return on investment is that you can double the desired business metric (such as conversion rate) the first time you conduct user testing. Particularly on websites, you'll usually find at least one horrendous user-repelling design element that insiders never find problematic, but that's costing the company a fortune in lost business.
For expert users, the improvements from usability research are typically lower. The good news is that human beings are incredibly flexible and adaptive creatures. We live from Greenland to Equatorial Guinea and we can use Linux if we try hard enough. People who've used a software product for a decade will have invented workarounds and tricks to overcome its design flaws. And they will have internalized many of the arbitrary rules behind the UI. Many people are gluttons for punishment and grow to like bad design so much that they resist the change to something better. After all, they already know how to use the difficult UI, so why change to an easy one that will require some amount of learning? (Users have a strong bias in favor of doing instead of "wasting" time learning.)
Because experienced users will have adapted to the old design, the potential for enhancing their performance is usually less than the 100% we often get for new website visitors. On average, a 1/3 improvement is more realistic.
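To make these rules of thumb concrete, here's a minimal sketch. The 100% (novice) and 1/3 (expert) improvement figures come from the estimates above; the baseline conversion rate is an invented assumption, used only for illustration:

```python
# Illustrative only: the baseline conversion rate below is assumed, not measured.
def improved_metric(baseline: float, relative_gain: float) -> float:
    """Apply a relative improvement (e.g., 1.0 = +100%) to a baseline metric."""
    return baseline * (1 + relative_gain)

baseline_conversion = 0.02  # assumed 2% conversion rate before any testing

# Doubling: the rule-of-thumb gain from a first usability study of novice visitors.
novice_focused = improved_metric(baseline_conversion, 1.0)

# Roughly a 1/3 gain: the more realistic expectation for experienced users.
expert_focused = improved_metric(baseline_conversion, 1 / 3)

print(f"novice-focused redesign: {novice_focused:.1%}")
print(f"expert-focused redesign: {expert_focused:.2%}")
```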
Should You Cater to Experts?
It's more expensive to study usability for expert users, and the expected improvement is smaller. So why do it? Several reasons:
Users are novices for a short time but experts for a long time — at least for any product that they continue to use. Thus, the (smaller) benefit of better expert usability continues to accrue for more years and will eventually sum to much more than the one-time gain from novice improvements.
Some products have a large installed base with a large existing pool of users that's not expected to grow much. To compute the true gain from any usability advances, you multiply the per-user gain by the number of users, so this scenario also favors a focus on expert users.
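The arithmetic behind these first two reasons can be sketched as a back-of-the-envelope calculation. Every number here is a hypothetical assumption, chosen only to show how a smaller per-user gain, multiplied across a large user base and many years of continued use, can outweigh a larger one-time novice gain:

```python
# Hypothetical figures throughout: per-user gains, user counts, and years are
# assumptions for illustration, not measurements.
def total_gain(per_user_gain: float, num_users: int, years: int = 1) -> float:
    """Total value of a usability improvement: per-user gain x users x years."""
    return per_user_gain * num_users * years

# One-time novice improvement: a larger per-user gain, realized once.
novice_total = total_gain(per_user_gain=100.0, num_users=5_000, years=1)

# Smaller expert improvement that keeps accruing while the product stays in use.
expert_total = total_gain(per_user_gain=33.0, num_users=5_000, years=10)

print(novice_total, expert_total)
```

Under these assumptions the expert-focused improvement eventually sums to several times the one-time novice gain, which is exactly the accrual argument above.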
Some products see heavy, repeated use. An app for call center reps is a classic example. Other products might see only intermittent use, but the impact of good vs. poor user performance is immense. Error handling in industrial control rooms is a good example here. In these cases, users might be highly trained and novice usability less of a concern, but it's crucial to nail expert usability.
Finally, heavy users might account for a disproportionately large share of profits, particularly on those e-commerce sites that cultivate loyal customers. On such sites, you want to make frequent, big purchases particularly easy.
Whatever the reason, it's often worth investing the extra usability resources to improve the user experience for experts.