As long as you're testing within a single country, there's no reason to expend resources traveling to multiple cities and conducting the same usability study again and again. You'll simply observe the same behaviors repeatedly, and learn nothing new. Better to save your budget and spend that money on new tests of either additional design ideas or your competitors' designs.
This conclusion -- that the test location doesn't matter -- differs from the usual lesson of market research, where you find different results in different regions of the country. It's therefore common to conduct focus groups in 4 to 5 cities, or more if the budget allows.
Because traditional wisdom recommends conducting research in multiple locations, we've done so for many projects over the years. But, except for the few special cases discussed below, we've always identified the same usability findings, no matter where we tested. By now, we can clearly conclude that it's a waste of money to do user testing in more than one city within a country.
Behavior vs. Opinion
Why does usability differ from market research when it comes to the number of required study locations? Because with usability, we test behavior, not opinion. Further, we test that behavior with a defined artifact (i.e., a specific user interface).
People obviously have different attitudes in different regions, including differences in what they'll pay for a given product and in how many people will want the product in the first place.
But when it comes to reacting to a set of interaction design options, people usually interpret the screen elements the same way, no matter where they live. What's easy in one city is just as easy in another city. For example, breadcrumbs facilitate navigation of hierarchical websites equally well in Los Angeles, New York, or Boise, Idaho. You don't need to test your breadcrumb design everywhere. Similarly, many Web users rely on search, not because they live in a rural or an urban environment, but because search is an inherently useful way for users to gain control of a vast and diffuse information space.
Why One Location is Enough
To show why a single testing location is sufficient, we'll use a parking meter as an example.
In some cities, paying for parking would be considered outrageous: like charging for the air you breathe. In other cities, 25 cents per hour would be acceptable, while in yet other cities, several dollars per hour might be the norm. So, doing market research on parking fees would definitely have to be done as a separate project for each city.
But, let's say that we're testing parking meter usability: Can users determine where they're supposed to insert their money? Can they understand the instructions about the cost per hour? Do they notice the display showing the time they've paid for? Do they understand that the number changes to indicate remaining time?
The answers to these questions are crucial for the meter's interaction design. The answers would also be the same everywhere, which is why the design team would only need to test in one location.
In some cases, multiple tests might be beneficial. First, if a parking meter is intended for a region that had never before charged for parking, it might be a good idea to test with users who were completely new to the parking meter concept. Novices would doubtlessly encounter more usability problems than more experienced users. Second, cities with very high parking fees might need a feature that lets users pay with a credit card or dollar bills instead of coins; this different user interface would require a different test. Note: you can meet both of these additional testing needs without traveling to a new city -- it's a matter of which users you test, and which features you ask them to try.
In most cases, differences are due to diversity in the users' circumstances, not to geographical variation. It's therefore best to recruit a diverse set of users: some experienced users, some novices; some young, some old; some doctors, some nurses; some loyal customers, some who swear by your competition. Usually, you can cover the spectrum of user profiles in one location. What matters is the differences between people and their behaviors, not the differences between cities.
Another reason to limit your tests to your preferred city? It's usually where you're based. When you test in your own city, it's much easier for other members of the design and development team to observe the test. Yes, in theory they could watch a video recording. In practice, however, nothing beats watching a customer live as they use your design. Having the direct, personal experience of observing user test sessions is a powerful motivator that gets team members to buy into the usability findings. Thus, even if there might be a theoretical benefit from testing in another city, you'll usually get more usability findings implemented if you test on your home turf.
When to Test Elsewhere
As often in usability, there are exceptions to the general rule. In a few cases, test location does matter.
One such case is when a single industry dominates an area to the extent that it's considered a company town. Such dominance might be due to a single company or to several similar companies. Examples:
Detroit, if you're testing a car site
Washington, DC, if you're testing a government site
Hollywood, if you're testing a movie site
Downtown Manhattan, if you're testing an investment site
Brussels, if you're testing a European Union site
Silicon Valley, if you're testing a technology site
As an example of the latter, I remember a test I ran about 10 years ago. In recruiting our test users, we followed the usual procedures and screened out anybody working in usability, Internet marketing, interface design, graphic design, or programming. Unless they're your target customers, you don't want such people in a test because they can't stay within their role as test users. They always have to be critics as well, and comment on how your design compares with their pet theories on what makes a good website. The entire idea of usability testing is to observe how customers use your interface, not to hear them speculate on how other people might use it.
In this case, despite our best screening efforts, one test participant turned out to be just such a critic. Although she met all of our qualification criteria, toward the session's end it emerged that her roommate was a marketing manager at Yahoo! She had become a Web insider by osmosis.
In general, the problem with company towns is that the locals know too much about the industry and its products. They are also too interested in industry gossip, and might actually read arcane PR announcements that average customers would never notice.
Because of these differences, locals have a much easier time using your website than people in other cities. To get a realistic impression of the user experience for 95% of your customers, you have to test in a location that's not dominated by your industry.
Another case where test location matters is international usability, which can obviously be studied only by testing in multiple countries.
For intranets, it's often best to test the design at both the corporate headquarters and at a field or branch office. Employees at non-HQ locations frequently have different levels of knowledge about company events, and they might also use applications differently or emphasize different features. Furthermore, including people outside HQ is a sound political move for the intranet team's usability effort because it makes the intranet seem less like something imposed from above.
Finally, some products are simply not used (or are used very differently) in some locations. For example, you shouldn't test a website selling a particularly powerful home heating system in Florida, where most people don't have experience with high-end heaters. Similarly, you shouldn't test a tourist site in the city it's promoting, because it's targeted at travelers coming from elsewhere.
Despite such exceptions, the general rule remains clear: Usability findings are typically the same, no matter where you test. So, save your travel budget and conduct your studies in a single city. If you don't believe me, you can always test your next design version in a different location and see if the change in venue generates any new findings. Chances are, it won't.