Summary: Users now do basic operations with confidence and perform with skill on sites they use often. But when users try new sites, well-known usability problems still cause failures.
Enemies of usability have two counter-arguments against design guidelines that are based on user research:
- "You're testing idiots — most users are smarter and don't mind complexity."
- "You were right in the past, but users have now learned how to use advanced websites, so simplicity isn't a requirement anymore."
I decided to put these claims to the test in a new study we're currently conducting. We'll use the new insights generated by the study to update our course on Fundamental Guidelines for Web Usability.
Because we're testing this year's sites with this year's users, the study automatically assesses the second claim.
We can't directly assess whether our study participants are idiots, since we don't subject them to an IQ test. But participants' comments during all of our studies these past 14 years indicate that our test users have mostly been plenty smart. Unless a specific study calls for participants with a different profile, we mostly recruit people with respectable jobs — an engineering consultant, an equity trader, a lawyer, an office manager, a real estate agent, a speech therapist, and a teacher, to take some of the job titles from the first week of our current study.
One part of the current study tests B2B sites since many of our seminar audience work on such sites. This time, we chose sites targeting dentists in clinical practice, IT managers from big corporations, and CEOs of small businesses. Thus, we have disproportionally many users with these job descriptions. They aren't stupid.
One way of quantifying the level of users we're currently testing is to look at their annual income. In our screening, we look at the user's personal income, rather than his or her household income. We also recruit an equal number of people in each of three income brackets: below $50,000, $50,000–99,999, and $100,000 or more. The following table compares our users with the entire U.S. population (according to the Census Bureau) within the study's target age range (20–60 years; we've covered kids, teens, and seniors in other research):
(Table: annual personal income distribution of our participants versus the U.S. population, per the Census Bureau; one-third of participants fall in each bracket.)
We're definitely testing people who are much more successful than the average. We decided to bias the study in favor of high-salary users for three reasons:
- We need to test many business professionals and doctors because so many of our seminar participants target these groups, whether for websites or intranets.
- Wealthy users have more money to spend and are thus more important to seminar attendees who work on e-commerce sites.
- Even conference attendees who target a broad consumer audience benefit from presentations that are based mainly on studies of wealthy users because that fact helps them overcome the "dumb users" objection when they take the guidelines back to their teams.
We're not neglecting poor people — we have enough of them in the study to learn about their needs. But our participant profile is clearly such that no one could claim that the findings don't apply to high-end users.
Improved User Skills
So, with the qualifications about our research out of the way, what have we found in recent studies? We've seen several indications that users are indeed getting a bit better at using the Web. Almost all users:
- are better at physical operations, such as mouse movements and scrolling;
- are more confident at clicking, and less afraid that they'll break something; and
- know the basics of using search and use it more often than we saw in the past.
Some users even exhibit expert behaviors, such as opening a second browser window to compare two websites or changing a PDF file's zoom level.
When performing common tasks on sites they often use, most users are incredibly fast and competent. This fact leads us to two interesting conclusions:
- Many sites are now good enough that users reward them with loyalty and frequent use.
- When people revisit such sites, they tend to do the same things repeatedly and develop a high degree of skilled performance — something we rarely saw on websites in the past.
As an example, one user failed almost every task on unfamiliar websites, yet was highly confident and extremely fast in using her bank's site to transfer money between two of her accounts.
Browsing and Research Skills Still Poor
Even though users are remarkably good at repeated tasks on their favorite sites, they're stumped by the smallest usability problems when they visit new sites for the first time.
People are very bad at coping with information architectures that deviate from their view of the problem space. They also fail to readjust their content interpretation to compensate for changing contexts. For example, when users jump from one IA area to another, they often continue to think that the information addresses the previous topic.
Users are also overwhelmed by the sheer amount of information that many sites dump on them. For example, a beginning investor tested E-Trade, which could be a great site to support his initial investments and might gradually grow his site involvement over time. Instead, E-Trade's first few pages were littered with scary jargon like "ADR" and "ETF." To escape, he clicked the Active Trading link, assuming this would help him understand how to trade. In fact, it took him to an area for highly experienced investors and it had even more mumbo-jumbo. So, this hot prospect concluded that he didn't dare open an E-Trade account.
First-time visitors to a site don't have the conceptual model needed to correctly interpret menu options and navigate to the appropriate place. Lacking this contextual understanding, they waste time in the wrong site areas and misinterpret the area content.
People's reading skills are the same as they have always been, emphasizing the importance of writing for the Web. In earlier research, we have studied lower-literacy users, but even the higher-literacy users in our current study had problems with the dense content on many sites. For example, when testing NASA.gov, we asked users to find out when the rings around Saturn were formed. One user did find a page about Saturn, but ended up picking a wrong answer, 1980, which is when additional ringlets were discovered.
To help new users find their way, sites must provide much more handholding and much more simplified content.
Making comparisons is one of the most important tasks on the Web, and yet users have great difficulty doing so on most sites. The test participants were particularly happy with those websites that do the comparing and consolidating for them, like kayak.com.
Why worry about new users' ability to understand your site when your experienced users are clearly having a jolly old time performing frequent tasks? Because people develop into loyal, experienced users only after passing through the new-user stage. To grow your business, you have to accommodate first-time visitors for whom small difficulties loom large and often spell defeat.
Also, it's important to expand your loyal users' interaction vocabulary to further increase their loyalty. Because they move so fast, experienced users don't waste much time learning new features. Users have tunnel vision on their favorite sites: unless a new feature immediately proves its worth, users will stick to safe, familiar territory where they can quickly accomplish their tasks and leave.
By now, our test participants have extensive experience using the Web (mostly 3+ years), and they're still running into substantial problems online. Waiting for people to get even more experience is not likely to resolve the issues. Websites are just too darn difficult.
Users live by search, but they also die by search.
People turn to search as their first step — or as their second step, if their first attempt at navigating fails. Users typically formulate good initial queries, and vaguely understand how to tickle the search engine into coughing up desired sites when they appropriately modify their main keywords. For example, in our new study, a user looking for a modest gift for a football fan searched for "football trinket." Five years ago, such a user would most likely have searched "football" and been buried by the results.
Still, today's users rarely change their search strategy when the initial query fails. They might modify their first attempt, but they typically stick with the same general approach rather than trying something genuinely new.
For example, one user tested the Mayo Clinic's site to find out how to ensure that a child with a milk allergy would receive sufficient calcium. The user attempted multiple queries with the keyword "calcium," but never tried the words "milk" or "allergy."
Also, users are incredibly bad at interpreting SERP listings (SERP = Search Engine Results Page). Admittedly, SERPs from Google and the other main search engines typically offer unreadable gibberish rather than decent website descriptions. Still, an expert searcher (like me) can look at the listings and predict a destination site's quality much better than average users.
When it comes to search, users face three problems:
- Inability to retarget queries to a different search strategy
- Inability to understand the search results and properly evaluate each destination site's likely usefulness
- Inability to sort through the SERP's polluted mass of poor results, whether from blogs or from heavily SEO-optimized sites that are insufficiently specific to really address the user's problem
Given these difficulties, many users are at the search engine's mercy and mainly click the top links — a behavior we might call Google Gullibility. Sadly, while these top links are often not what they really need, users don't know how to do better.
I use "Google" in labeling the behavior only because it's the search engine used by the vast majority of our test users. People using other search engines have the same problems. Still, it's vital to reestablish competition in the search engine field: it would be a tragedy for democracy to let 3 guys at one company determine what billions of people read, learn, and ultimately think.
Our work is generating many interesting new findings on questions such as: What makes a website credible? What inspires user loyalty? We're running more studies to dig into these issues, which are among the most important for improving website profitability over the next decade. Once we've analyzed the mountains of data we're collecting, we'll announce the new findings at our upcoming usability conference.
For now, one thing is clear: we're confirming more and more of the old usability guidelines. Even though we have new issues to consider, the old issues aren't going away. A few examples:
- Email newsletters remain the best way to drive users back to websites. It's incredible how often our study participants say that a newsletter is their main reason for revisiting a site. Most business professionals are not very interested in podcasts or newsfeeds (RSS).
- Opening new browser windows is highly confusing for most users. Although many users can cope with extra windows that they've opened themselves, few understand why the Back button suddenly stops working in a new window that the computer initiated. Opening new windows was #2 on my list of top-10 Web design mistakes of 1999; that this design approach continues to hurt users exemplifies both the longevity of usability guidelines and the limited improvement in user skills.
- Links that don't change color when clicked still create confusion, making users unsure about what they've already seen on a site.
- Splash screens and intros are still incredibly annoying: users look for the "skip intro" button — if not found, they often leave. One user wanted to buy custom-tailored shirts and first visited Turnbull & Asser because of their reputation. Clicking the appropriate link led to a page where a video started to play without warning and without a way to skip it and proceed directly to actual info about the service. The user watched a few seconds, got more and more agitated about the lack of options to bypass the intro, and finally closed the site and went to a competitor. Customer lost.
- A fairly large minority of users still don't know that they can get to a site's homepage by clicking its logo, so I still have to recommend having an explicit "home" link on all interior pages (not on the homepage, of course, because no-op links that point to the current page are confusing — yet another guideline we saw confirmed again several times last week). It particularly irks me to have to retain the "explicit home link" guideline, because I had hoped to get rid of this stupid extra link. But many users really do change very slowly, so we'll probably have to keep this guideline in force until 2020 — maybe longer. At least breadcrumbs are a simple way to satisfy this need.
- People are still very wary, sometimes more so than in the past, about giving out personal information. In particular, the B2B sites in this new study failed in exactly the same way as most B2B sites in our major B2B research: by hitting users with a registration screen before they were sufficiently committed to the site.
- Non-standard scrollbars are often overlooked and make people miss most of the site's offerings. The following screens show two examples from last week's testing.
On the Carl's Jr. hamburger chain website, we asked users to look up nutritional information for various meals. Many participants thought the quick item view menu covered only breakfast items, because those were the only choices visible without scrolling (see above). Users overlooked the non-standard scrollbar, and instead often suffered through the PDF files available through the nutrition guide link. (These PDF files caused many other problems, confirming more age-old usability guidelines. That said, some users are now skillful enough to adjust PDF views so that they're slightly more readable. Still, it's a painful process.)
On the Sundance Resort's site, one user was thrilled to see photos of celebrations hosted at the resort. She eagerly clicked through all five visible thumbnails (see above), but never noticed the small triangles at the top and bottom that let users scroll to see more photos (the circle at left shows the bottom arrow at full size).
Web usability guidelines are not the only guidelines our new studies confirm. On VW's site, we asked participants to use the configurators to customize a car according to their preferences. Unfortunately, this mini-application violated some of the basic application usability guidelines, causing people many problems.
As the above screenshot shows, users can select their car's wheel style from two options (shown below the car). This simple operation was difficult and error-prone, however, because the option for the wheel that's currently mounted on the car was grayed out — a GUI convention that's supposed to mean that something is unavailable, not that it's the current selection. It would have been much better to show both available wheels at all times, placing a selection rectangle — or some other graphical highlighting convention — around the current selection. (Poor feedback is #4 on my list of top-10 mistakes of application design.)
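The fix is a feedback pattern, not a visual tweak: keep every option enabled and mark the current one as selected. Here is a minimal Python sketch contrasting the two rendering approaches; all function and option names are hypothetical, not VW's actual code.

```python
# Sketch of two ways to show which wheel is currently mounted.
# Names are hypothetical; this illustrates the guideline, not VW's code.

def render_grayed_out(options, current):
    """VW-style: disable the mounted wheel -- users read this as 'unavailable'."""
    return [{"label": opt, "disabled": opt == current} for opt in options]

def render_selected(options, current):
    """Guideline-following: keep both options clickable and highlight the
    current one with an explicit 'selected' state."""
    return [{"label": opt, "selected": opt == current} for opt in options]

wheels = ["Standard wheel", "Sport wheel"]

# With graying out, the current choice looks like it cannot be chosen at all:
print(render_grayed_out(wheels, "Sport wheel"))
# With explicit selection, both stay available and the state is unambiguous:
print(render_selected(wheels, "Sport wheel"))
```

Either version is equally easy to build; the difference is purely which convention the user reads. The "selected" flag maps directly onto the selection rectangle recommended above.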
Interface conventions exist for a reason: they allow users to focus on your content (in this case, the car and its options). When all interface elements work as expected, users know how to operate the UI to get the desired effect. Conversely, when you deviate from user expectations, you erect a great barrier between users and their ability to get things done. Some designers think this makes the site more exciting. In reality, non-standard design makes the site more frustrating and drastically reduces the user's chance of success. Users are thus more likely to quickly leave the site.
In VW's case, the designers probably suffered from a case of metaphor overload: the design mimics the experience of actually assembling a physical car in a real workshop. If you had two wheels on the workshop floor and mounted one on the car, then the chosen wheel would no longer be on the floor.
In reality, though, users are not grease monkeys. They're clicking on interface elements, and they expect the picture of a wheel to behave like a GUI element.
We're confirming hundreds more of the existing usability guidelines every week as our testing continues. Even though we have upscale users and it's a new study testing new sites, most of the findings are the same as we've seen year after year after year. Usability guidelines remain remarkably constant over time, because basic human characteristics stay the same.