If product teams don’t know what to do with usability results, they’ll simply ignore them. It is up to practitioners to write findings in a clear, precise, and descriptive way that helps the team identify the issue and work toward a solution.
One of the keys to helping teams improve designs through usability is making results actionable. Running a test, analyzing results, and delivering a report is useless if the team doesn’t know what to do with those results. Usability practitioners — particularly those new to the field — sometimes complain that teams don’t act on their studies’ results. While this may be due to a myriad of issues, it is often caused or exacerbated by problems with the findings themselves.
Usability findings have to be usable themselves: we need meta-usability. Below are 5 tips for practitioners, old and new, to improve the usability of their results. These tips are also handy for managers, clients, product teams, and customers of usability reports who need to assess the value of the reports they receive.
Be specific
Vague findings don’t give product teams much to work with. A lack of detail or explanation can leave teams wondering what the problem was, or at a loss about how to fix it.
A finding of “Registration was hard” doesn’t identify the problem or hint at solutions. Why was registration hard? Could users find it? Were fields for existing users to log in confused with fields for new users to register? Were there too many questions? Were field labels unclear? Did the form ask for information the user didn’t have, didn’t know, or didn’t want to share? Were any steps unexpected? Were buttons poorly placed?
Findings need to be specific: “The button to register for the site had poor color contrast and faded into the page background.” Whenever possible, identify the specific area of the design, flow, or interaction that caused the user to have a problem.
Don’t blame the user
It’s easy to fall into the trap of writing results in relation to the user. “The user was able to do this.” “Three users could not find that.” This type of finding focuses on the user, rather than on the design. This is a problem because it seems to blame the user. It’s very easy for a team member to read a finding and think, “OK, that user couldn’t find the link, but others can” and dismiss the issue altogether.
This also makes it hard for teams to know how a user activity translates into a change in the design. Findings should explain the elements of the design that confused users or led them down the wrong path. Was the navigational structure of the site unclear? Was a label or link poorly named? Did the way the site was organized match the users’ expectations?
When starting a usability session, we always tell the test user, “we’re not testing you, we’re testing the system.” This is not propaganda — it’s really the most fruitful way of thinking about the test results.
Look for the bigger picture
It is easy to become so focused on the details in a usability test that a huge issue goes unnoticed. The smaller issues are important to identify, but not at the expense of seeing larger problems with the design. For example, changing button designs and link names may improve steps in a process, but it can’t fix a process that doesn’t match user expectations or meet their needs.
Focusing too tightly on the details in a report can cause teams to add many band-aids to a design that’s really suffering from a broken leg. If users had trouble at every step, perhaps it’s the overall flow or structure that’s to blame, rather than small details along the way.
The reality is many projects don’t have the time, resources, or budget to fix large issues with designs — at least not in the near future. So keep the smaller details in the report, but include those findings about larger, over-arching issues so they don’t get overlooked.
Help identify solutions
On most teams, the usability professional’s job is to identify issues with the design; fixing the design is the job of design and development. However, usability experts often have unique expertise in thinking about design solutions. They have first-hand knowledge of what worked — and what didn’t — in usability testing and may have years of experience in understanding what does and doesn’t work in a design. They can offer insights about potential design solutions.
However, usability reports suffer when usability practitioners overstep their roles. Findings should not take the form of elaborate wireframes reworking the whole design. Handing over a set of new wireframes rather than a list of findings can cause resentment on the team. A quick mock-up here and there to illustrate a point is acceptable, but a full redesign document can quickly distract the team from the value of the findings.
Work closely with the design and development team, rather than simply delivering a report and walking away from the project. When results are presented as a discussion, the usability expert, who witnessed first-hand the problems (and successes) users had with the design, can offer expert insight into which design solutions may or may not address the problems observed. Schedule and participate in meetings where the team decides how to address issues that came up in testing. Better yet, invite team members to usability sessions and debrief with the team between sessions.
Adding redesign recommendations to usability reports is another option. Such recommendations can help the team understand the issue and start to think of potential solutions. Suggest moving a button or changing a label, combining navigational categories or writing more explicit link names. But present recommendations as recommendations. Label them as such and explain to the team in writing or in person that the recommendations are intended to jumpstart thinking about design solutions and to illustrate usability issues, and are not presented as the only or best solution. A creative designer may come up with something even better.
Organize and rank findings
Not every issue discovered through usability testing is equally important. Further, a usability report may have 5 or 100 findings, depending on the scale of the study, the design tested, and the usability practitioner. Teams need a way to parse the findings and discover 1) what’s relevant to the screens, designs, or elements they’re responsible for, and 2) which problems were the biggest issues from a usability perspective.
Group similar findings together: there may be a section about navigational issues and other sections about particular pages or task flows. Beyond this, findings should also be ranked by severity. Was the problem a slight hiccup that caused one user to stumble? Was it seen in only 2 sessions, but derailed those users’ entire experience? Was it something so big that the product can’t be successful unless the issue is addressed? Ranking findings as low, medium, or high severity helps the team understand which critical issues the usability study exposed. Don’t forget to include positive findings as well, letting the team know what’s already working.
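For teams that keep findings in a spreadsheet or tracker, the grouping-and-ranking idea above can be sketched as a tiny data structure. This is only an illustration, not part of any standard reporting tool; the `Finding` record, the `organize` helper, and the severity labels are all hypothetical names:

```python
from dataclasses import dataclass
from itertools import groupby

# Hypothetical severity scale; lower number = more urgent. "positive"
# findings are kept in the report but sorted last within each group.
SEVERITY_ORDER = {"high": 0, "medium": 1, "low": 2, "positive": 3}

@dataclass
class Finding:
    area: str         # design area, e.g. "Registration" or "Navigation"
    severity: str     # "high", "medium", "low", or "positive"
    description: str  # specific, design-focused wording

def organize(findings):
    """Group findings by design area, worst problems first within each group."""
    ranked = sorted(findings, key=lambda f: (f.area, SEVERITY_ORDER[f.severity]))
    return {area: list(group) for area, group in groupby(ranked, key=lambda f: f.area)}

report = organize([
    Finding("Registration", "low",
            "The 'Submit' label is vague; 'Create account' is clearer."),
    Finding("Registration", "high",
            "The register button has poor color contrast and fades into the background."),
    Finding("Navigation", "positive",
            "Top-level categories matched users' expectations."),
])
```

The point of the sketch is the sort key: grouping by area answers “what’s relevant to me?” and ranking within each group answers “what do I fix first?”.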
Accrue future value from descriptive reports
The advice in this article provides immediate value by increasing the chance that usability insights are acted upon in the current design phase. This increases the ROI of the organization’s usability investment by making the product better, resulting in higher conversion rates, higher customer satisfaction, fewer errors, and other measurable benefits. (Findings that are not acted upon don’t generate profits.)
Better descriptions of the study findings also provide future value by enhancing the organization’s cumulative knowledge about its customers. If you maintain an archive of usability findings, people can draw on these insights as they plan and execute future design projects. There’s no reason to make the same design mistake a second time.