One goal beats all others when designing a customer survey for a website: maximize the response rate. Low response rates can create actively misleading survey findings because they're likely to be based on a biased sample of your most committed users as opposed to most users (who have better things to do than take your survey).
It doesn't matter what you "learn" from a survey; you can't trust the data if it doesn't represent your users.
How can you get average users to respond? The highest response rates come when surveys are quick and painless. And the best way to reduce users' time and suffering is to reduce the number of questions.
Of course, you also must ensure that the survey is easy to operate and that users understand the questions and response options. If users misinterpret your writing, their answers will be misleading. Remember that an interactive questionnaire is a user interface: it should be designed on the basis of user testing and follow standard usability guidelines. (With quick user testing, you can pilot three iterations of a survey in an afternoon.)
One of the ten biggest U.S. banks recently subjected business customers to a 32-screen survey. And they weren't easy screens, either. Many screens required users to evaluate four different business banking services on six parameters. My test user gave up after three screens, saying, "I'm a small business owner. I don't have time for this." (Later, I persisted in completing the full survey myself, but only so I could collect a horrifying set of screenshots for my next book.)
Survey bloat is a natural consequence of having a diverse group of marketing managers, all of whom want customer feedback on their special issues. Please resist the temptation to collect all the information that anybody could ever want. You will end up with no information (or misleading information) instead.
Ask Fewer Questions — Maybe Only NPS
The simplest solution to survey bloat is to ask fewer questions. Ask questions that address only your core needs and skip the subtleties. Surveys are not great at gauging minor differences anyway — you need direct observation for that.
A recent article in Harvard Business Review, "The One Number You Need to Grow," documented that the vast majority of customer-satisfaction insight comes from answers to a single question: "How likely is it that you would recommend [X] to a friend or colleague?" In 13 of 14 case studies, this one question was as strong a predictor of customer loyalty as any longer survey.
The percentage of users who are "very likely" to recommend minus the percentage who are "very unlikely" to recommend is often called the Net Promoter Score (NPS). The point is that people with no strong feelings either way don't count much.
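The arithmetic is simple enough to sketch. The following is a minimal illustration, assuming the common convention of collecting answers on a 0-10 scale and treating 9-10 as "very likely" (promoters), 0-6 as "very unlikely" (detractors), and 7-8 as the middle group that doesn't count toward the score; the exact cutoffs are this convention's, not something the article specifies.

```python
def net_promoter_score(ratings):
    """Return NPS as a percentage: % promoters minus % detractors.

    Assumes 0-10 ratings; 9-10 count as promoters, 0-6 as detractors,
    and 7-8 (no strong feelings either way) are ignored.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: 4 promoters, 2 detractors, 2 passives out of 8 responses.
ratings = [10, 9, 9, 8, 7, 6, 3, 10]
print(net_promoter_score(ratings))  # prints 25.0
```

Note that the score can range from -100 (everyone a detractor) to +100 (everyone a promoter), and the passive middle only dilutes it.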
For discretionary intranet services, you could rephrase the question as "How likely is it that you would recommend [service X] to a colleague?"
For mandatory intranet services, you need a different question to assess satisfaction, because employees might "recommend" a problematic service simply because there are no other options.
Divide and Conquer
A second solution is to ask different questions of different visitors. If you have ten questions that absolutely need answers, divide them into five different questionnaires with two questions each that fit on a single screen. You can then randomly assign one of the five surveys to each visitor.
Your website is a computer: take advantage of its ability to run software and show different users different things. (Alternatively, you could run a different survey each weekday, assuming there are no significant differences between visitors on different days.)
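The assignment logic above fits in a few lines. Here is a small sketch, assuming a stable visitor identifier such as a cookie value (the ID and survey names are hypothetical); hashing the ID rather than drawing a fresh random number means a returning visitor sees the same two-question survey every time.

```python
import hashlib

# Five short questionnaires, two questions each (names are placeholders).
SURVEYS = ["survey_a", "survey_b", "survey_c", "survey_d", "survey_e"]

def assign_survey(visitor_id: str) -> str:
    """Deterministically map a visitor ID to one of the five surveys.

    A hash spreads visitors roughly evenly across surveys, and the
    same ID always yields the same survey on repeat visits.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return SURVEYS[int(digest, 16) % len(SURVEYS)]

print(assign_survey("visitor-12345"))
```

Using a deterministic hash instead of `random.choice()` is a design choice: it avoids showing a returning visitor a different survey mid-session, while still distributing traffic evenly across the five questionnaires.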
The only time you must present multiple questions to the same user is when you want to run regression analyses or other multivariate statistics. But, in my experience, most Web people greet any mention of multivariate stats with blank stares, so it's probably rare that you'll actually use them.
Short surveys are better surveys. They certainly provide better data than bloated surveys, which few people will finish (if they start them at all).