The collective brainpower of the Internet is an awesome beast that used to manifest itself on Usenet newsgroups. Most of these groups have degenerated into spam, flames, and newbie ignorance. The Web has not yet evolved good ways of utilizing this power, since most so-called "community" sites are equally degenerate.
I have always been impressed by the Web knowledge exhibited by the Alertbox readers, so as an experiment in collective brainpower, I decided to collect solutions to a specific Web usability problem from my readers. The full problem statement is available as a sidebar, but briefly, the question was how to get feedback from users to determine usability and future design directions for a large Web archive of historical documents.
I received a large number of great suggestions and analyses, so even though a website has a larger "news hole" than a print publication, I have still had to be very selective. Here are the best solutions.
The Survey Form
Dave Koelle from Raytheon Systems Company writes:
The primary goal of users who will access the Pepper Library will be to access historical documents; additional indirect work, such as responding to a survey, may be glossed over or ignored. This is one of the reasons why collaborative filtering technologies, such as Firefly, were not as successful as anticipated.
Therefore, I think the survey should be as simple as possible, while giving the user the option to add more information. Perhaps the most important question (maybe, "How usable did you find this site?") should be a multiple-choice question, or a scale of 1 through 10. Asking for an actual typed response requires too much work on the user's part. There should, however, be an optional textbox for the user to volunteer feedback (and this may be your most useful feedback). I think this arrangement will appeal to users, thereby providing you with the most responses.
Jenna Burrell from Cornell University writes:
I run a web site which provides links to educational resources for high school students. I've had a user survey which has gone through several rewrites since I set it up in 1995.
What I've found most useful based on the survey responses I've received is to ask what topic the user was looking for and whether or not they were able to find the desired information. Often users will suggest a topic I hadn't thought of including. Sometimes they will list a topic that I do have information on and then note that they were not able to find it. This helps me to rethink the organization of that specific subject, the keywords I used in the description of the relevant links, and where I placed the relevant links the user wasn't able to find.
I also think it's important to include a textbox that allows users to fill in whatever questions or comments they have. It allows users to describe any problems they encountered and oftentimes it has helped me to think of new questions I want to ask in the survey.
And, most importantly, keep the survey short, or no one will want to answer it.
Jonathan Spencer from CyberArtisans writes:
Your survey must balance the desire to get the most information from each user with the tendency of most users not to fill out a form they don't have to fill out. To that end, I would suggest the following:
If you want a lot of information, offer your users something in return for filling out the survey form. Maybe a copy of the usability study, a printed copy of one of the documents or photographs, or something you feel comfortable distributing that you also think would be attractive to a user.
You might want to make the survey in several stages, each stage offering some "goodie" in return, or maybe require that the user fill out all 3 or 4 stages to get the goodie.
You might also create several sets of questions and serve them randomly, so that someone sitting next to a person filling out the form will encounter different questions when they take the survey themselves, forcing them to think rather than fill in what they remember the first person doing.
In any case, the survey should not be long, so the user can fill it out quickly and move on. You will get many more responses.
Arrange the site so the user fills out the survey immediately after using the site -- perhaps the user will encounter the survey after selecting the desired titles or abstracts and submitting the request.
If you don't want to offer goodies, then you should make the survey very short, just a very few questions that the user can deal with quickly. Again, it can be done in stages, especially if your users tend to come back repeatedly.
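Spencer's rotating-question idea reduces to serving one of several question sets at random. A minimal sketch in Python; the question pools and function name are invented for illustration:

```python
import random

# Invented question pools; each visitor is served one set at random,
# so two people sitting side by side see different questions.
QUESTION_SETS = [
    ["How easy was it to find the document you wanted?",
     "Which part of the collection did you use most?"],
    ["Did the search results match what you expected?",
     "What should we digitize next?"],
    ["How readable were the document pages on screen?",
     "Would you return to this site? Why or why not?"],
]

def pick_survey(rng=random):
    """Return one randomly chosen question set for a visitor."""
    return rng.choice(QUESTION_SETS)
```

The staged "goodie" variant would simply track which stage a returning user has reached and serve the next set in sequence instead of a random one.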
Very Small Surveys
John Arnold from Lex Vehicle Leasing writes:
When it comes to gathering feedback I find that the times when I bother to respond to a question are the times when it least interrupts my information gathering.
I dislike the method Microsoft use on their support website - they have a question at the bottom of the page asking "did this page give you the information you were looking for?". The question is large and seems to be given as much importance as the page content. As an information seeker I'm far more interested in the content than in filling in a survey. Worse still, giving a response to their question takes you to another page - one without the original page content. Thus I lose the information I came for. Unacceptable.
On the other hand, the idea of just asking a single question is good. No intrusive, privacy-bashing questions about name, address, sex, income, etc. Just find out what you really want to know: "Did this site perform?"
ZDNet and MSNBC among others often include a small survey question in a side bar on their pages. These really work for two reasons. Firstly, filling them in doesn't lose you the article information - the destination page includes the article again. Secondly, there is a reward for answering the question - you get to see the latest results. The second point is important. If there is a reward, however small, for answering the question then users will respond.
Most important of all, though, is that the user came for information. Take that off their screen at your peril!
Just as a quick technical solution - how about asking the question in a pop-up window? Give the results/reward in the same window. I am NOT proposing that each page load should automatically pop-up a small question window - that's REALLY annoying. There should be a button to open the question window and the results/reward page should have a clear CLOSE button to get rid of it again.
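The single-question sidebar poll Arnold describes -- vote, then immediately see the latest results as the reward -- amounts to a small tally on the server. A hedged sketch in Python; the in-memory storage and names are invented:

```python
from collections import Counter

# In-memory tally standing in for whatever the server actually stores.
poll = Counter()

def vote(answer):
    """Record one vote, then return the current percentages so the
    respondent is rewarded with the latest results."""
    poll[answer] += 1
    total = sum(poll.values())
    return {a: round(100 * n / total) for a, n in poll.items()}
```

The returned percentages would be rendered on the destination page alongside the original article, so the reader never loses the content they came for.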
Don't Be Boring; Get Feedback on Live Alternatives
Michael "Mac" McCarthy, President of Web Publishing Inc. writes:
The biggest problem with reader surveys is that readers don't know what they want, can't articulate what they want, and articulate things different from what they actually want, as revealed by how they later act when you give them what they thought they wanted.
A good way is to give people specific choices to look at, especially live choices, and see how they act. We made a dramatic change in how the home page of JavaWorld was organized, and tested it by offering the new version to visitors to the old version: "Test our new home page and tell us what you think!" They clicked through to the new design, then filled out a form saying whether they liked it or not; 94% said they preferred it strongly -- a forceful enough response to overwhelm our reservations.
Long surveys aren't the problem, boring surveys are. For various reasons we tend to ask generalized questions, in boring academic phrasings, with dry "objective" choices. The solution is not to hide the length of the survey with multi-page surveys ("Next..."), which just makes you feel like you're on a treadmill.
One solution is to at least sometimes ask an interesting question and give interesting choices. Be human, too.
"I thought this story was.... a) great! b) pretty good; c) whatever; d) yawn; e) a waste of electrons; f) n.a. - not my kind of story, so who am I to judge? g) other (____)."
Or give the background so I know why it matters: "We're stuck on a redesign -- give us a hand, will you?...
"We're considering a redesign that would create three columns on the screen: The middle column would have the article, the left would contain navigation and house ads, the right column paid advertising. This would obviously squeeze the text into a narrower column than it's in now. Some of us think this great -- shorter scan lines supposedly make this stuff easier to read whether on the screen or printed out. Others think it makes our (already long) stories seem REALLY too long, especially when you print it out. Help us figure out which design works better by checking these two samples and then telling us how you like it."
We've gotten better response to folksy quizzes than formalistic ones -- and more essays in reply to open-ended questions, and the answers seem more useful. The worst thing about most surveys is that once you've digested the responses, your most common reaction is "That's interesting -- I wonder what they meant by that answer?"
Beyond Simple Surveys
Susan Druding , webmaster of the Free Speech Movement Archives writes:
I'm not sure that [server log statistics are] the right approach to use. Until people access the information online, you won't really know what their usage will be. Having people access the real material is the only way to see what they really focus on. I would suggest that you gear what you put online, and in what order, to what people most frequently request in person in your real archives at Florida State. Study the access levels for the information that most interests people, and put that material up first, in descending order of requests, at your Claude Pepper site.
I would strongly recommend that you set up some sort of "C. Pepper Archives Newsletter" on your site as soon as you open the site. Assure people that by signing up they will ONLY hear news of your archives and site and that you will not share the list with others. People DO sign up for these newsletters! You will be building a base of interested people to whom you can send site news and SHORT surveys, or whom you can direct to a SHORT online survey when you have one ready. We have a newsletter set up on the FSM-A site and receive a few sign-ups a week. I have another Web site I do (on the art of Quilting) and I've had an immensely good response setting up a newsletter there: thousands have signed up. I keep it short, with no advertising, and focus on just a few topics in each issue.
Daniel Schwabe from PUC-Rio in Brazil writes:
First, select users that could be considered representative of your target audience, if at all possible.
Then, ask the users to describe, as briefly as possible, the tasks they are trying to perform while accessing the collection. Together with the tasks, they should describe the navigation steps and interface operations (when not trivial) they took to achieve their goals. Then ask them to evaluate this sequence (adequate, inadequate), including both aspects. For those who feel the path is not adequate in some sense, or the interface inappropriate, they should give a brief explanation why. If they want, they may suggest an alternative they feel would be better suited for their task.
The results of this survey would give you a feel for how well the intended set of tasks (as reported by the users) is supported by the site. There are several things that can be done with this result; we have developed a method whereby one can take these scenario descriptions and actually synthesize a navigation design for the site. But I believe this would already be outside the scope of the original request.
Adam C. Engst from TidBITS writes:
An on-the-spot survey regarding the usability of each section of the site might be useful. In essence, provide a means for users to rank each section of the site within that section rather than requiring them to complete a survey that's external to the content of the site. That takes advantage of the information being fresh in the users' minds, plus lowers the barriers to starting yet another task (the external survey). Obviously, the questions would have to be few in number and short, but I could also imagine an option for users to continue on with a more complete survey if they so desired.
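Engst's in-place, per-section ranking could be backed by a tiny aggregator on the server. A sketch in Python; the section names and functions are assumptions for the example:

```python
# Ratings collected in place on each page, keyed by site section.
ratings = {}

def rate_section(section, score):
    """Record one rating for a section, given right on that section's page."""
    ratings.setdefault(section, []).append(score)

def section_average(section):
    """Average rating for a section, or None if nobody has rated it yet."""
    scores = ratings.get(section)
    return sum(scores) / len(scores) if scores else None
```

Comparing the averages across sections would show which parts of the site need attention first.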
Ivan Handler from Automated Concepts, Inc. writes:
I specialize in deploying "Knowledge Management" systems to industry. So far, these are primarily variations on document management, which is really the service you are offering. In my experience with users both on and off the web, the most difficult issue to bridge is getting usable feedback from users. The primary reason for this is that when a user is accessing the document archive, it is because they have work to do. They get some kind of reward for their work (even if it is only not getting screamed at, or not getting screamed at as much); what do they get by taking the time to fill out a "usability" questionnaire?
At least in a business, you will get negative feedback, and plenty of it, if the system is unusable, since then people will not be able to do their work. A public access web site will more often get ignored rather than getting actual feedback. While positive feedback is nice, the negative feedback is important: those are the users who will probably not come back unless something is done.
My suggestion is based on the following ideas:
make it very easy to provide feedback;
provide some kind of feedback to the user (besides the automated thank-you notes currently in vogue);
demonstrate to the users that their input matters by showing how it actually provokes change;
quantify this system by measuring the number of different users who provide feedback and the number of actual changes made to your site as a result.
A way to do this is to provide a simple suggestion box form that has a link on each page. Let the users tell you, in simple or not-so-simple terms, how you could improve the site for them. Allow them to identify themselves with their email address. Then have an improvements page where you can document what has been changed recently, what is coming down the pike, what is being considered, and what was rejected and why. I think that publishing the suggestions related to a change is also a good idea (after you make sure that the user is open to the idea, or at least will allow the suggestion to be published anonymously). Also, when you actually consider a suggestion and decide whether to act on it, send a note to the user.
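Handler's suggestion box plus public improvements page amounts to a list of suggestions, each carrying a status. A minimal sketch in Python; the status labels and names are assumptions:

```python
# Suggestions filed from the per-page suggestion box form.
suggestions = []

def submit(text, email=""):
    """File one suggestion; the email address is optional."""
    entry = {"text": text, "email": email, "status": "under consideration"}
    suggestions.append(entry)
    return entry

def improvements_page():
    """Group suggestion texts by status for the public improvements page."""
    page = {}
    for s in suggestions:
        page.setdefault(s["status"], []).append(s["text"])
    return page
```

When a suggestion's status changes (to, say, "done" or "rejected"), the improvements page reflects it automatically, and the stored email address makes it easy to notify the contributor.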
This kind of mechanism will take some time to have a noticeable effect. On the other hand, people who really care about this site will be drawn into the site through this mechanism. They will spread the word, and you should not only get more hits, you should get more involvement.
Tracking the number of suggestions per time period over time should give you a nice increasing curve (unless the site is so spectacular that nobody has any suggestions, which is unlikely in my mind). While a nice academic study may allow the designer to publish a paper, I do not see it having much useful effect on your site.
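Tracking suggestions per time period, as Handler proposes, is just a matter of bucketing submission dates. A sketch in Python using ISO week numbers:

```python
from collections import Counter
from datetime import date

def suggestions_per_week(dates):
    """Count suggestions per ISO (year, week) bucket to chart the trend."""
    return Counter((d.isocalendar()[0], d.isocalendar()[1]) for d in dates)
```

Plotting these weekly counts over time yields the curve Handler describes.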
A "Friends of the Site" User Group
Bob Fabian from Robert Fabian Associates writes:
An alternative [to server stats] would be to use selective voting by users of the collection. An on-line "Friends of the Claude Pepper Library" could be established. Members should be chosen to represent the communities the Library is dedicated to serving. These "Friends" would have special access to the Library, and be encouraged to vote for those portions of the collection that should be first digitized.
The "Friends" would answer the question: How can the limited budget for digitizing be used to deliver the maximum value for the communities served by the Library?
That's the question that the Library should be asking. And the on-line "Friends of the Claude Pepper Library" would provide compelling answers to that question. They would also be the natural reference group from which to get "usability" feedback. They would be an obvious source for suggested improvements. And they could be used to review alternate site designs.
User Registration or Profiles
Linda G. Marsh from Compaq Computer Corporation writes:
The best way to gather qualitative feedback is to ask your users, and, in order to do so, you must know who they are. Once you know who they are, you can ask them a few simple questions, such as:
Why did you access our site?
Did you find what you needed?
What did you like about the site? (This is important. You don't want to fix something that's working.)
What did you dislike about the site?
How would you improve the site?
To satisfy my client's request to know who was using a Web-based course, we implemented a simple login procedure. The procedure asked for a unique identifying number (e.g., employee badge, student ID, or social security number) and the user's first and last name. Logging in was voluntary, and users were allowed to bypass the login procedure if they wanted to. The login procedure also explained why we were gathering data.
On subsequent login attempts, users only entered the identifying number. We didn't do any user validation, but interestingly, most users entered their correct names and identifying numbers. Relatively few chose to bypass the login procedure, and few entered bogus IDs. Once we had the list of users, we contacted them individually to solicit input about the course.
You won't need to contact many users to learn what works about your site and what doesn't. After you've gathered enough data, you can discontinue the login process.
The above paradigm is simple, can be implemented quickly, and will give you the data you need in a short time.
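Marsh's voluntary login reduces to a lookup keyed on the identifying number: the first visit records the name, return visits need only the ID, and nothing is validated. A sketch in Python; the store and names are invented:

```python
# ID -> name, standing in for whatever store the server actually uses.
users = {}

def log_in(user_id, name=""):
    """Register on first visit (if a name is given); on return visits
    the ID alone retrieves the stored name. No validation, by design."""
    if name and user_id not in users:
        users[user_id] = name
    return users.get(user_id)
```

The resulting `users` table is the contact list from which individual users can be solicited for input.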
If you include a feedback option in your Web site, you probably won't get much information. Feedback pages are the equivalent of background noise on the Web.
In addition to these suggestions for the problem of collecting user feedback, I also received many insightful comments on related issues. The best are available as sidebars to avoid making the main article even longer: