Editor’s note: This is a guest post by Jaci Paige Wilkinson.
Librarians are consummate teachers, helpers, and cheerleaders. We might glow at the reference desk when a patron walks away with that perfect article or a new search strategy. Or we fist pump when a student e-mails us at 7pm on a Friday to ask for help identifying the composition date of J.S. Bach’s BWV 433. But when we lead usability testing, that urge to be helpful must be resisted for the sake of recording accurate user behavior (Krug, 2000). We won’t be there, after all, to help users when they’re using our website for their own purposes.
What about when a participant gets something wrong or gets stuck? What about a nudge? What about a hint? No matter how much the participant struggles, it’s crucial for both the testing process and the resulting data that we navigate these “pain points” with care and restraint. This is particularly tricky in non-lab, lightweight testing scenarios. If you have only 10-30 minutes with a participant, or you’re in an informal setting, you, as the facilitator, are less likely to have the tools or the time to probe an unusual behavior or a pain point (Travis, 2014). However, pain points, even the non-completion of a task, provide insight. Librarians moderating usability tests must navigate these moments carefully to maximize the useful data they provide.
How should we move the test forward without helping but also without hindering a participant’s natural process? If the test in question is a concurrent think-aloud protocol, you, as the test moderator, are probably used to reminding participants to think out loud while they complete the test. Those reminders sound like “What are you doing now?”, “What was that you just did?”, or “Why did you do that?”. Drawing from moderator cues used in think aloud protocols, this article explains four tips to optimize computer-based usability testing in those moments when a participant’s activity slows, or slams, to a halt.
There are two main ways for the tips described below to come into play: either the participant specifically asks for help, or you intervene because of a lack of progress. The first case is easy because the participant has self-identified as experiencing a pain point. In the second case, watch for indicators that the participant is stalling: they stay on one page for a long time, or they keep pressing the back button. One frequently observed behavior that I never interrupt is a participant repeating a step or click-path even though it didn’t work the first time. This observation is important for two reasons: first, does the participant realize they have already done this? And second, if so, why do they think it will work the second time? Observe as many useful behaviors as possible before stepping in. When you do step in, use these tips in this order:
Tip 1: Ask the participant to reflect on what they’ve done so far.
Get your participant talking about where they started and how they got here. You can be as blunt as: “OK, tell me what you’re looking at and why you think it is wrong.” This particular tip has the potential to yield valuable insights. What did the participant THINK they were going to see on the page, and what do they think this page is now? When you look at this data later, consider what it says about the architecture and language of the pages this participant used. For instance, why did she think the library hours would be on the “About” page?
Notice that nowhere have I mentioned using the back button or returning to the start page of the task. Backtracking is often the ideal course of action; once users retrace their click-path, they can make new decisions. But that idea should come from the user, not from you. Avoid language that hints at a specific direction, such as “Why don’t you back up a couple of steps?” That sort of comment is a prompt for action rather than reflection.
Tip 2: Read the question or prompt again. Then ask the participant to pick out key words in what you read that might help them think of different ways to conquer the task at hand.
“I see you’re having some trouble thinking of where to go next. Stop for one moment and listen to me read the question again.” An immediate diagnosis of this problem is that there was jargon in the script that misdirected the participant. Could the participant’s confusion about where to find the “religion department library liaison” be partially due to the fact that he had never heard of a “department library liaison” before? Letting the participant hear the prompt a second or third time might allow him to connect language on the website with language in the prompt. If repetition doesn’t help, you can even ask the participant to name some of the important words in the prompt.
Another way to assist a participant with the prompt is to provide him with his own printed copy of the script. You can also ask him to read each task or question out loud: in usability testing, this direction has been observed to have “actually encouraged the ‘think aloud’ process” (Battleson et al., 2001). The think aloud process and its “additional cognitive activity changes the sequence of mediating thoughts. Instructions to explain and describe the content of thought are reliably associated with changes in ability to solve problems correctly” (Ericsson & Simon, 1993). Reading the prompt on a piece of paper with his own eyes, especially in combination with hearing you speak the prompt out loud, gives the participant multiple ways to process the information.
Tip 3: Choose a point of no return, and don’t treat it as a failure.
Don’t let an uncompleted or unsuccessful task tank your overall test. Wandering off with the participant will make the pace sluggish and lower the participant’s morale. Choose a point of no return, and have an encouraging phrase at the ready: “Great! We can stop here, that was really helpful. Now let’s move on to the next question.” There is an honesty to that phrasing: you demonstrate to your participant that what he is doing, even if he doesn’t think it is “right,” is still helpful. It is an unproductive use of your time, and his, to let him continue if you aren’t collecting any more valuable data in the process. The attitude you cultivate at a non-completed task or pain point will affect performance and morale for subsequent tasks.
Tip 4: Include a question at the end that allows the participant to share comments or reactions from the test.
This is a tricky and potentially controversial suggestion. In usability testing and user experience, the distinction between studying use and studying opinion is crucial. We seek to observe user behavior, not collect feedback. That’s why we scoff at market research and regard focus groups suspiciously (Nielsen, 1999). However, I still recommend ending a usability test with a question like “Is there anything else you’d like to tell us about your experience today?” or “Do you have any questions or further comments or observations about the tasks you just completed?” I ask it specifically because if there were one or more pain points in the course of a test, the participant will likely remember them. This gives her the space to give you more interesting data and, as with tip number three, this final question cultivates positive morale between you and the participant. She will leave your testing location feeling valued and listened to.
As a librarian, I know you were trained to help, empathize, and cultivate knowledge in library users. But usability testing is not the same as a shift at the research help desk! Steel your heart for the sake of collecting wonderfully useful data that will improve your library’s resources and services. Those pain points and unfinished tasks are solid gold. Remember, too, that you aren’t asking a participant to “go negative” on the interface (Wilson, 2010) or manufacture failure; you are interested in recording the most accurate user experience possible and understanding the behavior behind it. Use these tips, if not word for word, then at least as a way to reflect on the environment you curate when conducting usability testing and how to optimize data collection.
Battleson, B., Booth, A., & Weintrop, J. (2001). Usability testing of an academic library web site: A case study. The Journal of Academic Librarianship, 27(3), 188-198.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. MIT Press.
Krug, S. (2000). Don’t make me think: A common sense approach to web usability. New Riders.
Nielsen, J. (1999, December 12). Voodoo usability. Nielsen Norman Group. https://www.nngroup.com/articles/voodoo-usability/
Travis, D. (2014, October 12). 5 provocative views on usability testing. User Focus. http://www.userfocus.co.uk/articles/5-provocative-views.html
Wilson, M. (2010, May 25). Encouraging negative feedback during user testing. UX Booth. http://www.uxbooth.com/articles/encouraging-negative-feedback-during-user-testing/