There's nothing like watching real people do real tasks with your software. You can learn a lot about how people interact with the software, what paths they take to accomplish goals, where they find the software easy to use, and where they get frustrated. Normally we do usability testing with scenario tasks, presented one at a time. But in this usability test, the team asked testers to complete a series of "missions." Each "mission" was a set of two or more goals. For example:
Mission A.1 — Download and rename file in Nautilus
- Download a file from the web, a PDF document for example.
- Open the folder in which the file has been downloaded.
- Rename the downloaded file to
- Toggle the browser window to full screen.
- Open the file.
- Go back to the file manager.
- Close the file.
Mission A.2 — Manipulate folders in Nautilus
- Create a new folder named "cats" in your user directory.
- Create a new folder named "to do" in your user directory.
- Move the "cats" folder to the
- Delete the
These "missions" take the place of scenario tasks. My suggestion to the usability testing team would be to add a brief context that "sets the stage" for each "mission." In my experience, that helps testers get settled into the task. This may have been part of the introduction they used for the overall usability test, but generally I like to see a brief context for each scenario task.
The usability test results also include a heat map, to help identify any problem areas. I've talked about the Heat Map Method before (see also “It’s about the user: Applying usability in open source software.” Jim Hall. Linux Journal, print, December 2013). The heat map shows your usability test results in a neat grid, coded by different colors that represent increasing difficulty:
- Green if the tester didn't have any problems completing the task.
- Yellow if the tester encountered a few problems, but generally it was pretty smooth.
- Orange if the tester experienced some difficulty in completing the task.
- Red if the tester had a really hard time with the task.
- Black if the task was too difficult and the tester gave up.
The colors borrow from the familiar green-yellow-red scheme used in traffic signals, which most people associate with easy-medium-hard. The colors also suggest greater levels of "heat," from green (easy) to red (very hard) and black (too hard).
To build a heat map, arrange your usability test scenario tasks in rows and your testers in columns. This gives you a colorful grid. Scan across the rows for "hot" rows (lots of black, red, and orange) and "cool" rows (mostly green, with some yellow). Focus on the hot rows; these are where testers struggled the most.
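If you record each tester's difficulty per task on the five-level scale above, finding the hot rows is simple enough to script. Here's a minimal Python sketch; the task names, ratings, and the "half the testers rated orange or worse" cutoff are all hypothetical illustrations, not part of the Heat Map Method itself.

```python
# Map the five difficulty levels to the heat map colors described above.
# Ratings here are hypothetical: 0 = no problems ... 4 = gave up.
DIFFICULTY_COLORS = {
    0: "green",   # no problems
    1: "yellow",  # a few problems, generally smooth
    2: "orange",  # some difficulty
    3: "red",     # a really hard time
    4: "black",   # gave up
}

def heat_map(results):
    """Render each task row as its list of cell colors (one cell per tester)."""
    return {task: [DIFFICULTY_COLORS[r] for r in ratings]
            for task, ratings in results.items()}

def hot_rows(results, threshold=2):
    """Tasks where at least half the testers rated orange (2) or worse."""
    return [task for task, ratings in results.items()
            if sum(r >= threshold for r in ratings) * 2 >= len(ratings)]

# Hypothetical results: rows are scenario tasks, columns are testers.
results = {
    "A.1 download and rename": [0, 1, 0, 0],
    "B.1 install a package":   [3, 4, 2, 3],
    "C.3 default video player": [2, 3, 3, 4],
}

print(hot_rows(results))
# → ['B.1 install a package', 'C.3 default video player']
```

In a real test you would eyeball the colored grid rather than compute a threshold, but the script makes the idea concrete: hot rows are the tasks whose cells skew toward orange, red, and black.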
Intrigeri's heat map suggests some issues with B1 (install and remove a package), C2 (temporary files) and C3 (change default video player). There's some difficulty with A3 (create a bookmark in Nautilus) and C4 (add and remove world clocks), but these seem secondary. Certainly these are issues to address, but the results suggest to focus on B1, C2 and C3 first.
For more, including observations and discussion, go read Intrigeri's article.