Tuesday, August 16, 2016

Analyzing usability test results

If you recall the schedule for this cycle of Outreachy, we are nearing the end of our three usability tests. Diana, Ciarrai and Renata have completed, or are nearing completion of, their usability tests and are starting to work on their analyses. As they wrap up, I wanted to share a brief status update and a recap of my suggestions for their analysis:

Diana
First-time user experience test
"I have conducted user experience tests with 4 people so far, and have two more tests appointed for Tuesday. After the tests I'll be able to write about my first impressions and start working on the analysis!"

It will be interesting to focus first on the "activity" of the users. What tasks did they do during their "warm up" period? Did everyone do the scenario tasks you set them, or did they do something else? Did they have difficulty?

Follow up with an analysis of the testers' "engagement." Perhaps generate a quick histogram of the emoji responses? You don't have to create a chart for this; you can use the emoji-with-counts method that The Washington Post used in their article.
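If you would rather tally the emoji responses with a quick script than by hand, here is a minimal Python sketch of the idea. The emoji and responses below are made-up placeholders, not Diana's actual data:

    from collections import Counter

    # Hypothetical emoji responses gathered after each scenario task;
    # the real data would come from Diana's test sessions.
    responses = ["😀", "😀", "😐", "😀", "🙁", "😐", "😀"]

    counts = Counter(responses)

    # Print each emoji with its count, most frequent first, in the same
    # spirit as the emoji-with-counts style from The Washington Post article.
    for emoji, count in counts.most_common():
        print(f"{emoji} × {count}")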

Ciarrai
Paper prototype test of new Settings application
"I have one test left to do, I've had some cancellations/reschedules, and then I will start on my analysis."

Even though this is a paper prototype test, I think the analysis method will be similar to a traditional usability test. See below.

Renata
Traditional usability test of other areas of GNOME development
"I have completed the tests, it went great overall! I have already started with the analysis. I am planning to write a brief item on my blog with my first impressions on Tuesday and then post the analysis on Saturday or Sunday."

For a traditional usability test like this, I find it's best to do the analysis in two parts: a heat map, and a discussion of themes.

To create your heat map, use the method I described in my earlier article on how to create a heat map.
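The method itself is simply a colored spreadsheet grid, but if you prefer to script the same grid, here is a minimal Python sketch using matplotlib. The task names, tester labels, and difficulty codes (0 = easy, through 4 = unable to complete) are placeholders for illustration, not data from the actual tests:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import ListedColormap

    # Hypothetical data: rows are scenario tasks, columns are testers.
    # 0 = easy, 1 = some difficulty, 2 = more difficulty,
    # 3 = extreme difficulty, 4 = unable to complete the task.
    tasks = ["Change font", "Rename file", "Set bookmark", "Change theme"]
    testers = ["T1", "T2", "T3", "T4", "T5"]
    ratings = np.array([
        [0, 0, 1, 0, 0],
        [1, 2, 1, 0, 1],
        [3, 4, 2, 3, 4],
        [0, 1, 0, 0, 0],
    ])

    # Green-to-black scale: "cool" rows read green/yellow,
    # "hot" rows read orange/red/black.
    cmap = ListedColormap(["green", "yellow", "orange", "red", "black"])

    fig, ax = plt.subplots()
    ax.imshow(ratings, cmap=cmap, vmin=0, vmax=4, aspect="auto")
    ax.set_xticks(range(len(testers)))
    ax.set_xticklabels(testers)
    ax.set_yticks(range(len(tasks)))
    ax.set_yticklabels(tasks)
    ax.set_title("Usability test heat map")
    plt.tight_layout()
    plt.savefig("heatmap.png")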

Starting with the heat map, themes may become obvious. Look for "hot" rows (lots of black, red or orange). "Cool" rows with lots of green or yellow are likely to be areas with good usability. Pull quotes from users during the tests to give examples of problems in "hot" rows. What was common between these "hot" rows? Were users confused by the same or similar design patterns in these difficult areas? Also look at what went well: What was common between the "cool" rows? Why did users find these easy to do?
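As a rough complement to eyeballing the colors, here is a small self-contained sketch, using the same hypothetical 0-to-4 difficulty codes as above, that ranks rows by average difficulty so the likeliest "hot" rows surface first:

    # Hypothetical difficulty codes per scenario task (0 = easy ... 4 = unable
    # to complete), one value per tester; same placeholder scale as above.
    ratings = {
        "Change font":  [0, 0, 1, 0, 0],
        "Rename file":  [1, 2, 1, 0, 1],
        "Set bookmark": [3, 4, 2, 3, 4],
        "Change theme": [0, 1, 0, 0, 0],
    }

    # Rank tasks by average difficulty so the likeliest "hot" rows come first.
    # The 2.0 cutoff is an arbitrary illustration, not part of the method.
    for task, scores in sorted(ratings.items(),
                               key=lambda item: sum(item[1]) / len(item[1]),
                               reverse=True):
        avg = sum(scores) / len(scores)
        label = "hot" if avg >= 2.0 else "cool"
        print(f"{task}: average difficulty {avg:.1f} ({label})")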
image: Outreachy
