Thursday, November 24, 2016

FreeDOS 1.2 Release Candidate 2

We started FreeDOS in 1994 to create a free and open source version of DOS that anyone could use. We've been slow to make new releases, but DOS isn't exactly a moving target anymore. New versions of FreeDOS are mostly about updating the software and making FreeDOS more modern. We made our first Alpha release in 1994, and our first Beta in 1998. In 2006, we finally released FreeDOS 1.0, and updated to FreeDOS 1.1 in 2012. And all these years later, it's exciting to see so many people using FreeDOS in 2016.

If you follow my work on the FreeDOS Project, you probably know that we are working towards a new release of FreeDOS. You should see the official FreeDOS 1.2 release on December 25, 2016.

We are almost ready for the new FreeDOS 1.2 release! Please help us to test this new version. Download the FreeDOS 1.2 RC2 ("Release Candidate 2") and try it out. If you already have an operating system on your computer (such as Linux or Windows) we recommend you install FreeDOS 1.2 RC2 in a PC emulator or "virtual machine." Report any issues to the freedos-devel email list.

You can download FreeDOS 1.2 RC2 from our Download page or at ibiblio.

Here's what you'll find:

  • Release notes
  • Changes from FreeDOS 1.1
  • FD12CD.iso (full installer CDROM). If you have problems with this image, try FD12LGCY.iso
  • FD12FLOPPY.zip (boot floppy for CDROM)
  • FD12FULL.zip (full installer USB image)
  • FD12LITE.zip (minimal installer USB image)

Thanks to everyone in the FreeDOS Project for their work towards this new release! There are too many of you to recognize individually, but you have all helped enormously. Thank you!

When I posted about FreeDOS 1.2 RC1, several news outlets picked up the story from this blog. Feel free to link here again, but please also link to the official announcement at www.freedos.org or www.freedos.org/download.

Also, if you'd like an interview about the FreeDOS Project and our upcoming FreeDOS 1.2 release, you can email me at jhall@freedos.org.

Monday, October 31, 2016

FreeDOS 1.2 RC1

You may know that I am involved in many open source software projects. Aside from my usability work with GNOME, I am probably best known as the founder and project coordinator of the FreeDOS Project.

I started the FreeDOS Project back in 1994, when MS-DOS was still the platform of choice for many people. You can read the history of FreeDOS on our website, but the short version is this: I announced the FreeDOS Project in June 1994 as a way to replace the functionality of MS-DOS. The idea of a free DOS quickly caught on, and soon developers across the world came together to contribute to and improve FreeDOS.

It's 2016, and FreeDOS is still going strong. In fact, we are planning a new release this year: FreeDOS 1.2 should be available on December 25, 2016.

You can help us make FreeDOS 1.2 a reality! We just released the FreeDOS 1.2 RC1 ("Release Candidate 1") for folks to try out. Please download and test this latest version of FreeDOS! Report any bugs or problems to the freedos-devel email list.

You can get FreeDOS 1.2 RC1 here: www.freedos.org/download

Saturday, October 29, 2016

Solitaire in a Bash script

I like to write Bash scripts. It stems from my time as a Unix and Linux systems administrator, years ago. I used to automate everything, so I got very good at writing shell scripts. Even today, when managing a personal server, I write Bash scripts to automate various jobs so I don't have to keep logging into the server. For example, I have a job that parses an RSS news feed with Bash.

I admit that my Bash scripts aren't always for automation. Some of my scripts are just for fun, like the Bash script that fills out my March Madness basketball brackets. It can be an interesting diversion to write a Bash script to do something innovative.

And lately, I've started writing another such Bash script. Let me tell you about it.

We all know the classic Klondike Solitaire card game. There have been countless computer implementations of Solitaire on every platform; we even had a simple Solitaire game on our old Apple IIe computer in the 1980s. If you run Linux, you may be familiar with AisleRiot, which supports many solitaire card games, including classic Klondike. More recently, Google added a browser-based version of Klondike Solitaire: just search for the term "solitaire" and you'll get an option to "Click to play" the web version.

I wanted to write my own version of Klondike Solitaire as a Bash script. Sure, I could grab another shell script implementation of Solitaire, called Shellitaire, but I liked the challenge of writing my own.

And I did. Or rather, I mostly did; I have run out of free time to work on it. So I'm sharing it here in case others want to build on it. I have implemented most of the game, except for the card selection and movement logic. You might think that's the toughest part, but I don't think so; I'll explain at the end.

So, how do you write a solitaire card game in a shell script?

I found it was easiest to leverage the strength of shell scripts: files. I started with all 52 cards in a single "deck" file, shuffled it, then "drew" cards from the deck into piles on the tableau.

Let's start with the basics. Creating the 52 cards, thirteen in each of four suits, is straightforward:

# suits: (c)lubs, (d)iamonds, (h)earts, (s)pades
for s in c d h s ; do
  # ranks 1 (Ace) through 13 (King)
  for n in $(seq 1 13) ; do
    echo "$s$n"
  done
done

You can direct the output to a file, and you have a file containing an ordered representation of all 52 cards. On a Linux system, you can use GNU shuf(1) to randomize ("shuffle") an input file, which can also be a pipe. So to define a shuffled deck of 52 cards, you do this:

# the work directory must exist before we write the deck file
mkdir -p /tmp/sol
deck=/tmp/sol/deck

for s in c d h s ; do
  for n in $(seq 1 13) ; do
    echo "$s$n"
  done
done | shuf > $deck

Drawing cards from the deck requires a little more work, but not much more. I wrote a simple function popn() that takes ("pops") the first n lines of a file, prints those lines, and shortens the file at the same time. Usually we'll pop one line at a time, but we'll need the flexibility of popping several later.

On top of that function, I wrote another simple function drawcard() that draws a single card from the deck and assigns it to the end of a "drawn cards" pile. This function also needs a little extra logic to deal with an empty deck; if the deck is empty, we return the drawn cards to the deck and start over. This is why it's important to pop from the start of the deck and append drawn cards to the end of the "drawn cards" pile; you can easily reset the deck using the drawn cards:

function popn() {
  # pop the first n ($2) lines from a file ($1) and print them
  head -n "$2" "$1"
  cp "$1" /tmp/tempfile
  awk "NR>$2 {print}" /tmp/tempfile > "$1"
}

function drawcard() {
  if [ $(wc -l < "$deck") -eq 0 ] ; then
    # the deck is empty: recycle the drawn cards back into the deck
    cp "$draw" "$deck"
    > "$draw"
  fi

  popn "$deck" 1 >> "$draw"
}

In Klondike Solitaire, the play area is a tableau of seven piles of cards, where the first n-1 cards are piled "face down" and the last card is placed "face up." So for the first column, there are no "face down" cards, and only one "face up" card. On the seventh column, you start with six "face down" cards, and only one "face up" card. The player must move cards on these seven piles, eventually transferring the cards to four separate "foundation" piles, where each pile is dedicated to a separate suit: clubs, diamonds, hearts, and spades.

Creating the play area requires the use of Bash arrays. I rarely use arrays in Bash, but they are a very useful feature. Bash supports both indexed arrays (0, 1, 2, …) and associative arrays (where the index is a string value). You define a variable as an indexed array using the declare -a directive, and as an associative array with the declare -A directive.

With this, it's simple enough to define the tableau card piles as separate files, then use the popn() function to deal cards from the deck into these files.

declare -a tabdraw
for n in $(seq 1 7) ; do tabdraw[$n]="/tmp/sol/tab.draw.$n" ; done

declare -a tab
for n in $(seq 1 7) ; do tab[$n]="/tmp/sol/tab.$n" ; done

declare -A found
for s in c d h s ; do found[$s]="/tmp/sol/found.$s" ; done

# deal cards from deck into tableau {curly brackets needed on array refs}

for n in $(seq 1 7) ; do popn $deck 1 >> ${tab[$n]} ; done

for n in $(seq 1 7) ; do popn $deck $((n - 1)) >> ${tabdraw[$n]} ; done

And that's the guts of a Solitaire game in Bash! When assembled, my Bash script looked like this:

#!/bin/bash

# create work directory

tmpdir="/tmp/sol.tmp.$RANDOM"
[ ! -d $tmpdir ] && mkdir $tmpdir

# create variables

deck="$tmpdir/deck"
draw="$tmpdir/draw"
tmpf="$tmpdir/tmpf"

declare -a tabdraw
for n in $(seq 1 7) ; do tabdraw[$n]="$tmpdir/tab.draw.$n" ; done

declare -a tab
for n in $(seq 1 7) ; do tab[$n]="$tmpdir/tab.$n" ; done

declare -A found
for s in c d h s ; do found[$s]="$tmpdir/found.$s" ; done

# create functions

function popn() {
  # pop the first n ($2) lines from a file ($1) and print them
  head -n "$2" "$1"
  cp "$1" $tmpf
  awk "NR>$2 {print}" $tmpf > "$1"
}

function drawcard() {
  if [ $(wc -l < "$deck") -eq 0 ] ; then
    # the deck is empty: recycle the drawn cards back into the deck
    cp "$draw" "$deck"
    > "$draw"
  fi

  popn "$deck" 1 >> "$draw"
}

function showcards() {
  # show the cards again - repaint the screen

  clear

  # count the face-down cards in each tableau pile
  for n in $(seq 1 7) ; do wc -l < ${tabdraw[$n]} > $tmpdir/wc.$n ; done

  paste $tmpdir/wc.[1-7]
  paste ${tab[*]}

  echo '___  ___  ___  ___'

  paste ${found[*]}

  echo '___'

  # cards left in the deck, and the current face-up drawn card
  wc -l < $deck
  tail -1 $draw
}

# build and shuffle initial deck

for s in c d h s ; do
  # seed each foundation pile with a "zero" card of that suit
  echo "${s}0" > ${found[$s]}

  for n in $(seq 1 13) ; do
    echo "$s$n"
  done
done | shuf > $deck

# deal cards from deck into tableau {curly brackets needed on array refs}

for n in $(seq 1 7) ; do popn $deck 1 >> ${tab[$n]} ; done

for n in $(seq 1 7) ; do popn $deck $((n - 1)) >> ${tabdraw[$n]} ; done

# loop until quit ('q')

input='n'   # start with a draw on the first pass through the loop
while [ "$input" != "q" ] ; do
  case "$input" in
  'n')
    # draw next card
    drawcard
    ;;
  *)
    # parse into card1,card2
    # is this a valid request?
    # is this a valid move?
    ;;
  esac

  showcards

  echo -n '?> '
  read input
done

# cleanup

rm -rf $tmpdir

I have left unfinished the logic to move cards around on the tableau. This might seem like the most difficult part, but it really isn't. Since every pile on the tableau is a file, it's easy enough to search for a requested card in each of the "face up" piles, then extract all lines (cards) from that "face up" pile, starting at the requested card, and append them onto another "face up" pile. You'd need a little extra logic in there to move Kings to an empty space on the tableau, but that's not very difficult.
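
Here's a minimal sketch of such a move function, using the same file layout as the script above. The name movecards and its argument order are my own invention, not part of the original script:

function movecards() {
  # hypothetical sketch: move card $1, and every card stacked on it,
  # from face-up pile file $2 onto face-up pile file $3
  local line
  line=$(grep -n "^$1\$" "$2" | cut -d: -f1)
  [ -z "$line" ] && return 1            # card is not in this pile

  tail -n +"$line" "$2" >> "$3"         # append the card and everything after it
  head -n $((line - 1)) "$2" > $tmpf    # keep only the cards above it
  cp $tmpf "$2"
}

A call like movecards h10 "${tab[3]}" "${tab[5]}" would move the 10 of Hearts, and any cards stacked on it, from the third pile to the fifth.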

I envisioned splitting the logic into three parts:

1. Verifying that this is a valid request
For example, the user is not allowed to move a black card onto a black card. Also, the user can only move cards in descending order on the tableau, and in ascending order on the foundation. That's easy logic (see the sketch after this list).
2. Confirming that the cards are there to move
This should be a fairly straightforward process using fgrep(1) to locate the requested cards on the tableau or in the foundation.
3. Moving the cards
I believe this should be easy, if you remember that cards are only entries in files. You can easily write a function that outputs all lines starting with card A, and appends them to the end of another file. At the same time, the function can truncate the first file starting at card A.
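
For the rule check in step 1, a sketch might look like this. The name validmove is hypothetical; it assumes the card format used throughout the script, a suit letter followed by a rank, like h10:

function validmove() {
  # can card $1 be played onto card $2 on the tableau?
  local s1=${1:0:1} n1=${1:1}
  local s2=${2:0:1} n2=${2:1}

  # clubs (c) and spades (s) are black; diamonds (d) and hearts (h) are red.
  # colors must alternate on the tableau.
  case "$s1$s2" in
    cc|cs|sc|ss|dd|dh|hd|hh) return 1 ;;
  esac

  # tableau piles build in descending order
  [ "$n1" -eq $((n2 - 1)) ]
}

For the foundation, the same idea applies with the color check removed: the suits must match, and the rank must ascend instead of descend.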

When you start the script, you should see output similar to this:

0       1       2       3       4       5       6
s12     c11     s2      h7      s11     h11     d3
___     ___     ___     ___
c0      d0      h0      s0
___
23
c13
?> 

The output is intentionally functional, and uses paste(1) to merge the output from several tableau files. What you're seeing:
  • The first line shows the number of cards remaining in each of the tableau "face down" piles.
  • The second line shows the "face up" cards for each pile on the tableau. This sample output indicates: 12 (Queen) of Spades, 11 (Jack) of Clubs, 2 of Spades, 7 of Hearts, 11 (Jack) of Spades, 11 (Jack) of Hearts, and 3 of Diamonds.
  • The third line shows the empty foundation piles. I initialized these with "zero" cards so the logic to transfer cards to the foundation could remain simple (see the sketch after this list); you only move cards of the same suit in ascending order.
  • The fourth line shows the number of cards remaining to be drawn on the deck.
  • The fifth line shows the "face up" card from the deck. In this case, it is the 13 (King) of Clubs.
  • The last line shows a prompt (?>) for the user to enter a command.
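
This is where the "zero" cards pay off. A minimal sketch of the foundation check, assuming the found array from the script (the name foundmove is hypothetical):

function foundmove() {
  # can card $1 be moved onto its foundation pile?
  local s=${1:0:1} n=${1:1}
  local top
  top=$(tail -1 "${found[$s]}")    # e.g. "c0" when the pile is empty

  # a card always goes to its own suit's foundation pile, so only
  # the rank needs to ascend by exactly one
  [ "$n" -eq $(( ${top:1} + 1 )) ]
}

Because each pile is seeded with a zero card, an Ace (rank 1) passes the same test as any other card, with no special case for an empty pile.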

This Bash script was a lot of fun to write, but I don't have time to finish it. The script took an afternoon to write, and an hour to tweak. Even so, I think this is a solid start to play Solitaire in a Bash script.

Feel free to finish this script. Please treat it as Creative Commons Attribution: if you use it somewhere else, such as writing an article and publishing it, you should credit me as the original author.

Sunday, October 2, 2016

Looking ahead: Usability of open source software

This Spring semester, I look forward to teaching CSCI 4609 Processes, Programming, and Languages: Usability of Open Source Software. This is the second time I will teach the class at the University of Minnesota Morris, although it's more like the fifth time because I structured the course outline to be very similar to the Outreachy internships I've mentored now for three cycles.

Interested in a preview of the course? Here's a quick breakdown of the syllabus:

CSCI 4609: Usability of Open Source Software

Introduction to usability studies and how users interact with systems using open source software as an example. Students learn usability methods, then explore and contribute to open source software by performing usability tests, presenting their analysis of these tests, and making suggestions or changes that may improve the usability.

Course objectives:
  • Understand what usability is and apply basic principles for how to test usability
  • Design and develop personas, scenarios, and other artifacts for usability testing
  • Create and execute a usability test against an actual product
  • Identify and reflect on the value and presentation of usability test results

Each student will work with the professor and other students to choose an individual project to complete during the second half of the term.

Requirements:
  • Class engagement (discussions, presentations, feedback)
  • Projects (small group project, and larger individual project)
  • Final paper to document your individual project

Each discussion will be worth 5 points, graded on a scale: no points if no discussion is posted, and 1 to 5 points based on the quality of your discussion. For example, a well-thought-out discussion will earn 5 points, while a sketched-out discussion post will earn only 1 point.

Course outline:
  1. Introduction
  2. What is usability?
  3. How do we test usability?
  4. Personas
  5. Scenarios
  6. Scenario tasks
  7. User interfaces
  8. Mini project (two weeks)
  9. Final project (four weeks)
  10. Final paper

Based on what I learned from teaching the class last time, I'll be sure to arrange the weeks to leave more time for the final project, and to spread discussion throughout the week. For example, in the first half of the course, there's a lot of research and practice: learn about a topic and post a summary, then apply what you have learned towards a specific assignment. This time, I'll have the first discussion assignment due around Thursday each week, and the practice assignment due on Sunday, assuming each week starts on a Monday and ends on Sunday night.

I will also change the points. Last time, I had a 60/40 split for discussion points and final paper. I totaled your discussion, and that was 60% of your grade; your final paper was the other 40%. This year, I plan to make the points cumulative. If you assume 20 discussions at 5 points each, that's 100 points for discussion. The paper is an additional 50 points. That makes it clear you cannot skip the discussion and hope for a strong paper; neither can you punt the paper and rely on your discussion points. You need to participate every week and you need to make an effort on the final paper to get a good grade in the class.
image: University of Minnesota Morris

Sunday, September 25, 2016

Great first year at LAS GNOME!

This was the first year of the Libre Application Summit, hosted by GNOME (aka "LAS GNOME"). Congratulations to the LAS GNOME team for a successful launch of this new conference! I hope to see more LAS GNOME conferences in the future.

In case you missed LAS GNOME, the conference was in Portland, Oregon. I thoroughly enjoyed this very walkable city; Portland is a great venue for a conference. When I booked my hotel, I found lots of options within easy walking distance of the LAS GNOME location. I walked every day, but you could also take any of the many light rail, bus, or trolley options running throughout the city.

I encourage you to review this year's conference schedule to learn about the different presentations; I'll share only a few highlights of my own here. I also live-tweeted through many of the talks.

Alexander Larsson gave a great presentation ("Taking back the apps from the distributions") about Flatpak. I'd followed Flatpak before, but until Alexander's presentation, I never really grokked what Flatpak can do for us. With Flatpak, anyone can provide an application or app anywhere, on any distribution.

Today, you may be comfortable installing whatever application you need through your distribution. But what if your distribution doesn't provide the application you are looking for? And what if you want to run an application that a distribution isn't likely to want to provide (such as a commercial third-party office application or game)? In these cases, you either hope the person providing the application can provide a package for your distribution, or you go without.

With Flatpak, you can bundle an application to run on any Linux distribution, and it should just run. I see this as a huge opportunity for indie games on Linux! Imagine the next version of PuzzleQuest also being available for Linux, or a version of Worms for Linux, or a version of Inside for Linux. With Flatpak, these indie developers could bundle up their apps for Linux—and sell them.

Also, a great thing about going to conferences is the opportunity to meet people you've only chatted with online. Ciarrai was one of my interns during the May-August cycle of Outreachy, and we finally got to meet at LAS GNOME! We had emailed back and forth in the weeks ahead of time, since we knew we'd both be there. But it was great to finally meet them in person!

LAS GNOME had scheduled talks in the morning, and "unconference" topics in the afternoon. With an "unconference," people suggest topics for an impromptu presentation or workshop, and attendees vote on them.

In the afternoon of day 1, I hosted an informal workshop on usability testing, at the same time my friend Asheesh gave an unconference presentation on web app packaging in Sandstorm. I was surprised to see so many developers at my how-to presentation about usability testing. Thanks to all who attended! Scott snapped this selfie of the two of us:

On day 2, Asheesh Laroia gave a presentation ("How to make open source web apps viable") about Sandstorm. Asheesh and I have known each other for a few years now, and I've followed his work with Sandstorm. But it was great to see Asheesh walk us through a demo of Sandstorm to really show off what it can do. In short, Sandstorm provides an open source software platform that helps people collaborate over the web.

And Sandstorm is designed with several layers of security, so it's very locked down. Even if you can "break out" of one web application, you can't get elsewhere on the system. As Asheesh put it during his talk, it's like Google Docs, but more secure:

On day 3, Stephano Cetola shared a wonderful story about how he got involved in open source software and made it his career ("Endless Summer of Code: Getting Involved in OSS"). I loved how Stephano talked about his experimental nature, and how he learned about technology by letting the "magic smoke" out of things.

Because of his eagerness and willingness to learn, Stephano found new opportunities to grow, eventually landing a position at Intel working on the Yocto project. A great journey that was a joy to hear about!

Finally, I gave my presentation on usability testing for GNOME ("GNOME Usability Testing"). I opened by talking about different ways (direct and indirect) that you can test usability of software. From there, I gave an overview of the Outreachy internship, and what Renata, Ciarrai and Diana worked on as part of their internships.

I was thrilled for Ciarrai to share their part of usability testing from this cycle of Outreachy. Ciarrai's project was a paper prototype test of a new version of GNOME Settings. Renata conducted a traditional usability test of other ongoing work in GNOME, and Diana worked on a User eXperience (UX) test of GNOME.

I think my presentation went well, and we had a lot of great questions. Thanks to everyone!

Again, congratulations to the LAS GNOME team for a successful first year!
image: LAS GNOME
photo: Scott

Sunday, September 18, 2016

Headed to LAS GNOME!

By the time this gets posted on the blog, I will be headed to LAS GNOME. I'm really looking forward to being there!

I'm on the schedule to talk about usability testing. Specifically, I'll discuss how you can do usability testing for your own open source software projects. Maybe you think usability testing is hard—it's not! Anyone can do usability testing! It only takes a little prep work and about five testers to get enough useful feedback that you can improve your interface.

As part of my presentation, I'll use our usability tests from this summer's GNOME usability tests as examples. Diana, Renata and Ciarrai did a great job on their usability testing.

These usability testing projects also demonstrate three different methods you can use to examine usability in software:

Ciarrai did a paper prototype test of the new GNOME Settings. A paper prototype is a useful test if you don't have the user interface nailed down yet. The paper prototype test allows you to examine how real people would respond to the interface while it's still mocked up on paper. If you have a demo version of the software, you can do a variation of this called an "animated prototype" that looks a lot like a traditional usability test.

Renata performed a traditional usability test of other areas of ongoing development in GNOME. This is the kind of test most people think of when they hear about usability testing. In a traditional usability test, you ask testers to perform various tasks ("scenario tasks") while you observe what they do and take notes of what they say or how they attempt to complete the tasks. This is a very powerful test that can provide great insights to how users approach your software.

Diana led a user experience (UX) test of users who used GNOME for the first time. User experience is technically different from usability. Where usability is about how easily real people can accomplish real tasks using the software, user experience focuses more on the emotional reaction these people have when using the software. Usability and user experience are often confused with each other, but they are separate concepts.

I hope to see you at the conference!

Interested in my slides from LAS GNOME? You can find them on my personal website at www.freedos.org/jhall/uploads/. I'll keep them there for at least the next few weeks.
image: LAS GNOME

Saturday, September 10, 2016

Wrap-up from this cycle of Outreachy

Now that all interns have completed their work, I wanted to share a few final thoughts about this cycle of Outreachy. Hopefully, this post will also help us in future usability testing.

This was my third time mentoring for Outreachy, but my first time with more than one intern at a time. As in previous cycles, I worked with GNOME to do usability testing. Allan Day and Jakub Steiner from the GNOME Design Team also pitched in with comments and advice to the interns when they were working on their tests and analysis.

As before, I structured the usability testing internship along similar lines to an online class. In fact, I borrowed from the usability testing internship schedule when I taught CSCI 4609 Processes, Programming, and Languages: Usability of Open Source Software, an online elective computer science course at the University of Minnesota Morris. We first learned about usability testing, then we practiced usability testing. In this way, the internship was broken up into two phases: a research phase and a project phase.

Projects

With three interns, we decided from the beginning that we should focus on three different areas for our usability tests. In previous cycles, with only one intern at a time, we conducted traditional usability tests that spanned several common design patterns used in GNOME. In this cycle, we wanted to continue with a traditional usability test, but we wanted to explore other areas of GNOME usability.

Based on the previous experience and interests of Renata, Ciarrai and Diana, we assigned them to do these three usability tests:

  1. Renata: A traditional usability test of ongoing development in GNOME.
  2. Ciarrai: A paper prototype test of the new GNOME Settings.
  3. Diana: A first-time GNOME User eXperience (UX) test.

A paper prototype test uses a slightly different test design from a traditional usability test, but the analysis is the same. In a paper prototype test, as in a traditional usability test, you ask a tester to respond to various scenario tasks. But in a paper prototype test, the tester is acting against a paper mock-up of the design. In this case, Ciarrai used screenshot mock-ups of the updated GNOME Settings.

Renata led a traditional usability test. In her test, we asked Renata to further examine areas that had improved based on the last cycle of GNOME usability testing. This allowed us to measure any usability gains in the design.

Diana performed a different kind of test. Technically, a user experience test is not a usability test; usability and user experience are different concepts. Usability is how easily users can use the software to accomplish real tasks in a reasonable amount of time. But user experience is more about the emotional engagement of the user when using the software. It's possible for a program to have good usability and poor UX, or poor usability but good UX, but in most cases you will find a positive correlation: a program with good usability will likely have positive UX, and a program with poor usability will likely have negative UX.

Process

When planning any usability test, it's important to start with an understanding of who uses the software. Typically, a project should have these documented as personas, fake personalities that reflect real-world users. With personas, a project can discuss design changes in a more concrete way. Rather than a designer or developer planning a change "because it's cool" or "because I like it," they can plan changes that will help the "Sally" persona do her work, or will make the "Stephen" persona more productive, or will allow the "Chris" persona to more easily work with the software.

Once you understand the users, you need to define how the users will use the software. These are the use scenarios. Users might have different needs that carry different expectations. For a word processor program, one user might use the program to write a report at work, while another user might use the program to write a research paper (with footnotes!) for class, while another user might use the program to jot a quick letter to a friend.

From this understanding of use scenarios, you can then create a usability test. In most usability tests, the test consists of scenario tasks, which you give one at a time to each tester. The scenario task should set up a brief context for the task, then ask the tester to do something specific. For example: "You finished the first draft of your report. Save the document, then print a copy." The task needs to be general enough that the tester has latitude to complete it the way they would normally do it, yet specific enough that both the test moderator and the test volunteer know when the task is complete.

Take notes while your testers use the software. This is usually made easier if you ask each tester to speak aloud what they are doing or what they are looking for (if they are looking for a Print button, they should say "I need to print, and I'm looking for a Print button"). Also note any comments they make while working on each task ("This seems difficult" or "I can't seem to find the Print button" or "That was easy").

And that's the process Diana, Renata and Ciarrai followed when they did their tests. Diana's UX test was a little different, because UX is not the same as usability and so required a slightly different test design, but she still asked testers to explore GNOME via three broad scenario tasks.

After the test, you collate your notes and analyze your data. For a traditional usability test, I find it is easiest to start with a heat map. (I discuss heat maps elsewhere on my blog.) From the heat map, it's easy to look for "cool" and "hot" rows, which reflect scenario tasks that were easier or harder for testers. This is often my first step in identifying themes.

Both Renata and Ciarrai used heat maps in their analysis. Ciarrai's heat map analysis method was a variant on the typical method, due to their test being a paper prototype test. The testers didn't interact with a live version of the software, but a paper mock-up using screenshots of the new design. So Ciarrai's analysis was made more difficult, but they did a great job.

For Diana's test, I asked her to try a different analysis method. After each tester explored GNOME, she asked them to describe their initial reaction (their first few minutes) to GNOME by pointing to an emoji. Diana had prepared a list of ten emojis that ranged from "ewww" to "happy." She also asked testers to describe their experience at the end of the test using the same emoji scale. Diana then followed up with a brief interview designed to explore how testers perceived GNOME.

Notes on the UX test for next time:

I have mentored several traditional usability tests before, as part of Outreachy and as an instructor for CSCI 4609, so we had a lot of prior experience going into Renata's traditional usability test and Ciarrai's paper prototype test. That's probably why both went very well. But this was our first time doing a user experience test, and while I think we got some useful results from it, I think we can improve the user experience test the next time we do one.

First, I think we need more testers in the user experience test. In most usability tests, you don't need very many testers to uncover most problems and get actionable results. This assumes that each tester will uncover many of the same problems found by the next tester. Some usability researchers claim you only need to test with five testers. I agree with that, but you need to remember the assumption: each tester uncovers many of the same problems the next tester will find. Nielsen finds a single tester will uncover about 31% of the problems in a single usability test.

But in UX testing, you aren't uncovering usability problems. Instead, you are examining how real people respond to the software. You are looking for something akin to emotional attachment, not ease of use. So five testers is not enough. Diana initially included five testers for her UX test, and later expanded to seven. Even that isn't enough. I think the next time we do a GNOME user experience test, we will want around twenty testers.

We should also closely examine the number of emoji that testers respond to. I worked with Diana to select ten emoji that span a range of emotions. I based this test design on a common contemporary UX test design, including that used in an article in the Washington Post where the newspaper asked readers to respond to political events (specifically, the Republican National Convention) by selecting from a list of emoji. Over a thousand readers responded in that case. With such great numbers, the results quickly narrowed down to show the top one or two emoji responses for each political moment. For example, when responding to "I declare Trump and Pence to be nominees for President and Vice President," the responses tailed off quickly: 553, 334, 90, 79, 70, … That makes it easy to identify the one or two dominant reactions.

The next time we do a user experience test, increasing the testers will help us to consistently identify emotional response via emoji. We might also reduce the number of emoji, so fewer testers have fewer options to choose from. At the same time, we need to be careful not to restrict the responses too much or we risk artificial results.

To offset this, and in addition to emoji, we might also ask the testers to pick three or five adjectives from a list that describe their reaction to the test. We can use synonyms of various emotions, so testers have a wide range of words with which to describe their reactions, yet we can combine results more easily. For example, synonyms of "confused" include baffled, bewildered, befuddled and dazed. If a tester felt confused at the beginning of the UX test, they might pick three adjectives "baffled, bewildered, confused" that would give a score of +3 to "confused." Similarly, synonyms of "happy" include cheerful, contented, delighted and ecstatic.
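
To make that scoring concrete, here's a minimal sketch in Bash. The synonym table and the tester's picks are made up for illustration:

#!/bin/bash
# map synonyms to a canonical emotion (a sample table, not a complete list)
declare -A emotion
for w in baffled bewildered befuddled dazed confused ; do emotion[$w]='confused' ; done
for w in cheerful contented delighted ecstatic happy ; do emotion[$w]='happy' ; done

# tally one tester's three picks (hypothetical responses)
declare -A score
for pick in baffled bewildered confused ; do
  e=${emotion[$pick]}
  score[$e]=$(( ${score[$e]:-0} + 1 ))
done

for e in "${!score[@]}" ; do echo "$e: +${score[$e]}" ; done
# prints: confused: +3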

A few thoughts for future mentors:

Mentoring in Outreachy is a very rewarding experience, but it's also a lot of work.

When mentoring three interns in this cycle of Outreachy, I assumed the effort would be about the same as my online class with ten students. The research phase was certainly straightforward; we were all researching the same topics. But once we entered the project phase, and Diana, Renata and Ciarrai each started work on their own projects, I was constantly switching gears as I reviewed each intern's work and provided direction and guidance. I think I did a good job here, but it took a ton of effort on my end.

As weeks progressed in the project phase, I began to rely more heavily on Jakub and Allan to help me out. I seemed to constantly send them reminders to review that week's work and respond to the interns. I hope I haven't made a nuisance of myself or pushed too hard.

Looking back, mentoring three interns on three different usability tests was more work than my online class with ten students because the three interns were working on completely different projects, while students in my CSCI 4609 class worked on similar projects. In my online class, students had different starting points in their Personas and Use Scenarios, but they worked together on the same program for a "dry run" usability test. For the final project, students could pick any open source program that appealed to them, but in the "dry run" I picked the program for them to examine (based on common interests we explored in the first few weeks of class). This meant most of my energies as professor were pointed in the same direction. But in mentoring three projects for Outreachy, I was always getting pulled in a different direction.

I'm not sure how my experience compares to a project that focuses on coding. At a guess, I think it's not very comparable. Your mileage may vary.

My guess is the effort to mentor several interns in a coding project is probably the same as being the project maintainer for any active open source software project. I've been involved in open source software since 1993, and an active developer since 1994, so I know the effort isn't too great if you are reviewing code patches for a codebase that you are very familiar with. As maintainer of the program, you know where the puzzle pieces go, and you should have a good idea what the side effects may be for a new patch.

If you volunteer as a mentor to a future cycle in Outreachy, make sure to plan your time very carefully. Be aware of time commitments and how your work in Outreachy will intersect with other things you want to do. I over-committed myself, for example. I can't do that again.

A few thoughts for future interns:

One of the steps in your application to Outreachy is the first contribution. For the usability testing internship, I asked applicants to pick a few scenario tasks from previous usability tests, do a usability test with a few volunteers, then write a short article about their first contribution. I intentionally asked applicants to test with only a few users, usually fewer than five, because I wanted them to see what usability testing was like before they committed to the internship. But three testers isn't enough to get actionable results, so I think I will increase this to five testers next time.

I find that this first contribution really does provide an indicator of later performance. Applicants who did well in their initial contribution did well in their internship; we were able to predict performance (to some degree) based on the initial contribution. Diana, Ciarrai and Renata were accepted into the program because their initial contributions represented good work. Some other applicants may have done a good job but were not selected due to space, while others didn't seem to take the first contribution very seriously (they thought it was "busywork") or waited until the last moment to do their initial contribution, leaving no time to evaluate their work before we had to make a decision.

If you plan to apply for a future cycle in Outreachy, don't forget the initial contribution. Take it seriously. Reach out to the project's mentors and discuss the initial contribution and how to approach it. I know I took into consideration each applicant's engagement and relative success in the initial contribution, and that mattered when selecting interns for the program.
image: Outreachy