Usability testing: a case study


This project is an example of my design process when I conduct usability testing, data analysis and reporting. Since I can't share proprietary material from the companies I've worked with, I've decided to write about my workflow in more general terms, so this project follows a blog post format. An important note: I don't claim this is the best way to conduct usability testing, nor the only one; there are many different methodologies out there. This is simply what I've found to be effective, economical and useful over time under my working conditions, and I think it's worth sharing.


I've had the privilege of working with companies that let me test early and often, at almost every step of the development process. The methods I conduct are usually embedded within a larger design framework that I like to call the "Product Development Roadmap". Any given method is usually composed of these phases:


1. Establishing goals and briefing the team:

It's always necessary to establish the goals of the evaluation, or to have reference metrics from previous tests against which to compare the product's usability. Additionally, as a UX Designer I'm often asked to explain to different stakeholders how a method works and how it's conducted. For all these purposes I usually create a short presentation, followed by a discussion with the team:

A generic introduction for when discussing a usability test.

The presentation usually follows this structure:

  1. Introduction
  2. Description of the test or tests to be conducted
  3. Goals of the test
  4. Detailed description of the participants (may need info from other departments)
  5. Tasks (if performing a typical usability test)
  6. Metrics to be recorded and evaluated (see the sketch after this list)
  7. Next steps (usually referring to analysis and results)
Another slide in the presentation. This one about metrics to be recorded.
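To make the metrics item above concrete, here is a minimal sketch (in Python, with hypothetical data and field names of my own choosing) of the quantitative measures I typically record per task: completion rate, time on task and error count.

```python
from statistics import mean

# Hypothetical results for one task across five participants.
# Each record: whether the participant completed the task, how long
# it took (in seconds), and how many errors (wrong paths/clicks) occurred.
sessions = [
    {"completed": True,  "time_s": 74,  "errors": 1},
    {"completed": True,  "time_s": 102, "errors": 0},
    {"completed": False, "time_s": 180, "errors": 4},
    {"completed": True,  "time_s": 66,  "errors": 0},
    {"completed": True,  "time_s": 91,  "errors": 2},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
# Time on task is typically averaged over successful attempts only.
avg_time = mean(s["time_s"] for s in sessions if s["completed"])
avg_errors = mean(s["errors"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Avg. time on task (successes only): {avg_time:.0f}s")
print(f"Avg. errors per session: {avg_errors:.1f}")
```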

There's a range of different tests at our disposal. Some of the most common ones can be found at the Nielsen Norman Group, MethodKit, the MediaLAB Amsterdam and Stanford's d.school (the latter offers more design thinking strategies than actual usability test alternatives). Tests can range from deceptively simple paper prototypes to fully clickable mockups. Other related research methods include creating empathy maps and co-discovery activities.


2. Conducting the test:

There are many ways to conduct a test. I've participated in on-site, remote, web, mobile and eye-tracking testing, and there are many technologies that let you monitor and record the sessions. I personally prefer testing on-site because I can capture gestures, nuances and details that escape a video recording, but remote testing is more flexible, potentially less expensive and can reach a more representative user sample.

Some of the digital tools I have used to conduct tests include InVision, Inspectlet, LookBack, and OptimalWorkshop, among others. 
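For moderated sessions, a throwaway script is often all I need to keep timestamped notes per task that can later be synced against the screen recording. A minimal sketch (hypothetical, not tied to any of the tools above):

```python
import time

class TaskLog:
    """Minimal timestamped note-taker for a moderated session."""

    def __init__(self, participant_id: str):
        self.participant_id = participant_id
        self.entries = []

    def note(self, task: str, event: str) -> None:
        # Wall-clock timestamps make it easy to line notes up
        # with the session's video recording afterwards.
        self.entries.append((time.time(), task, event))

    def dump(self) -> None:
        start = self.entries[0][0] if self.entries else 0.0
        for ts, task, event in self.entries:
            print(f"[{ts - start:7.1f}s] {task}: {event}")

log = TaskLog("P03")
log.note("task-1", "start")
log.note("task-1", "hesitated on the navigation menu")
log.note("task-1", "completed")
log.dump()
```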

A series of layouts in InVision used to test information architecture (content blurred intentionally).

3. Analyzing data, reporting and making informed decisions:

Ultimately, the goal of testing and UX research is to inform product design decisions, to understand our users and to create more accurate solutions. After a usability test, I usually draw up a report with a summary of the results, highlights and recommendations.
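Because usability samples tend to be small, I prefer to report completion rates with a confidence interval rather than as a bare percentage. Here is a minimal sketch using the adjusted-Wald interval, a common choice for small-sample binomial data; the figures are hypothetical:

```python
from math import sqrt

def adjusted_wald(successes: int, n: int, z: float = 1.96):
    """95% adjusted-Wald confidence interval for a completion rate."""
    p = (successes + z * z / 2) / (n + z * z)
    margin = z * sqrt(p * (1 - p) / (n + z * z))
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical result: 4 out of 5 participants completed the task.
low, high = adjusted_wald(4, 5)
print(f"Completion rate: 80% (95% CI: {low:.0%} to {high:.0%})")
```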