Today’s digital world is a highly competitive landscape. Making good, informed decisions is key to delivering successful products or services on time and on budget in order to stay relevant. User satisfaction comes from interacting with an intuitive, well-designed app that meets users’ needs easily and quickly. The success of any digital product is measured by the satisfaction of the people using it, and is honed through constant feedback and iteration. And to validate the effectiveness of the interface, UX & Strategy teams rely on Usability Studies.
Benefits of Usability Testing
Presenting a prototype of our product to representative users enables us to gather quantitative and qualitative data. Usability testing helps us validate that user expectations have been met and provides valuable feedback based on observation and behavior (because there’s a big difference between what users say and what they actually do). This provides an opportunity to engineer out the flaws, avoid uncertainty and internal conflict, and minimize the risk of building the wrong thing, saving time, money and other resources.
The end result of Usability Testing is not statistical validity (the outcome of quantitative research) but the verification of insights and assumptions, based on behavioral observation (the outcome of qualitative research).
Creating Effective Task Scenarios
A Usability Study involves 4 key steps:
- Develop the Test Plan
- Run the test
- Analyze the results
- Implement the feedback in your design, and repeat.
Fundamental to each step are Task Scenarios: the “activities” you request the user to do when testing the interface.
Consider task scenarios as a goal within a context. Instead of simply telling your participants what to do, provide context on why they’re doing it. This helps engage participants by making the tasks relatable and easier to understand.
Consider the user
Prior to testing, you should have already defined a set of Personas or Proto-Personas to test with. The best results are achieved by testing with five potential users who match your defined criteria.
Make the tasks realistic.
Ask participants to perform tasks they’re familiar with. For example, if you want to test an app that lets users schedule daycare for babies, test it with someone who has small children and has previously used such services.
Make the tasks actionable.
Formulate tasks in a way that inspires participants to act, not talk. Asking “how would you do” a task encourages the user to answer with words. Instead, your task should use imperative sentences that instruct the user on what to do. For example: “Find the closest health center to your current location.”
Avoid revealing the answers.
Tasks should be clear enough for participants to know what they need to accomplish, but ambiguous enough to not describe steps or give away clues on how to do it. For example, if your interface displays the word “promotions” in your options menu, avoid using this word in your task instructions.
Formulating Your Task Scenarios
Well-formed task scenarios yield smoother tests. Describe the situation and the task for participants by using the formula: You + Motivation + Action.
Motivation = Why the user does X
Action = What the user needs to accomplish
You [are interested in buying a jacket while saving some money] + [Find a jacket discounted at least 30% and save it to your cart].
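If it helps to see the formula mechanically, here is a minimal sketch that assembles a task scenario from its two parts. The function name and wording are purely illustrative, not a standard tool or API:

```python
# Hypothetical sketch of the You + Motivation + Action formula.
# All names here are illustrative assumptions.

def build_scenario(motivation: str, action: str) -> str:
    """Combine a motivation (why) and an action (what) into one task scenario."""
    return f"You {motivation}. {action}."

scenario = build_scenario(
    motivation="are interested in buying a jacket while saving some money",
    action="Find a jacket discounted at least 30% and save it to your cart",
)
print(scenario)
# → You are interested in buying a jacket while saving some money. Find a jacket discounted at least 30% and save it to your cart.
```

Keeping motivation and action as separate fields also makes it easy to review each scenario against the guidelines above: the action should be imperative, and neither part should leak interface vocabulary.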
Sequencing Your Task Scenarios
When running your test, arrange the scenarios from easy to difficult. Start by introducing how the participant arrived at your site or application. Then proceed with an orientation task, something easy like “create a new account,” that allows the participant to get comfortable. Then continue with the remaining tasks.
Remember, your test should be no longer than 1 hour, so try to present 5 – 8 tasks to participants. Consider testing the 20% of your interface that provides 80% of the value to users.
Tips for Running Your Test & Capturing Findings
Be sure to include two key roles when running your test:
A Moderator, who presents and guides participants through the tasks while reminding them to think aloud during the session to externalize their thoughts and feelings. The moderator should also reassure participants that you are testing the interface, not them.
Observers, who take notes from another room to capture the feedback and insights gathered during the sessions.
With your tasks defined, create a chart where you map the notes taken by observers, based on participant and task. Fill the board with your findings, and assign color codes for a better overview. Capture positive findings (blue), errors, sources of confusion and frustration (red), as well as neutral findings such as quotes and recommendations (yellow).
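One lightweight way to represent such a chart is a participant-by-task grid of color-coded notes. This is a hypothetical sketch under the color scheme above (blue = positive, red = error or confusion, yellow = neutral), not a required tool:

```python
# Hypothetical sketch: a participant-by-task grid of color-coded findings.
# blue = positive, red = error/confusion/frustration, yellow = neutral.
from collections import defaultdict

# key: (participant, task) -> list of (color, note)
findings = defaultdict(list)

def add_finding(participant: str, task: str, color: str, note: str) -> None:
    assert color in {"blue", "red", "yellow"}, "unknown color code"
    findings[(participant, task)].append((color, note))

add_finding("P1", "Create account", "blue", "Completed quickly, no hesitation")
add_finding("P2", "Create account", "red", "Did not notice the sign-up button")
add_finding("P2", "Find promotion", "yellow", '"I expected a search bar here"')

# Count red findings per task to spot problem areas at a glance
red_per_task = defaultdict(int)
for (participant, task), notes in findings.items():
    red_per_task[task] += sum(1 for color, _ in notes if color == "red")
print(dict(red_per_task))
# → {'Create account': 1, 'Find promotion': 0}
```

A physical whiteboard with sticky notes serves the same purpose; the point is simply that every note stays traceable to a participant and a task.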
Analyze Results and Prioritize Findings
Once you’ve concluded your Usability Testing sessions, a good way to centralize and prioritize the findings is to review and de-duplicate your notes, then enter the items in an Excel file to track the notes from each task, listing each by participant.
In addition, rank the severity of each item as shown:
- Major issue (an error that completely prevented the participant from finishing the task).
- Minor issue (confusions or mistakes that were corrected).
- Suggestions (improvements recommended by the user).
The frequency of the errors combined with their severity helps us to identify the most critical areas to focus on in the next iteration.
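This combination of frequency and severity can be sketched as a simple weighted ranking. The severity weights below are illustrative assumptions, not values from any standard; adjust them to your team’s judgment:

```python
# Hypothetical sketch: prioritizing findings by severity x frequency.
# Weights are illustrative assumptions, not prescribed values.

SEVERITY_WEIGHT = {"major": 3, "minor": 2, "suggestion": 1}

# Each finding: (description, severity, number of participants affected)
findings = [
    ("Checkout button not visible", "major", 4),
    ("Confusing label on promotions menu", "minor", 3),
    ("Add a search shortcut", "suggestion", 2),
    ("Form error message unclear", "minor", 1),
]

def priority(finding) -> int:
    """Score a finding as severity weight times frequency."""
    _, severity, frequency = finding
    return SEVERITY_WEIGHT[severity] * frequency

ranked = sorted(findings, key=priority, reverse=True)
for desc, severity, freq in ranked:
    print(f"{priority((desc, severity, freq)):>2}  {severity:<10}  {desc}")
```

The highest-scoring items (here, the major error seen by four participants) become the critical areas to address in the next iteration.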
Well-formulated task scenarios create an easy-to-follow structure for your research team to plan and run usability testing, and help participants feel comfortable and better engage with the study by presenting a series of tasks they can relate to.
This allows the team to gather valuable feedback based on real user behaviors, helping to validate assumptions and create a clear plan of action for improving the designs. For more information on how Anexinet can help your enterprise organization produce game-changing digital experiences for employees, customers and end users, please check out our menu of Enterprise Mobility kickstarts now.