Usability testing
Qualitative research methods
The staple in product designers' and researchers' method toolkits. It can be used at all research levels, but I would say it's most widely used to:
- Evaluate the usability of a conceptual design
- Evaluate concepts against each other to understand which one provides the superior experience
- Evaluate an existing design or product to identify improvement opportunities
The golden rule says five testers, but in reality it often requires more people: for the results to be representative, you most likely need to target multiple users within each user or customer segment to see trends.
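If you want a feel for where the five-tester rule comes from, the classic model behind it (from Nielsen and Landauer) estimates the share of problems found by n testers as 1 - (1 - p)^n, where p is the probability that a single tester surfaces a given problem. Here is a minimal sketch of that curve, assuming their published estimate of p ≈ 0.31; your own rate will differ, and the model applies per homogeneous segment, not across mixed segments:

```python
# Expected share of usability problems found with n testers,
# following the 1 - (1 - p)^n model (Nielsen & Landauer).
# p = 0.31 is their classic estimate, not a universal constant.

def share_found(n: int, p: float = 0.31) -> float:
    """Expected proportion of problems uncovered by n testers."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    for n in range(1, 11):
        print(f"{n:2d} testers -> {share_found(n):.0%} of problems")
    # With p = 0.31, five testers find roughly 85% of problems,
    # but only within one segment. Two distinct user segments
    # means roughly five testers per segment.
```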
Moderated usability testing
This is where you facilitate the session and are present in the room (or on the remote call) with the tester. It can be performed in person or remotely.
From my experience, I would say it's easier to recruit people for remote usability testing, as participants won't have to take as much time out of their day since they don't have to physically travel anywhere. It therefore also comes at a lower cost, since you can offer a smaller incentive to participate in the tests.
In-person testing, however, often sparks deeper conversation, as it's easier to get participants to warm up, and it does, in my opinion, provide richer qualitative outcomes. One major advantage of in-person testing is body language: its subtle cues tell you a lot about how the other person is reacting to your questions and what their attitude is towards the product you're evaluating.
In-person testing does, however, come at a higher cost, as it needs to cover travel and more time away from work or free time. It also means more effort from you in organising the research, but it's a great tool for engaging stakeholders in the design process, as it "makes a day out of it".
Unmoderated usability testing
This is where you typically use software like UserZoom or UserTesting to perform remote, unmoderated usability tests*. It's great for quick insights, but it comes at the cost of not being able to ask follow-up questions and really get to the bottom of testers' behaviours and attitudes.
It takes some time to get the hang of unmoderated usability testing: you need to be very thoughtful when constructing the tests so the testers don't get overwhelmed by the instructions and questions, while you still get value from the testing.
I'd also like to point out that some products are more suitable for unmoderated usability testing than others. E-commerce apps or websites, streaming services, and social media platforms are some examples of products that tend to suit unmoderated testing well. This is because the general public (who are on these platforms) will have an easier time relating to these mainstream products, since they probably already have a relationship with similar products.
*Many user research platforms also offer moderated remote testing.
So which one should you opt for? If I were to generalise, I'd say: if you're doing usability testing at a strategic level, go for moderated, in-person tests if circumstances allow, and moderated, remote tests if not. At this level you need to understand the underlying motivators behind behaviours and attitudes, and you simply cannot ask the all-important on-the-spot follow-up questions that you need at this level.
For tactical research, both moderated and unmoderated testing work, depending on the scope of the concept and the complexity of the product. Some products, as stated previously, are more suited to unmoderated testing than others. If you are evaluating enterprise software, unmoderated testing most likely isn't for you, as you need to target a specific set of people to get valuable insights. However, if you're evaluating an e-commerce app or an accommodation booking service, unmoderated testing can be a great option. If you have a big scope and concept to evaluate, though, I'd say go for moderated testing: it allows you to ask follow-up questions on the spot, which you simply cannot do in unmoderated testing.
For operational research, I'd say the same as for tactical research: it's the type of product or service you're evaluating that decides whether unmoderated testing is suitable. Otherwise, this is a great tool for the everyday, quick-and-dirty research you do when working at an operational level.
I'm all for plans and structured work, as long as it doesn't hinder the work of actually getting the insights. Don't get me wrong, it's good to have a plan, but sometimes research plans get so detailed and into the nitty-gritty that testing becomes too cumbersome and is instead never done. Start small and build your way up until you reach a level that suits you.
Is usability testing suitable for what you're researching? Think about what you want to achieve and whether usability testing is the best method for it.
Purpose - why are you doing this? Define research questions for the key things you wish to be able to answer once the study is complete.
Who are you targeting? Think about what types of users, segments, etc. you need to recruit for the test to be representative of your demographics and research questions.
How do you design the test to reach your desired insights? Think about which questions and tasks are key to reaching your goal and answering the research questions you defined.
Create a script. Jot down all the questions you plan to ask, then work with and review them until you feel you've created a good structure. Make sure to leave time for in-the-moment follow-up questions.
Secure incentives and start recruiting!
Invite people to observe (and document) the sessions - this helps build engagement around research and takes a lot of hassle out of documenting the insights.
Record the sessions and prepare appropriate consent forms - GDPR and NDA documents should be sent to the testers beforehand for transparency and ethical reasons.
Start off with a meet and greet; try to find some shared interest or common ground with the tester before getting started, to make them feel comfortable with you and the setting. Make sure they know you are not testing them: they are helping you evaluate the product. Be sure to sign any NDA and GDPR documents needed for proper documentation.
In all sessions, but especially when researching sensitive subjects or products that can be tied to sensitive subjects, make sure to clarify that the tester can cancel the test at any time without implications, such as losing the agreed compensation. This helps testers feel safe and is a cornerstone of ethical research practice.
After all the formalities, it's time to get started with the test and interview. Typically you'd start by asking questions about the person, their relationship to similar products, and so on, but also ask if they'd like to tell you a little bit about themselves. This is also a good time to tell them something about yourself so they feel it's a two-way street, and not you interrogating them.
Introduce the task by asking if they'd like to take a look at what you're working on; by doing so, you are not the one telling them what to do, and it will make them feel more at ease.
When the participant is performing the tasks and thinking aloud, make sure to make affirming sounds regardless of how well they are getting along with the product or task. Be patient and help them as little as you can. A common situation I find myself in when facilitating tests is that the tester is hesitant and says "I don't know how to do this". Instead of showing them what to do, try to reason with them and ask, "Show me what you think you should do". If you show them how to solve the task, there's no real point in performing the test to begin with.
Make sure to leave time for a debriefing where you ask questions about what their experience was like, their likes and dislikes, and whether they struggled with anything. Also remember to ask if they have any questions for you, and take your time to answer properly and thoroughly if they do.
OK, so you've done the testing. Now what? Depending on the purpose of the research and the research questions you defined, you can aim for different levels of documentation.
A big tip is to invite stakeholders, developers, other designers, POs, PMs, etc. to observe the usability testing sessions and give them the task of taking notes on post-its. I prefer them to use a tool like Miro or FigJam, as it's easier to rearrange and structure the notes, and it also acts as documentation of the study. Make sure to give the observers different tasks if there are many of them:
- One documents all the negatives
- One documents all the positives
- One focuses on documenting quotes
- One transcribes
- One is responsible for sending you questions when they want you to ask the participant something specific
This increases the chances of getting unbiased documentation and makes sure you cover more than you would if everybody just listened in.
Create an affinity map of the post-it notes to visualise patterns in common pains, needs, obstacles, etc. Then I'd suggest inviting the stakeholders and/or observers to a workshop where you run through all the groups and do a voting exercise to get the team's input on which insights they think are the most important. You don't have to agree with the outcome, but you open the floor for them to share their thoughts and reflect on the insights before you hand over your documentation, which can otherwise be difficult for some people to accept if they don't agree with the findings and weren't involved in the research.
Depending on the characteristics of the findings, the complexity of the product, and the scope, there are multiple ways to document what you've concluded. You could go for the classic presentation or research report, use customer journey mapping to visualise the findings in the context of the journey, or create artefacts like archetypes or personas if you have that type of insights. You could also define customer groups and do a value proposition mapping exercise with stakeholders to share and analyse the insights in relation to the product.
The possibilities are really endless; it all depends on the scope, the nature of the research and insights, and your ambitions.
The most important thing in usability testing is the questions. Your questions, and how you ask them, will determine the outcome of the test and the insights you get. Be sure to ask open-ended, unbiased questions, and don't nudge your testers into saying what they think you want to hear.
Be sure to check out the section on interviews for hands-on tips and tricks on how to ask unbiased questions. 👇
Michael Margolis is an experienced researcher at GV and has written great articles with hands-on tips on how to build a research culture. Be sure to check out his articles on Medium.