In this guide, we’ll let you in on UserQ’s top tips and tricks for using our selection of remote user research tools as efficiently as possible. After all, the better you plan your tests and studies, the more reliable the data you collect.
So what are the must-follow building blocks to a successful remote user research study? Let’s discuss.
Choose the type of test wisely
Deciding which research method to use depends on where you are in the design process. Card sorting and surveys are best for earlier research, when you’re getting to know your audience and you’re open to new ideas. Tree testing, preference tests, and first click tests are most effective later on, once you need to validate your existing drafts or prototypes with real consumer feedback.
Have a read of our full guide for more advice on what each methodology is used for and when it’s most effectively applied.
Keep your studies short and sweet
It’s easy to think that the bigger or longer the study, the more useful data you’ll collect – but trying to squeeze as much as you can out of your research can actually have the reverse effect. A test that’s too long may result in participant fatigue, where users get frustrated or bored and either give up or stop answering thoughtfully.
To reduce abandonment rate, we recommend limiting your test tasks, keeping them brief and to the point. Don’t overload a survey with endless questions, or pack your card sorting tests with too many cards to sort through. We also recommend prioritising the most important assumptions that you need to investigate. For example, ask first about the value proposition of your product and only later (with a second test) about the information architecture or specific features.
Or if the topic of the test is already focused on product features, try to split it up into different flows and sub-topics. For example, with a prototype test for an online store, we’d recommend first analysing the user identification flow and later the checkout flow. In this case, it would also make much more sense for the user if the test kept a focus on just one hypothetical scenario instead of going all over the place with multiple scenarios to keep up with.
Short and sweet = happier participants who aren’t overwhelmed with the amount of information and questions being thrown at them.
Consider sample size
The size of your participant pool is also important to get right. A sample that’s too small gives your findings no statistical significance, and a single outlier can skew the results. On the other hand, a sample that’s too big can make your study complex, costly, and time-consuming to run.
Instead, we recommend finding the balance with a manageable and realistic sample size based on the type of test you’re carrying out. We’ve put together a full guide on the ideal sample size for each type of test. Take a read.
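As a rough way of thinking about the small-sample problem, a standard statistics heuristic (not a UserQ-specific formula) estimates how many participants you need to measure a proportion – say, a task success rate – within a given margin of error. A minimal sketch, using the normal-approximation formula n = z²·p·(1−p)/e²:

```python
import math

def required_sample_size(confidence_z=1.96, proportion=0.5, margin_of_error=0.10):
    """Normal-approximation sample size for estimating a proportion
    (e.g. a task success rate) to within a given margin of error.

    confidence_z: z-score for the confidence level (1.96 ~ 95%)
    proportion:   expected proportion; 0.5 is the worst (largest) case
    margin_of_error: acceptable +/- error on the estimate
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# 95% confidence, worst-case 50% proportion, +/-10% margin of error
print(required_sample_size())                        # 97
# Halving the margin of error roughly quadruples the sample you need
print(required_sample_size(margin_of_error=0.05))    # 385
```

This is only a ballpark guide – for the practical per-test recommendations, follow the sample size guide linked above.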
Watch your wording
There’s an art to writing user research questions and tasks. The more context you provide participants, the better the answers they give. A useful way of introducing questions is to provide a hypothetical situation that gets people thinking. For instance, if you’re designing a new tree test where participants are tasked with finding a product in your site navigation, you could ask: “You’re hosting a dinner party and need some new cutlery – where would you find the silver forks?”
The general rule of thumb is to write instructions that carry as much information as possible in a concise, succinct way. Make sure each question covers one single element, not several (which can be confusing or overwhelming). For example, we’d recommend asking “What do you think of the design?” rather than “What do you think of the design and content?”
It’s also important that you don’t write leading questions that unknowingly sway the answers. Say you’re asking for feedback on your website design: wording the question as “Do you like the website design?” might nudge participants towards positive feedback. Instead, keep the question neutral, such as: “What do you think of the website design?”
Label your visual cues
If you’re conducting a preference test or first click test, make sure you label any images with appropriate and relevant titles. Why? It provides context and ensures participants know exactly what they’re looking at. Titles also avoid any confusion when participants are tasked with written feedback, as they can easily refer back to specific designs using the title names.
When labelling images, you also need to make sure the labels don’t mislead participants. Avoid coded or numerical labels that could suggest one image is better or worse than another – so no “A and B”, “1 and 2”, and so on. Instead, use descriptive, worded titles to avoid any potential bias, e.g. “Full-width landing page” or “Two-column landing page”.
Use the right participants
Our UserQ platform makes it super easy to find the right kind of participants for your study. You can choose to share the test link with your own set of participants OR you can recruit from our UserQ panel. Use it to target participants from specific demographics, including age, gender, nationality, and lots more.
We always recommend customising your participant pool to the target audience of your digital product. So, if you’re creating a survey for an electronics store that operates solely in Abu Dhabi, you’ll probably get the best, most valid results if you select participants who reside in the city itself. Just remember, the broader the panel, the faster the results come in.
But it’s not just about location – other demographics and criteria come into play when selecting participants, e.g. education, financial status, and even whether they’re existing users of your product.
The criteria you select should always be based on your objectives. In other words, before you start reaching out to people to take part in your study, make sure they tick the right boxes. Not every test is suitable for everyone, and some demographics simply don’t make sense to include. For example, if you’re building an online learning community targeting university graduates, participants without a university education are unlikely to be a suitable target.
Make use of introduction questions and conclusion questions
Introduction questions on our UserQ platform do two jobs: they help you understand user behaviour, habits, and perceptions, and they build context for participants so they know what to expect from the test.
Conclusion questions on our UserQ platform help you gather feedback on the test participants just completed, so you can improve your methodology next time around. Just make sure you keep the questions relevant to the study. For example, if you’re conducting a card sorting test for a grocery website, don’t start asking participants what car they drive.
Ready to get started?
Take a look at our full methodology guides to discover more about how to conduct and analyse our selection of remote research methods.
Sign up as a researcher and create your account for free – with no monthly commitments.