I’m (hopefully) not going to be the first to tell you that user testing is vital for creating and confirming strategy in website projects.
That battle has already largely been won. What does remain is the question of how to do it well. It’s easier than ever to collect user data, but data is easy to misinterpret (and easy to warp into support for any opinion). It can also be difficult to translate into action. The following are several lessons I’ve learned on making the most out of user testing and gleaning valuable insight.
Your Users Are Not You
This first one sounds obvious, but it's important to understand, before we move on, just how unlike you the users you are testing are. They have different preferences, abilities, emotional ranges, digital skill sets, lifestyles, goals, families, and more. They interact with your test after possibly decades of experiences entirely unlike yours. The full data of their lives is impossible to account for with any amount of demographic filter questions.
This means they are going to do things you don’t (and can’t) understand. This is soft science. Let’s move forward with that in mind.
Set a Plan
Know why you are doing this. It might be soft, but it’s still science. Establish your objectives for testing, define your hypotheses, and investigate accordingly. What steps will you take, based on potential results? The work to strategize up front can save you from feeling lost once the results are in.
Aim for around 5 participants in your test for good results (and see this article from Jakob Nielsen for a breakdown of exceptions), but don’t be overly anxious about that number. Even a single user can open your eyes to issues you haven’t anticipated, if you keep the following points in mind.
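The "about 5" guideline comes from a diminishing-returns model: each additional participant rediscovers many of the problems earlier participants already surfaced. A minimal sketch of that model, assuming (per Nielsen and Landauer's research) that any one participant finds a given usability problem with a fixed probability — the 31% figure is their reported average, not a universal constant:

```python
def problems_found(n_users, p_find=0.31):
    """Expected share of existing usability problems uncovered by n_users,
    assuming each user independently finds any given problem with
    probability p_find (0.31 is Nielsen & Landauer's reported average)."""
    return 1 - (1 - p_find) ** n_users

# Coverage climbs quickly, then flattens:
for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems")
```

Under these assumptions, five users surface roughly 85% of the problems, and each user beyond that adds progressively less — which is why the usual advice is several small rounds of testing rather than one large one.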
Watch for the Unexpected
Your test plan is your guide and provides the context for the tasks in your test, but be alert for things beyond what you’ve intended to look for. The entire point of testing is to learn the unknown, including the answers to questions you never would have thought to ask.
In my tests, I’ve come across users not recognizing commonly understood elements (for example, not realizing they have left the website when the navigation/design changes or that the primary logo links to the homepage) and using elements in ways different than expected (for example, using a calendar filter as a reference for what the weekend’s dates are, rather than as a filter). Foster natural exploration in your tests and pay attention to what users do that’s off the beaten path. It can highlight unexpected issues and identify potential opportunities.
Be Wary of Anecdotes
When you do observe unexpected behavior, analyze it carefully and try to find trends. Don’t react immediately to a single user’s behavior if the cause is not clear. Think through what may have led them to that behavior in the test environment. Treat changes that respond to their behavior as options to weigh, not mandates. Does the change impact users who wouldn’t behave the same way? What is the potential cost of accounting for this behavior?
Understand the Context of the Test vs. the Real World
User tests can take a variety of formats, but the fact that users know they are being observed can affect their behavior. Keep the context of the test in mind. Users asked to explore a site are more likely to try harder to find information or move throughout the site than they would in a real-world situation. The test might give users more (or, in some formats, less) information about the interface than a regular visitor would have on the site. When analyzing results, consider how the format may have had an impact.
Use Qualitative and Quantitative Data Together
When possible, try to use both the subjective and objective data you gather to form a balanced picture. User testing videos (which you can gather through services like UserTest.io and UserTesting.com) and focus groups (which you can recruit for through services like UserInterviews.com) are great for understanding or identifying behavior you see (or can then look for) in your quantitative data (whether gathered from Google Analytics, Hotjar, or others). I’m particularly fond of using user testing videos to uncover potential issues and using FullStory to uncover how common the behavior is and what variations of the behavior other users are displaying.
Small Things Can Be Valuable
Despite the risks of overreacting to anecdotal evidence, small amounts of user testing are almost always more helpful than none. Look at services like OptimalWorkshop.com, UserTesting.com, or UserTest.io. Use small tests up front to identify potential opportunities for further investigation, gain efficient insight, or build the case for further research.
When you make it a permanent part of your process and carefully consider the factors above, user testing can be efficient and effective. There’s no more excuse for not testing — don’t let there be one for not getting the most out of it.