Website usability testing is an often overlooked phase in designing a website or application. It can be as simple as glancing through heat maps or as sophisticated as using software to track eye movement. But no matter how advanced you get with it, usability testing provides valuable feedback for UX designers to create functional, intuitive interfaces.
In this primer on website usability testing, we'll cover what it is, what it isn't, why it matters, and how to set your own usability testing in motion.
Usability testing is the process of testing the functional efficiency of a website, application, or other digital product by observing real users interact with the product. Researchers can monitor user interactions through session recordings or in real time.
While researchers can conduct usability testing on products beyond websites, this post will focus on the subject of website usability testing specifically.
Website usability testing is vital because it is the only way to truly understand how real, unbiased users interact with a website or prototype. Without it, product managers and UX designers can only guess where problems might arise. By conducting user testing, product teams can directly uncover usability issues and gain authentic insights into the user experience.
Early-stage user testing can also catch mistakes before you get too far along in the development stage (but more on that later).
There are different types of usability testing carried out in different ways. The line between usability testing methods and types is murky — in some ways, a method can be a type — but here are some general category divisions.
Qualitative testing collects anecdotal insights and findings of how people use the product. Perceptive qualitative researchers can make deductions based on intangible clues like facial expressions and body language. Thus, qualitative research helps uncover usability problems and user preferences.
Quantitative testing collects measurable insights to show metric-based trends in user behavior. An example of quantitative testing would be measuring how long it takes users to complete a specific task successfully. Quantitative testing establishes benchmark user metrics.
Design teams can conduct user testing throughout the entire creation process. However, some tests are better suited for collecting feedback in the website planning stage. Early-stage testing can ensure you don't get too far along and waste resources on a website that doesn't work well for users.
Later-stage testing is done with a more fully fleshed-out website prototype.
In-person user research is done on-site. The evaluator, participant, and sometimes a facilitator gather in a usability lab. Unlike remote testing, in-person testing is almost always moderated.
Remote testing can be divided into moderated and unmoderated.
Similar to in-person testing, with moderated remote testing, the test facilitator interacts with the test participant and asks them to perform specific tasks. The difference, of course, is that the facilitator and participant use screen-sharing platforms and a webcam to interact.
With unmoderated remote testing, remote testing software conducts the usability evaluation by guiding the user through the task instructions. The software then collects user questions, feedback, and session recordings. When the test completes, the software sends the recording and metrics to the researcher for review.
Remote usability testing is more popular than on-site testing because it is less expensive and easier to conduct in terms of scheduling and logistics.
Even the most carefully designed website can still benefit from user testing because even the best UX and product designers can't foresee every real-world problem.
User testing can increase a website's user engagement by revealing what areas people like and which ones they skip over. You can use the findings to reallocate development resources to enhance the areas that boost engagement. You can also decide whether it's worth improving the elements users gloss over — or whether it's a waste of time.
The people who build and manage a website undoubtedly know it better than anyone else, but for that very reason they can be blind to its inefficiencies, like the proverbial fish who asks another fish, "What's water?"
Usability testing uncovers user flow issues. Perhaps the website navigation is confusing or inefficient. Maybe moving between the homepage and other pages, in either direction, is harder than it should be.
When measuring website usability, there are three general topics to keep in mind.
Efficiency is crucial because it tells you how long it takes a user to complete tasks. The goal is to decrease the time and number of clicks it takes new visitors to complete a pre-defined measure of success. For Facebook, a measure of success is how long it takes a new user to add a certain number of new friends. For your website, it might be how quickly users download a brochure. Efficiency measures time.
Effectiveness tells you what percentage of users can complete the desired task. In website terms, effectiveness is parallel to a conversion rate in many cases. Effectiveness measures the completion rate, usually expressed as a percentage.
User satisfaction tells you how your users perceived the experience of your website.
Satisfaction measures a website's comfort and acceptability of use, or, more simply, how happy your users are to use your site. Many testers survey this metric on a scale of 1 to 10.
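To make these three measures concrete, here is a minimal sketch of how you might compute them from the session data an unmoderated remote test tool exports. The session records and field layout are hypothetical, invented for illustration:

```python
from statistics import mean

# Hypothetical session records: (seconds on task, task completed?, satisfaction 1-10).
sessions = [
    (42.0, True, 8),
    (95.5, True, 6),
    (120.0, False, 3),
    (38.2, True, 9),
    (74.9, True, 7),
]

# Efficiency: average time on task, counting completed sessions only.
efficiency = mean(t for t, done, _ in sessions if done)

# Effectiveness: completion rate across all sessions, as a percentage.
effectiveness = 100 * sum(done for _, done, _ in sessions) / len(sessions)

# Satisfaction: average self-reported score on the 1-10 scale.
satisfaction = mean(score for _, _, score in sessions)

print(f"Efficiency:    {efficiency:.1f} s per completed task")
print(f"Effectiveness: {effectiveness:.0f}% completion rate")
print(f"Satisfaction:  {satisfaction:.1f} / 10")
```

Tracked over successive test rounds, these three numbers become the benchmarks against which you judge whether a redesign actually improved usability.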
Now that we've covered what usability testing is and why the data it provides is so valuable, let's discuss how to design a process to conduct your own user testing.
As with most complex problems, begin with the end in mind: think about the top goals for your website and decide which user actions you want to optimize for. Those goals determine the metrics you need to monitor and gather.
Your stage of development will partially determine the test type you choose.
For example, early-stage website testing is often used to help determine a website's architecture. Examples of tests suited for this purpose are:
Card sorts help you determine the best structure for your website. In card sorting, you create cards with page concepts, one per card. You then ask users to sort and categorize the cards to inform the design of the website architecture.
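A common way to analyze open card-sort results is to count how often participants placed each pair of cards in the same group; frequently co-occurring cards are candidates for the same navigation section. A minimal sketch, using invented participant groupings for illustration:

```python
from itertools import combinations
from collections import Counter

# Hypothetical open card-sort results: each participant grouped
# six page-concept cards however they saw fit.
sorts = [
    [{"Pricing", "Free Trial"}, {"Docs", "API"}, {"Blog", "Careers"}],
    [{"Pricing", "Free Trial", "Blog"}, {"Docs", "API"}, {"Careers"}],
    [{"Pricing", "Free Trial"}, {"Docs", "API", "Blog"}, {"Careers"}],
]

# Co-occurrence: how often each pair of cards landed in the same group.
pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

for (a, b), count in pair_counts.most_common(3):
    print(f"{a} + {b}: grouped together by {count}/{len(sorts)} participants")
```

Pairs grouped together by most or all participants (here, Pricing with Free Trial, and Docs with API) suggest pages that belong under the same branch of the site architecture.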
Tree testing is almost the inverse of card sorting. Instead of asking users to sort individual concepts into trees, you give testers pre-defined text-based navigation and then ask them to complete specific tasks.
Tree testing measures how efficiently and effectively testers can complete the tasks given the way you have structured the concepts. For example, an e-commerce tree test might check how easily users can find a woman's olive green trench coat within the given page structure.
Tree testing works better than card sorting for website redesigns since you start with an existing website architecture.
Functional salience testing helps you determine your most important, bigger-picture concepts: you ask users to identify the top three to five page concepts. For a SaaS website, the top three might be Pricing, Technical Features, and a Free Trial.
Later-stage testing often calls for contextual inquiry, observing users in their own environment, which is where field sorts come in.
Field sorting tests how users would interact with your website in their own environment (instead of in a controlled lab). In the context of website field testing, you might ask users to interact with your site on their own devices to replicate the "true" experience on desktop, tablet, or mobile app.
Sophisticated software exists to track user eye movement to determine where testers' eyes are drawn. Another way to determine the site areas where users interact or click the most is to create heat maps. Software like Hotjar can help you build these engagement metric profiles.
A/B testing, also known as split testing, is popular with landing page testing. In this example, a user would be presented with two landing page options. A "winner" page would be chosen based on performance. In A/B testing, generally, only one variable is changed at a time to determine which change is truly optimal.
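Choosing the "winner" usually means checking that the difference in conversion rates is statistically significant rather than noise. One standard approach, shown here as a sketch with invented traffic numbers, is a two-proportion z-test:

```python
from math import sqrt, erf

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-tailed
    return p_a, p_b, z, p_value

# Hypothetical results: page A converted 120 of 2,400 visitors, page B 156 of 2,400.
p_a, p_b, z, p = ab_test(120, 2400, 156, 2400)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers, the p-value comes in below the conventional 0.05 threshold, so you could declare B the winner; with smaller samples, the same rate difference might not be significant, which is why split tests need adequate traffic before a verdict.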
Multivariate testing is very similar to split testing, but it compares multiple variables at one time. It is more time-efficient but is more complicated and thus generally requires special multivariate testing software.
Both multivariate and A/B testing are essential to inform UX design; however, usability purists will insist that usability testing is about how users interact with a website. A/B testing is about optimizing a conversion rate while user testing aims to make interaction with the website as easy and pleasant as possible.
Obviously, conversion rate optimization and usability optimization are two sides of the same coin. If you make your website easier to navigate and more pleasant to use, your conversion rates will naturally improve as well. A rising tide floats all boats.
No matter your opinion on conversion rate vs. usability optimization, everyone can agree that user testing is an integral component to designing a well-planned, elegant website experience. With just a little critical thought on your overall website objectives, you can quickly implement a few website usability tests to ensure your project is a knockout success!
Pastel speeds up feedback collection and approval schedules by making it easy to review and comment on live web pages. With one-click sharing, all project stakeholders, both internal and external, can easily review and give feedback — aggregated in one place! Try Pastel free today.