As I’ve said before, websites should be built for the user, not the business that created them.

This issue of Conversion Gold is going to talk about how you can do that. We’re going to dive into the best way to understand what users think about your website.

It’s difficult to build a great website in a vacuum — which is exactly what an overwhelming number of companies do. Learning from (and, wherever possible, collaborating with) your customers should enable you to build a site that meets all of their requirements in a way that creates value for the business behind it. Such a website should be a win-win: it performs well for the business and is easy for customers to use.

Types of feedback

What we’re really after here is simple feedback. We want to be able to ask people questions about our website and, hopefully, learn from their answers. This feedback allows us to understand our customers’ needs and expectations more closely, and to build features that support them.

Any tool that enables you to ask customers questions should suffice; however, a few formats are generally accepted to be more actionable. The most common ways to gather this type of feedback are focus groups, on-site surveys, and remote user testing.

First, decide whether you’re going to talk to your actual users or use a panel. Speaking to your own users typically requires that you capture them on your site through popups or email surveys. A panel lets you pull feedback from a pool of recruited testers; these platforms often let you filter by demographics or ask pre-screening questions to make sure the testers look (at least a little) like the customers who visit your site.

The other crucial thing to consider is the scope of the feedback. Is this going to be a full-on interview, or a quick two- or three-item questionnaire? Will it be conducted via the one-way submission of a form, or as a real-time, back-and-forth engagement over email, phone, or video?

Focus groups are the most difficult and expensive to run. Typically they’re done in person and can require sizable investment to compensate groups of people for their time. They’re probably too much for most of us.

At the other end of the spectrum, you have on-site surveys. If you already have web traffic, the simplest way to get started is often to survey a random sample of visitors: ask them how they feel about the site, what’s easy to do, and what isn’t. On-site surveys are cheap and quick. For those familiar with it, Net Promoter Score (NPS) is quite similar to what I’m talking about, though more geared towards the product or service itself.
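If you’d rather wire this up yourself than drop in an off-the-shelf widget, the sampling logic is tiny. Here’s a minimal sketch in TypeScript; the showSurvey callback is a placeholder for whatever prompt or modal you’d actually use:

```ts
// Show a short survey prompt to a random ~5% of visitors, at most once each.
const SAMPLE_RATE = 0.05;
const STORAGE_KEY = "surveyShown";

function maybeShowSurvey(showSurvey: () => void): void {
  // Skip visitors who have already seen the prompt.
  if (localStorage.getItem(STORAGE_KEY)) return;

  if (Math.random() < SAMPLE_RATE) {
    localStorage.setItem(STORAGE_KEY, "1");
    showSurvey(); // e.g. open a modal asking "What's easy? What isn't?"
  }
}

maybeShowSurvey(() => {
  // Placeholder: swap in your survey widget or a link to a form.
  console.log("Survey prompt would appear here.");
});
```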

You could also segment these visits: offer a short pre-conversion survey on-site, then send a fuller email survey after a sale or conversion event. The latter lets you ask users who completed your conversion event why they did so, and presumably helps you learn how to encourage more visitors to follow suit.

Both on-site surveys and focus groups have significant drawbacks when compared to user testing.

  1. On-site surveys:
    • users can get frustrated with popup/popover prompts
    • largely text-driven
    • users aren’t always sure how to describe an issue
  2. Focus groups:
    • typically used prior to launching
    • quite expensive (and intensive to run)
    • work best with physical products
    • slow, and not feasible for rapid iteration

The third kind of feedback, and my favorite (if you couldn’t tell), is remote user testing.

What is remote user testing?

Remote user testing is a research methodology in which real users visit your website and complete whatever tasks you set before them. Their entire session should be recorded, allowing you to watch their behavior as they browse the site and try to complete “quests.” Audio is also a key component: the user narrates their experience and gives you feedback in real time as they use the site.

Here is what a sample video looks like:

https://userinsights.com/share/Cwk09PPH18

What are the various types of tests?

Most platforms offer one-size-fits-all videos: they might allow you to ask a list of questions or input a list of required steps. I’ve taken it a step further and listed below the primary “types” of tests that I run.

General Feedback

Testers spend 5 minutes on your site and describe to you what they learn, how they browse, what they see and what they expect.

Navigation Stress

Testers complete a series of goals and let you know what, if anything, was difficult. Like an online scavenger hunt (with a conversion outcome).

Landing Page Test

Testers check out your landing page to make sure it’s clear and does what you think it should do.

Competitor Feedback

Testers give feedback on your site and three of your closest competitors. See how you stack up, quickly and easily.

What kind of results can you expect?

I think a lot of people who dip their toes into user testing find it both nerve-wracking and, at first, not as actionable as they’d hoped.

Yes, it’s going to be nerve-wracking. I always get nervous having 10 strangers record videos telling me exactly, point-blank, what they think of the site that I just spent months of my life on.

These are the two most common “complaints” I’ve seen from first-time user testing projects:

“It seems like people are just being mean.”

“They told me a lot of things they don’t like, but gave me no input on how to fix them.”

Henry Ford is often credited with saying, “If I had asked people what they wanted, they would have said faster horses.” (Note: it now appears that he may never have said that.)

Most of us serve groups of people that aren’t better at the internet than we are. The vast majority of consumers are pretty darn non-technical. Why, then, would we expect them to have the vocabulary and mental models to offer us actionable feedback?

In reality, the actionable insights come from you. You, as the person in charge of this site, have to know what you’re looking for. You have to be able to identify points of friction (extra mouse clicks, mouse wiggles, rapid scrolling, etc.) that the user might not call out. If they say they hate certain colors, or that a certain font is hard to read, take a look at the context: it could be that the elements are competing too much for color dominance. You might also take note of how they find information on your site and see if there are ways to make the easier paths more prominent.

The quality of the feedback is dependent on a few things:

  1. How good your platform’s tools are — you need to be able to ask specific questions and to filter/sort by demographics and device profiles.
  2. How good your questions are — any answer is only as good as the question that was asked.
  3. How good you are at paying attention to what the user does, not just what they say (while still noting what they say).

Once you have the videos, I recommend that you sit down, watch them (at 2x speed, if possible), and take notes. Write down any observations the tester makes. Note anything they say they like or dislike. Watch for periods of frustration, points of joy, and points of confusion. These are the emotional aspects of your customer’s journey that you’ll want to focus on.

It’s also likely that you’ll find bugs or other odd issues tied to device type and so on. It’s valuable stuff.

I like to take all of my notes and then combine them into “votes” for certain features or changes (see the sketch below). When doing larger user testing projects, it’s not uncommon to rank issues based on consensus. Do enough tests and you’ll begin to see real patterns that affect a wide variety of users. Also of interest: mobile users will (as expected) behave very differently from desktop users, so make sure you run separate mobile and desktop tests.
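To make the “votes” idea concrete, here’s a toy sketch of the tallying step, assuming you’ve already reduced each tester’s notes to a list of issue tags (the tags and data below are made up):

```ts
// Tally how many testers hit each issue, then rank by consensus.
const testerNotes: Record<string, string[]> = {
  tester1: ["nav-confusing", "checkout-slow"],
  tester2: ["nav-confusing", "font-hard-to-read"],
  tester3: ["checkout-slow", "nav-confusing"],
};

const votes = new Map<string, number>();
for (const issues of Object.values(testerNotes)) {
  // A tester "votes" for an issue at most once, even if they hit it twice.
  for (const issue of new Set(issues)) {
    votes.set(issue, (votes.get(issue) ?? 0) + 1);
  }
}

// Issues the most testers agreed on come first.
const ranked = [...votes.entries()].sort((a, b) => b[1] - a[1]);
console.log(ranked); // [["nav-confusing", 3], ["checkout-slow", 2], ...]
```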

Wrap-up

I heartily suggest that you run some remote user testing. It’s one of the most used tools in my toolbox and is, by far, one of the most valuable. I’ve spent literal months of my life watching user testing videos. Even after all of these years, I still consider that time well spent.

Q&A

Do you recommend using the same users over and over again? Or should you test with new users each time?

This question, to unpack it a little, is about your second and subsequent testing projects. Is it better to use the same users who completed your first project (so, presumably, they could confirm that you’ve fixed their concerns)? Or is it better to use new users, who won’t carry those same concerns in with them?

I don’t think that it’s an awful idea to engage in a little back and forth dialogue if the platform you’re using allows it.

However, I do think that you should always use new users for testing projects. The reason I run user tests is to find general issues that are true for most users, not to laser-focus on one user’s particular sensitivities.

If you were to keep re-using the same tester pool (say, 10 people) for each of your tests, you’d get flawed results, for two reasons:

  1. Those users will already be familiar with your product and may have “learned experiences” that color the results.
  2. You’d be optimizing for that subset of users with no real idea about the implications for the larger group of potential website visitors.

Additionally, I think that there is significant benefit to getting more eyeballs on your work. It’ll help you uncover new issues and bring breadth of insight, which can only serve to improve your product.

Is the feedback valuable even if the users aren’t exactly like my customers?

It really depends on what your intended outcome is. What do you want to learn?

The reason I run user testing is to identify common issues that apply to typical users. That means I’m looking for issues with my website’s design, messaging, and functionality.

Based on my experience, these issues aren’t correlated with demographics or persona-based details. Mostly they’ll affect all users equally, regardless of whether they’re likely to buy your product.

The blatant exception, though, is in cases of diminished ability to use a typical website, whether due to disability or to a lack of technological proficiency (age, niche, etc.).

I don’t think user testing is a great fit for gathering the types of insights that focus groups and/or surveys can unlock. It’s great for getting feedback on your website, but not so great at uncovering insights such as “what other products are you considering?” — those questions are best put to your actual customers through those other formats.

How often do you do user testing?

The question of cadence comes up quite a bit. I don’t think you should commit to a particular schedule of testing, except in cases where your budget requires it.

I typically run tests when I change something. I will always test sites before launch and throughout the development process; these are the times when I’m testing most frequently and iterating on the feedback.

Following the initial MVP/launch stage, I’ll only test and iterate when I have big ideas or new designs, functionality, or messaging. Once a business is launched, I tend to rely on A/B testing and quantitative metrics to generate new ideas and iterations. However, I can probably only do that because I’ve already done a lot of qualitative (user testing) analysis by that stage.

That said, new features, new designs, and new messaging all go through the user testing process. I avoid testing tweaks and small changes that are easily quantified through analytics.

If I had to put a timeline on it, I’d say I run a round of testing every 4-6 months on average, per project. It’s not something I do every month.

How many tests do I need?

Probably 5 tests per “thing” you are trying to figure out. This number comes from an oft-cited study by Jakob Nielsen:

The most striking truth of the curve is that zero users give zero insights.

As soon as you collect data from a single test user, your insights shoot up and you have already learned almost a third of all there is to know about the usability of the design. The difference between zero and even a little bit of data is astounding.

When you test the second user, you will discover that this person does some of the same things as the first user, so there is some overlap in what you learn. People are definitely different, so there will also be something new that the second user does that you did not observe with the first user. So the second user adds some amount of new insight, but not nearly as much as the first user did.

The third user will do many things that you already observed with the first user or with the second user and even some things that you have already seen twice. Plus, of course, the third user will generate a small amount of new data, even if not as much as the first and the second user did.

As you add more and more users, you learn less and less because you will keep seeing the same things again and again. There is no real need to keep observing the same thing multiple times, and you will be very motivated to go back to the drawing board and redesign the site to eliminate the usability problems.

After the fifth user, you are wasting your time by observing the same findings repeatedly but not learning much new.

https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
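If you’re curious where that curve comes from, Nielsen and Landauer model the share of usability problems found by n testers as 1 − (1 − L)^n, with L ≈ 31% per tester in their data. A quick sketch of the arithmetic (the 31% figure is theirs; the rest is just plugging in numbers):

```ts
// Nielsen & Landauer: proportion of usability problems found by n testers,
// assuming each tester surfaces L = 31% of all problems on average.
const L = 0.31;
const found = (n: number) => 1 - Math.pow(1 - L, n);

for (let n = 1; n <= 6; n++) {
  console.log(`${n} tester(s): ~${Math.round(found(n) * 100)}% of problems found`);
}
// 1 ≈ 31%, 3 ≈ 67%, 5 ≈ 84%; past five, you're mostly re-watching known issues.
```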