Not Conducting Surveys? Then You’re Losing Conversions (The Ultimate User Survey Guide)


Bonus Material: 42 Personalization Case Studies

Research is the foundation of the conversion optimization process. 

If we don’t know the facts behind what is happening on the site — and why — we can’t accurately pinpoint what to fix and how to fix it. No matter how much we test.

As discussed previously, research can be roughly divided into three main categories:

  • Quantitative
  • Qualitative
  • Heuristic

Best Practices for Surveying

When you run a user survey, you pose direct questions with the goal of discovering users’ perceptions of specific issues. From the answers, you can identify the most common sources of anxiety and friction.

To get the most useful results, pose open-ended questions, like “What do you look for when you shop for bath products?” This type of question allows users to voice opinions freely, bringing up points you hadn’t even considered.

How to structure user survey questions to get actionable results

How you structure your user survey depends largely on your goal.

Your first task is choosing the right type of survey and delivering it at the right time, to the right audience, with the right questions.

You can implement customer surveys, on-site surveys, exit surveys, in-app surveys, pop-up surveys, or email surveys.

The timing of your survey will affect the types of responses you get.

For example, if you use an exit survey that pops up as someone is abandoning their cart before purchasing, you’ll be sourcing answers from a very different consumer segment than if the survey popped up after a successfully completed purchase.

Before deciding what questions to ask, first consider what you hope to achieve with your survey.

The goal is always to obtain actionable insights.

Consider these questions:

1. Do you have any problems with the site? Yes/No

What does that tell you? Say you find that 63% of people answered “Yes.” So 63% of visitors DO have some sort of problem. But what is the problem?

You have no clue.

The question should be used as an introduction to the next question, which will appear to people who answered “Yes”. It could be something like, “What problems are you experiencing with the site?”

Then customers have space to elaborate on their answers, and you get real insights to work on.
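If your survey tool supports conditional logic, this branching can be expressed as a simple rule. Below is a minimal TypeScript sketch; the `SurveyQuestion` shape and `showIf` field are illustrative assumptions, not any particular survey tool’s API.

```typescript
// Illustrative shapes, not a specific survey tool's API.
type Answer = { questionId: string; value: string };

interface SurveyQuestion {
  id: string;
  text: string;
  // Show this question only when the predicate holds for prior answers.
  showIf?: (answers: Answer[]) => boolean;
}

const questions: SurveyQuestion[] = [
  { id: "q1", text: "Do you have any problems with the site? Yes/No" },
  {
    id: "q2",
    text: "What problems are you experiencing with the site?",
    // The follow-up appears only for respondents who answered "Yes" to q1.
    showIf: (answers) =>
      answers.some((a) => a.questionId === "q1" && a.value === "Yes"),
  },
];

// Decide which questions to render, given the answers collected so far.
function visibleQuestions(answers: Answer[]): SurveyQuestion[] {
  return questions.filter((q) => !q.showIf || q.showIf(answers));
}

console.log(visibleQuestions([{ questionId: "q1", value: "Yes" }]).length); // 2
```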

Another not-so-useful survey question:

2. How would you rate this website?

If your questions are not specific enough, they won't result in actionable insights. You need to elicit answers that help you find out what's wrong, or how to fix it.

A grade alone doesn't identify any problems. Instead, ask something like:

3. What nearly stopped you from buying from this site?

This is a great question to ask in the post-purchase survey example from earlier, as it invites an open-ended response to a specific question.

Good user survey design uncovers many issues

Every question should directly relate to uncovering usability issues, existing friction, and sources of anxiety.

When your aim is to learn about your prospects, however, you have to be even more careful with your questions. The questions you ask should be clear, short, and open-ended (think of questions starting with what, who, where, when, and why).

To find out about your audience, ask things like:

“What can you tell us about yourself (your age, gender, and any other information you feel comfortable giving)?”

And…

“What is the specific problem that our product solves for you?” 

Both of these questions will likely result in answers that tell you more about your target audience and help you establish personas.

Naturally, your questions will differ depending on whether you’re surveying people who already bought your product (a post-purchase survey), people who left the conversion funnel (an exit survey), or visitors who are just browsing (a traffic survey).

If you have to choose just one type of user survey to run, you’ll probably get the most useful information from surveying visitors who did not buy or who dropped out of the conversion funnel.

How to time user surveys effectively

Timing is critical. Ask your questions while the experience is still fresh in respondents’ minds, and the feedback will be closer to reality.

Keep in mind that you should restrict your questions to the experience of shopping on the website. You are not conducting a customer satisfaction survey, so do not ask questions about the product itself — that comes after a customer has bought.

As a rule, you want to serve your surveys in the least disruptive way possible, which is why so many surveys are sent by email. That allows a user to choose the best time to respond.

Since email surveys also run a high risk of being ignored, we recommend including both a deadline and an incentive for completion. 

You can also use exit surveys to find out why individual potential customers did not complete the conversion process. The survey should pop up when the customer displays intent to exit the page (often this is triggered by mouse velocity or location). Exit-intent questions should aim to identify the reason why the person is leaving the site.
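Exit-intent detection is not standardized, but a common heuristic is to watch for the cursor leaving through the top of the viewport at speed (heading for the back button or URL bar). Here is a browser-side TypeScript sketch of that heuristic; the pixel and velocity thresholds, and the `showExitSurvey` function, are illustrative assumptions.

```typescript
// Track cursor velocity from mousemove events (positive = moving toward the top).
let prevY = 0;
let prevT = 0;
let velocityUp = 0; // pixels per millisecond
let surveyShown = false;

document.addEventListener("mousemove", (e) => {
  const now = performance.now();
  const dt = now - prevT;
  if (prevT > 0 && dt > 0) velocityUp = (prevY - e.clientY) / dt;
  prevY = e.clientY;
  prevT = now;
});

// Fire once when the cursor leaves the page near the top, moving fast upward.
document.addEventListener("mouseout", (e) => {
  const leftWindow = e.relatedTarget === null;
  if (!surveyShown && leftWindow && e.clientY <= 10 && velocityUp > 0.3) {
    surveyShown = true;
    showExitSurvey();
  }
});

function showExitSurvey(): void {
  // Placeholder: render an easy-to-close modal asking why the visitor is leaving.
  console.log("Before you go: what stopped you from completing your purchase?");
}
```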

Sometimes the answers will identify business decisions as the source of the problem, such as product cost, shipping cost, or similar. In these cases, if the number of similar answers is significant enough, you may want to reconsider the offending policy.

Another option for survey timing is to let user surveys pop up for visitors as they browse your website. Use these sparingly! Make it easy to opt out of or close the survey to limit the negative influence that the interruption may have on your visitors.

In pop-up surveys, frame questions so that they help you identify user experience problems. These surveys may be triggered when the user has spent some time on the website without either leaving or converting. Ask if they’ve found the information they need.
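As a sketch, such a trigger can be as simple as a timer that is skipped if the visitor has already converted. The 60-second threshold and the `hasConverted` check below are illustrative assumptions.

```typescript
const DWELL_MS = 60_000; // illustrative: one minute of browsing

// Placeholder conversion signal; replace with your own (cart state, URL, etc.).
function hasConverted(): boolean {
  return window.location.pathname.startsWith("/order-confirmation");
}

function showPopupSurvey(question: string): void {
  console.log(question); // placeholder for rendering an easy-to-close modal
}

// After the dwell time, ask the still-browsing, still-unconverted visitor.
setTimeout(() => {
  if (!hasConverted()) {
    showPopupSurvey("Have you found the information you need?");
  }
}, DWELL_MS);
```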

For any survey, one rule applies: keep them short. Anything over 10 questions generates a drastic drop in the quality of answers and the number of completed surveys. People just won’t stick with you for that long.

How to conduct a user survey & encourage responses

Once you have prepared your questions, you must decide how to incentivize visitors to actually answer them.

The most common incentive is to offer a discount or give access to gated, premium content.

Don’t run your survey for very long. Once you have more than 200 responses, you can safely stop the survey, as it’s unlikely that more responses will result in more insights or shed light on more issues. 

Limiting survey duration also increases relevance, cuts costs, and reduces negative effects on user experience.

Once the responses are in, the hard part of the survey process starts.

How to analyze and interpret user survey responses

After your survey, you’ll have a massive spreadsheet of text responses. Now, you must go through these answers and isolate useful insights.

With more than a thousand individual answers to read (at least 200 survey takers answering 5-6 questions each), can you imagine the fun?

The fastest way to filter responses for the most important insights is to isolate keywords that represent the most frequently mentioned concerns or concepts.

This allows you to create categories of issues and sort responses accordingly. You may have categories like shipping cost, trust, price, ease of use, and so on. By sorting the responses into these categories, you can rate the severity of each issue.
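Here is a minimal TypeScript sketch of that keyword-to-category sort; the categories and keyword lists are examples to adapt to your own responses, not a fixed taxonomy.

```typescript
// Map each issue category to keywords that signal it (examples only).
const CATEGORIES: Record<string, string[]> = {
  "shipping cost": ["shipping", "delivery fee", "postage"],
  trust: ["scam", "secure", "trust", "reviews"],
  price: ["expensive", "price", "cheaper"],
  "ease of use": ["confusing", "couldn't find", "navigation"],
};

// Count how many responses mention each category at least once.
function categorize(responses: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const response of responses) {
    const text = response.toLowerCase();
    for (const [category, keywords] of Object.entries(CATEGORIES)) {
      if (keywords.some((kw) => text.includes(kw))) {
        counts.set(category, (counts.get(category) ?? 0) + 1);
      }
    }
  }
  return counts;
}

// Severity as the share of all responses that mention each category.
const responses = [
  "Shipping was way too expensive for me",
  "I couldn't find the size chart, navigation is confusing",
];
for (const [category, n] of categorize(responses)) {
  console.log(`${category}: ${((n / responses.length) * 100).toFixed(0)}%`);
}
```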

Once you have this, try to make brief summaries of the issues, using the exact words of the respondents wherever possible. That way, you’ll be ready to prepare hypotheses for experiments.

Take however much time you need to analyze the answers thoroughly and comprehensively. Try having a team divide and conquer the survey by question, analyze the results individually, and then cross-check their findings.

Methods like word clouds or cluster analysis offer an efficient, effective way of structuring survey results so you can quickly spot the most serious issues.

[image]

An example of a Gaussian distribution used in cluster analysis of survey results. The red word is the issue most frequently mentioned by respondents.

[image]

Word clouds are another method of discovering patterns and commonalities across large amounts of data. The largest words represent the most severe issues mentioned in most responses.
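Under the hood, a word cloud is just a word-frequency count over the free-text answers. A small TypeScript sketch, with an abbreviated stop-word list for illustration:

```typescript
// Abbreviated stop-word list; a real one would be much longer.
const STOP_WORDS = new Set(["the", "was", "too", "and", "for", "with"]);

function wordFrequencies(responses: string[]): Map<string, number> {
  const freq = new Map<string, number>();
  for (const response of responses) {
    // Split on anything that isn't a letter; drop stop words and short tokens.
    for (const word of response.toLowerCase().split(/[^a-z]+/)) {
      if (word.length < 3 || STOP_WORDS.has(word)) continue;
      freq.set(word, (freq.get(word) ?? 0) + 1);
    }
  }
  return freq;
}

// The most frequent words become the largest words in the cloud.
const top = [...wordFrequencies(["Shipping cost was too high", "High shipping fees"])]
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10);
console.log(top); // [["shipping", 2], ["high", 2], ["cost", 1], ["fees", 1]]
```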

Avoid these common user survey mistakes

  • If you ask abstract questions that do not tell you anything about your visitors, you’ll invalidate your results. Answers are only as useful as they are specific — and they have to point out real issues on the website.
  • Yes/No questions, or even “on a scale of 1-10” questions, can overlook problems that the survey creator didn’t think of but that customers notice. Always give survey respondents a chance to voice their opinions in their own words.
  • Mentioning a possible problem in a survey question can “lead” the customer and bias their answer. Avoid leading questions.

Another common mistake is targeting the wrong audience.

For example, if you conduct an exit survey, leave out people who have completed their purchase. Ideally, target the people who have visited a product page and checked the price, or added a product to the cart but did not complete a purchase.

If you are conducting a customer survey, target only the people who have actually completed the purchase. 

These seem obvious, but they are easy mistakes to make.

Survey responses should be analyzed comprehensively — because if you miss a potential issue, you are wasting the effort that went into making and running the survey.

And we want your surveys to go as smoothly as possible!

Knowing the exact count of respondents who mention each issue is also vital. Not collecting enough answers is an easily avoidable (yet all too common) mistake.

Make sure you have enough responses for statistical significance. If the margin of error is too large, you can’t trust the results. For small samples, even a 10% margin of error may be too much.
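As a rough check, the 95% margin of error for an observed proportion p from n responses is approximately 1.96 × √(p(1−p)/n). A quick sketch:

```typescript
// Approximate 95% margin of error for a proportion p observed in n responses.
function marginOfError(p: number, n: number): number {
  return 1.96 * Math.sqrt((p * (1 - p)) / n);
}

// Worst case (p = 0.5): 200 responses give roughly +/-7 points,
// while 50 responses give roughly +/-14 points.
console.log((marginOfError(0.5, 200) * 100).toFixed(1)); // "6.9"
console.log((marginOfError(0.5, 50) * 100).toFixed(1)); // "13.9"
```

That is roughly why the 200-response rule of thumb mentioned earlier tends to be enough for spotting the big issues.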

And finally, keep your surveys short and sweet. Our rule of thumb is: 

  • If the onsite survey has more than three questions, it’s too long. 
  • Email surveys should have no more than 10 questions.

Surveys longer than 10 questions often fall prey to “the error of central tendency”: as respondents become fatigued, they give nonsensical answers or answers too short to be meaningful.

Pro tip: If you need to have more than 6-10 questions, make separate, shorter surveys, rather than creating one long one.

Knowing your customers benefits you in the long run

User surveys are a great method to get to know your visitors and customers. You can learn about their motivations, thoughts, and perceptions, and access the “why?” behind the “what”.

The “why” can help you improve your website and marketing copy to boost conversions, and give your users a more personalized, enjoyable experience. In the long term, you’ll establish a genuine bond with your customers.

We hope this short guide made user surveys seem more approachable and useful. Conducting them properly will provide you with a treasure trove of insights — and we mean that literally!

These insights will make you money and make your ecommerce business more profitable than ever.

Download 42 Personalization Case Studies NOW
