Stop Guessing & Start Growing: 6 Proven, In-Depth Conversion Research Methods to Increase Your Ecommerce Sales
Cha-ching. There’s the sweet (virtual) sound of another ecommerce sale rolling in.
You might already be making millions from your ecommerce store. But is your site converting as highly as it should be?
Maybe you’ve tried “fixing” your conversion rate without much success.
- You’ve run split tests that generated inconclusive results…
- Changed page layout and button color…
- Experimented wildly with all sorts of different tactics, hoping something would produce drastic results…
Our conversion optimization agency has worked with ecommerce stores with modest sales all the way to Fortune 500 companies making over $100M, and we’ve found one thing holds true:
The most common cause of a mediocre conversion rate is a lack of proper research.
Up to 75% of companies rely on nebulous “best practices” and cringeworthy “100 CRO Tests to Run”-style posts to guide their CRO efforts.
Even ecommerce teams who analyze their data daily often struggle to convert data into actionable optimization insights.
And according to Behave.org’s State of Online Testing Survey, over 50% of companies try to increase sales and conversions by simply getting together, brainstorming ideas, and implementing the discussed ideas to see if they have an impact.
AKA, more than half of online businesses approach optimization by saying “Who Knows? Let’s Try It And See What Happens!”
In most cases, the impact of this approach is minimal, if detectable at all.
And at the end of one cycle, the whole routine starts again. Sound familiar?
Starting today, you’re going to stop rolling the dice
In this post, you’ll learn:
- Why your company’s efforts to increase conversions aren’t getting results
- How to figure out what’s wrong with your website
- And the specific methods that the highest-converting ecommerce stores use to fix those problems.
(I recommend bookmarking this page for future reference. Chances are you’ll want to come back here a few times.)
Table of Contents - Find it Fast
- Starting today, you’re going to stop rolling the dice
- Here’s why all your efforts to increase your ecommerce conversion rate have been fruitless
- Ready to do your own research and set up meaningful tests?
- Zoom out to see the big-picture conversion research process
- Use analytics to pinpoint conversion problems and opportunities
- How to read your customers’ minds (no black magic involved)
- Method 1: Session Recordings
- Method 2: User Testing
- Method 3: Create Website Polls
- Method 4: Send Customer Surveys
- Method 5: Interview Your Customers
- Method 6: Leverage Chat Logs
- Distill your data: How to analyze polls, customer surveys, and interviews
- What are you going to do next, ecommerce optimizer?
Here’s why all your efforts to increase your ecommerce conversion rate have been fruitless
There are literally hundreds of possible reasons why your efforts to increase conversions aren’t working.
Here are a few of the most common issues:
1. You don’t know what the real problem is
Imagine you go to the doctor because your knee is killing you. But instead of addressing your knee, they give you advice for treating headaches. Ridiculous, right? And totally irrelevant.
This is essentially how most ecommerce companies are addressing their conversion issues. Instead of segmenting their data and doing proper research to diagnose what’s hurting conversions, they start randomly changing things all over their website, hoping something will lead to an improvement.
For example: You’re not selling enough of Product X, so you decide to redesign its individual product page, thinking that’s the problem — when in fact, the real problem is hiding inside your checkout flow.
You wouldn’t try to heal a disease without knowing what the disease is and where in your body it’s located.
So why are you trying to fix your conversion problems without diagnosing them first?
2. You’re letting assumptions and “gut feelings” drive
In business, as in life, it’s often best to follow your gut.
In conversion optimization, however, your gut will waste your time, lead you astray, and can even exacerbate your conversion problem.
Have you ever attended a team meeting where you discussed what website changes to make?
Usually, one of two things happens in this type of meeting:
- The entire discussion ends up being driven by personal opinions… and not data.
- The discussion is driven by the highest-paid person’s opinion (HiPPO)… not data.
Personal opinions, gut feelings, and assumptions are the worst things you can possibly base your conversion efforts on.
But even if you’re doing it, you’re not alone. A study of nearly 800 marketers at Fortune 1000 companies found that the vast majority of marketers still rely too much on intuition.
As for the few who do rely on data? Well, for the most part, they’re not using it correctly.
Be critical. Question everything. Never assume.
If someone says you should move your website’s navigation menu from the top to the right side of the page — ask them why.
If someone says you should test a blue button instead of an orange button — ask them why.
If someone says, “Hey, I saw our competitors doing X. We should do X, too” — ask them why.
It’s one thing to make a design decision.
It’s something entirely different to fix a usability problem identified by analytics, user testing, session recordings, eye tracking, or all of the above.
I’m begging you: Don’t waste years going in the wrong direction
Let me share a quick horror story.
A company we worked with was proud to say they were very data-driven.
They were sending out surveys, had their analytics set up perfectly, and were using Optimizely to A/B test.
The truth: it was all a smoke screen.
Even when this company was sending out surveys, they weren’t analyzing the responses to the extent they should have. They weren’t reading individual answers.
It was great that their analytics were well implemented, but they had no one on the team who could perform in-depth analyses.
As for A/B tests, we discovered their past 2 years of testing were a complete waste.
Most of their tests looked like this… stopped too early with an inadequate sample size for what was being tested.
Instead of testing data-driven solutions to problems identified by research, they were launching tests based on their team’s gut feelings about what should be tested: button colors, random headline changes… nothing meaningful.
The result? They never achieved significant uplifts. Which means they made decisions based on false positives and false negatives, thinking these were the right, “data-driven” decisions.
Eventually, they decided conversion optimization just didn’t work for them.
We estimate that the two years they spent floundering resulted in millions of dollars lost.
So let me ask you: Are you truly making data-driven decisions? Or do you just believe you are? Instead of letting uncontextualized “data” steer your ship, aim to be data-informed.
3. You’re copying your competitors or blindly following blog advice
When you don’t know why your conversions aren’t higher, when your efforts aren’t moving the needle, when you’re not sure what to do, you might try taking shortcuts like…
- Looking at what your competitors are doing
- Copying tactics from successful conversion optimization case studies
- Implementing practices from “101 Things to Test to Boost Conversions Overnight” blog posts
It’s easy to assume that what your competitors are doing is working, but how do you know they’re not in the same pickle as you?
For example: What if the checkout flow your competitors are currently testing turns out to be the worst thing they’ve ever tried?
You don’t know what you don’t know. It’s good to be aware of what your competitors are doing, but copying them — when they might not have a clue what they’re doing — is a step in the wrong direction.
Now, about those impressive case studies you find all over the web.
You know, the ones claiming that changing the font from Cambria to Arial increased their conversions by 54%, or explaining how changing the size of their call to action button doubled revenue?
Sorry to say, but most of these are bullshit.
I’m not saying the results were made up. I AM saying that we have no idea how the majority of these case studies were tested.
Coming up with numbers isn’t hard. Arriving at numbers that can be trusted is a whole different story.
And in the case of “101 Things to Test”-style blog posts or how-tos, what worked for them might not work for you.
In case I haven’t made this clear yet: every website is different.
Different audiences, traffic sources, visitors’ purchase intent, geography…
For example: You might find a case study explaining how Amazon increased conversions significantly by changing their buttons from blue to yellow. And maybe they did.
But chances are you’ll never be able to replicate tests like this. Most websites don’t have enough traffic to test such small, specific changes.
Evan Miller’s sample size calculator is a good place to start to understand how much traffic your test variations will need.
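To get a feel for why small, Amazon-style tweaks need enormous traffic, here is a rough sketch of the standard two-proportion sample-size calculation, the same kind of math a calculator like Evan Miller's performs under the hood. All the numbers are hypothetical, and the hard-coded z-values assume the common defaults of 95% confidence and 80% power.

```python
from math import sqrt

def sample_size_per_variant(p1, p2):
    """Approximate visitors needed per variation to detect a change
    from baseline conversion rate p1 to p2 (normal approximation).
    z-values are hard-coded for alpha = 0.05 (two-sided), power = 0.80."""
    z_alpha = 1.959964  # two-sided, alpha = 0.05
    z_beta = 0.841621   # power = 0.80
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / (p1 - p2) ** 2) + 1

# Detecting a lift from 3.0% to 3.3% (a 10% relative improvement)
# takes roughly 53,000 visitors per variation:
print(sample_size_per_variant(0.03, 0.033))
```

Notice how a bigger expected lift shrinks the requirement dramatically, which is why testing tiny cosmetic changes is a luxury only very high-traffic sites can afford.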
Instead of assuming someone else’s tests or changes will work for you, find your own opportunities for conversions. Base your tests on your own research. That’s how you can move the needle.
Ready to do your own research and set up meaningful tests?
Now that you know that…
- Gut feelings kill conversions
- Case studies are mostly unreliable
- Copying competitors or “magic” blog posts is useless
- And doing your own research is the only effective way to increase conversions…
You might be wondering exactly what your research should involve, and how to execute it so your work can finally be fruitful.
I’ll get into that shortly. Stick with me.
But first, I want to warn you that there’s no magic trick to this process. This is for serious optimizers who are ready to do the research and hard work. There’s no shortcut.
If you’re game, keep reading and I’ll walk you through the process you need.
Or, you can get us to do this process for your ecommerce store by requesting your optimization proposal here.
Zoom out to see the big-picture conversion research process
The conversion research process ties together data from all the research methods I explain below.
You can use this process to create data-driven hypotheses about what to change or test on your website for maximum results.
If you’re just getting started with this type of research, you might find that trying to launch all of these methods at once can be overwhelming — especially if you don’t already have a clear process for analyzing the insights you’ll collect.
The process includes both quantitative and qualitative research methods.
Quantitative data can help find the biggest causes of low conversions, the holes in funnels, and the technical problems that can hurt your user experience. It’s the WHAT.
Qualitative data (think surveys, interviews, and review mining) is the most neglected piece of the puzzle. It’s the WHY.
Did you know that only 39% of companies that A/B test use surveys? Unfortunately, they’re missing one of the most important parts of the process.
You can do quantitative research to find the numbers that highlight problem areas. But your analytics won’t tell you why people aren’t converting.
Qualitative research helps you discover:
- Who your best audience is
- What your visitors want and need
- Their biggest pain points
- Their objections, concerns, doubts and hesitations (AKA why they’re not buying)
- And what makes them “tick”
Let’s dig into what your customers are thinking and feeling.
Use analytics to pinpoint conversion problems and opportunities
Like 95% of companies that test, you’ve probably already installed a digital analytics tool like Google Analytics.
The obvious thing is that you have to use it. The not-so-obvious thing is that many companies are only scratching the surface of what these tools can do.
Self-diagnose to see where you stand:
- Are you analyzing different audience segments to look at how your numbers (such as revenue per visitor and conversion rates) vary by segment?
- Have you set up goals and funnels in your analytics tool? If so, do you regularly analyze them to see where visitors are dropping off?
- If you’re A/B testing, is your testing data integrated with your main analytics tool?
- Are you using custom reports to combine data sets to reveal the specific insights you’re looking for?
- If you’re a Google Analytics user, do you fully understand the differences between Users, Sessions, and Hits?
- In addition to your main KPIs, are you tracking and analyzing secondary metrics that are part of your visitors’ path to purchase (e.g. Add To Cart, Site Search)?
If you confidently answered yes to all of these questions, you’re already more advanced than many companies.
And here’s the real reason I asked: A lot of marketers report numbers instead of reporting insights.
Getting numbers is easy. Understanding the meaning of the numbers is the real task. For example, simply reporting a conversion rate is useless, because averages and percentages lie.
A spike will increase your averages and hide the truth behind your numbers
Instead, look at how your conversion rate is affected by different factors and segments, such as traffic sources, mobile vs desktop visitors, people who performed site searches, new vs returning…
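Here is a minimal sketch, with made-up numbers, of how a blended conversion rate can mask a struggling segment:

```python
# Hypothetical traffic data: the blended conversion rate looks healthy,
# but segmenting by device reveals where the real problem is.
sessions = {
    "desktop": {"visits": 40_000, "orders": 1_600},  # converts at 4.0%
    "mobile":  {"visits": 60_000, "orders":   600},  # converts at 1.0%
}

total_visits = sum(s["visits"] for s in sessions.values())
total_orders = sum(s["orders"] for s in sessions.values())
print(f"blended: {total_orders / total_visits:.1%}")  # 2.2%

for segment, s in sessions.items():
    print(f"{segment}: {s['orders'] / s['visits']:.1%}")
```

Reporting only the blended 2.2% hides the fact that mobile, the majority of traffic here, converts at a quarter of desktop's rate.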
I won’t dive too deep into how to use your analytics in this post, since:
- How you should segment and analyze mostly depends on your business, and what problems and insights you’re looking for
- Multiple people have written books on this topic. In fact, there are entire courses about it. Jeffalytics courses and CXL Institute GA classes are great places to train yourself and your team to solve business problems with analytics.
The point I’m trying to make here is that simply being able to use Google Analytics and pull your numbers is not enough to increase conversions.
To find where your website needs attention, you have to understand why the numbers say what they say.
When you have mastered numbers, you will in fact no longer be reading numbers, any more than you read words when reading books. You will be reading meanings.
– W.E.B. Du Bois
When you’re able to use your analytics data to identify problem areas and opportunities, THEN you can use qualitative research to dive deeper into what’s causing those problems. This is why we created the Testing Trifecta method, which requires both types of data to be combined to achieve growth and A/B testing success.
How to read your customers’ minds (no black magic involved)
Quantitative research tells you the “what”. Qualitative tells you the “why”.
Keep reading to discover some of the most useful approaches to conversion optimization research.
Method 1: Session Recordings
What’s a session recording?
Session recordings enable you to record your website visitors’ screens as they browse your site. Visitors are recorded anonymously, and there’s no audio.
By adding a simple line of code to the header of your website, you can record visitor behavior, including clicks, mouse movements, and typed text.
Then, your optimization team can review the recordings to detect conversion friction points and potential problems with your site’s user experience.
Why use session recording?
Simply asking your users questions is not enough. Why? People tend to say one thing and then behave in exactly the opposite way.
We’re also pretty terrible at predicting how we’ll feel and react to a given situation.
That’s why watching session recordings can be so helpful. They help you uncover the “unknown unknowns,” which can lead to “aha!” moments.
Pardon me for this analogy, but I like to compare traditional user testing (where the users know they’re being watched) to watching animals in a zoo. You’re observing their behaviors within a limited, constructed space. The animals can see you, and they might adjust their behavior accordingly.
Session recordings, on the other hand, are like watching animals in their natural habitats. When they’re unaware of our presence and unaffected by our guidance, we can observe how they really behave.
Real-Life Scenario 1
We were recently watching visitors go through the checkout flow of a client’s website.
At some steps of the checkout, abandonment rates were high, so we knew we had to dig deeper.
After watching a few hundred session recordings, we discovered that in about 15% of checkout sessions, the “Proceed to Next Step” button that led users to the payment step would disappear.
This was a huge find — something that we wouldn’t have been able to figure out using only analytics. The arbitrary disappearance of this button meant that users were getting stuck in checkout, unable to complete their purchase!
Our client had no idea, and it was likely costing them hundreds or thousands of dollars every month.
Real-Life Scenario 2
In another situation, we detected friction at the payment step of a checkout flow. We used session recordings to see what was going on (censoring all sensitive info).
We noticed that many users were entering their credit card number in the “Name on Card” field, which was the first field in the payment step. This resulted in the user having to erase their number and retype it in the proper field once they noticed.
It was friction: a clear usability problem.
Knowing the problem, we were able to completely eliminate it by simply swapping the order of the payment fields.
How to start recording user sessions
With the tools available today, session recordings are virtually painless to set up.
Most tools will give you a snippet of code to insert in the <head> section of your website, and that’s it! The tool will start recording after the code is installed.
Know that some tools will automatically censor fields containing sensitive information (such as credit card fields), but a few others require you to add a line of code to a field to censor that information.
How to analyze your recordings
Analysis is crucial. Don’t just sit on those valuable insights.
Here are a few tips for setting up session recordings:
1. Use your analytics
Start by using your analytics to detect problem areas on your website that you’d like to explore. You can watch generic session recordings (which can help uncover some unknown unknowns), but for efficiency’s sake, we recommend focusing on the areas where you already know there’s a major problem.
That being said, problem area or not, we tend to always record the checkout flow. It’s the most critical point in your user’s path and can provide valuable insights.
2. Create segments to go deeper
Once you find your problem areas, we recommend checking whether the problem affects all of your key segments. Is it an issue unique to a specific device or browser type? To new visitors?

Knowing your target segments is key, since watching session recordings is very time-consuming. Having this info on hand allows you to adjust the filters in your session recording tool so you only see recordings related to the problem.
3. Sit down, start watching
If you’re using session recordings to dig into an issue that applies to both mobile and desktop, watch them in separate groups. For example, begin by watching 100 desktop sessions, then switch to mobile sessions. Batching sessions makes it easier to detect common behavioral patterns.
4. Document everything
Create a document to write down all of your notes, questions, and insights as you’re watching. Divide your document by devices and website sections to keep your insights organized in their specific categories.
5. Keep your analysis organized
Most session recording tools allow you to add tags, notes, and stars to recorded sessions. We suggest keeping all your notes in a standalone document, since hunting for notes scattered across individual recordings can be a hassle. Still, use tags as appropriate and star important recordings.
(Why? If your team asks you to show them the recording of a specific issue you detected, I doubt you’ll want to rewatch everything to find it. Stars and tags help quite a bit.)
6. And finally, watch as many session recordings as possible
For example, if we analyze a checkout flow, we’ll watch a minimum of 100 recordings per device type. Most tools allow you to skip moments where the visitors pause, and to increase the speed of the video. Consider asking a team member to analyze a batch with you to speed up the process.
Yes, analyzing session recordings takes time (as do most qualitative research methods), but it pays off.
Imagine if we’d skipped session recordings for the client with the disappearing checkout button problem. It might have flown under the radar for much longer, costing them even more.
No time to do all of this yourself? Need help getting it done? We can help you, as we’ve helped companies like L’Oréal, Frank + Oak and Kiehl’s. Request your free proposal here.
Session recording tools we love
- SessionCam – For larger companies ($$$)
- ClickTale – For larger companies ($$$)
- Inspectlet – For medium businesses ($$)
- Hotjar – For small and medium businesses ($)
Method 2: User Testing
What is user testing?
Though user testing is popular with software companies, I’ve noticed ecommerce companies still have some catching up to do with this method.
User testing allows you to evaluate the usability of your website by observing how people complete assigned, specific tasks on the site.
You might ask users to:
- Find an item they’d buy
- Search for and compare specific items
- Go through the checkout flow
Enlisting users (who match your desired demographics, of course) to execute tasks helps you understand how your target market uses and navigates your website and its features.
Is your site user-friendly, or unintuitive? Are visitors unsure where to click? Are they having problems checking out?
You might be thinking that user testing sounds similar to session recording, and they’re not too far apart. The biggest difference is that people participating in a user test are aware of it, while session recordings happen in the background as visitors are browsing.
Not only are people participating in a user test aware of it, they also have to execute specific tasks and describe their actions and thoughts aloud. In some cases, users are recorded on video so the optimizer can later analyze their facial expressions as they complete the given tasks.
Why implement user testing?
You are not your user.
While some features of your website and its usability may seem obvious to you and your team, they might not be obvious or simple to first-time visitors.
As you watch other people navigate your website, you might start noticing that some people aren’t using certain elements as intended. And some might have trouble executing an action that seems intuitive to you.
Whether that action is filling a form, finding the product or information they’re looking for, or figuring out how to complete checkout, user testing is incredibly useful to quickly reveal flaws in your user experience.
Knowing those flaws gives you the power to fix them, improve the user experience, and hopefully increase your conversions.
How to conduct a user test
1. Pick a method: in person, or online
User testing can be done either in person, or online. Let’s go over some pros and cons of each.
In-person user testing

Pros:
- Ability to interact with the participant and ask specific questions as they’re doing the test
- Eye-tracking devices and other psychographic equipment can be used
- You’re in control of the session time limit

Cons:
- Expensive (recruiting and paying participants, renting equipment, facilities, etc). Average per-user cost is $171
- Takes more time to organize (sourcing and taking care of the participants, setting up the lab)
- In most cases, restricted to local participants

Online user testing

Pros:
- Quick and easy to set up (if you’re using UserTesting.com, you can get your first test complete within 15-20 minutes)
- No need to personally recruit participants (unless you decide to do 1-on-1 user testing through Skype)
- No facilities or equipment to set up; tests can be launched from anywhere, anytime

Cons:
- Can’t use eye tracking or psychographic equipment
- Limited to participants signed up to the user testing platform (if you’re using a tool like UserTesting or TryMyUI)
- Testing sessions are limited to 15-20 minutes
I personally prefer UserTesting.com. It’s painless, quick, and gives me the insights I need.
In my opinion, the benefits of conducting in-person user tests over online user tests (for the purpose of conversion optimization) are marginal.
2. Decide who your testers should be
Generally, you should recruit testers who are part of your target audience. If you’re using online tools, you’ll have the chance to pick the tester’s attributes and required qualifications while creating your user test.
Most of the time, these tools also let you…
- Ask pre-qualifying questions
- Limit by age, gender, income, or geography
- Specify the devices and browsers you’d like your testers to use
Five to 10 testers is enough in most cases. Be aware that if you’re running user tests for both mobile and desktop, you should run 5 to 10 tests for each device.
Feel free to use more testers if you want, but know that more than 15 testers is unnecessary and can diminish your ROI.
3. Define your tasks
List your website’s micro-conversions. These are small actions, such as creating an account or adding an item to the cart, that lead to the macro-conversion (a purchase).
Break it down like this:
The purchase can’t happen if User Jenny doesn’t go through all of the checkout steps…
and Jenny can’t go through checkout if she didn’t add an item to her cart…
and she can’t add an item to her cart if she can’t find what she’s looking for…
Map out every important action a visitor has to take to make a purchase. This will help you define the tasks you want your testers to do.
ConversionXL recommends defining 3 types of tasks:
- A specific task (e.g. “Find a diamond-encrusted silver bracelet in size 6 1/2”)
- A broad task (e.g. “Find a couch you like”)
- Funnel completion (e.g. “Buy the couch”)
4. Create task scenarios
Tasks are blunt, one-sentence action items for testers.
Scenarios give tasks context to engage your testers.
For example, if your task is “Find a diamond-encrusted silver bracelet,” the scenario could be “It’s your wife’s birthday in a week, and she’s been dreaming of a pretty new bracelet.”
UserBrain offers a great tip for getting the most value out of your users’ responses:
Avoid asking people what they think by writing something like “What is your first impression of …” or “What do you think about …”.
Testers will only comment on things like color schemes, font choices, layouts, and other visual design elements.
And while this kind of information might be interesting to you, it’s not something you need to hear from usability test participants.
5. Launch tests & analyze
Once your tests are complete, make sure to watch each recording attentively and take notes in a separate document.
If you’re using a service like UserTesting.com and you choose to ask your users post-test questions, be aware that there may be differences between their answers and behaviors.
For example, a tester may obviously struggle while going through checkout — but in the post-test questions, he’ll state that he encountered no problems at all.
In other words, pay more attention to users’ behavior than to what they say.
User testing tools we love
- UserTesting.com – Our favorite for online user tests
- Ethnio – To recruit testers from website visitors
- TryMyUI – Alternative to UserTesting that allows you to use your own testers
Method 3: Create Website Polls
What’s a website poll?
Website polls most frequently appear as little panels at the bottom left or right of a web page. They’re essentially mini one- or two-question surveys.
Optimizers can ask website visitors multiple-choice, single-answer, or open-ended questions during specific moments of their visit.
Polls are non-intrusive, and — if they’re set up correctly — can be very effective at getting quick and valuable answers.
It takes very little time for visitors to answer questions, since polls can appear at the exact moment on the page when a question needs to be asked.
Plus, they’re versatile and easy to launch. (Can you tell we love polls?)
Why create a website poll?
Polls gather great insights (quickly and usually painlessly) and you can ask your visitors questions at the exact moment and location you need their insight.
For example, if you’re trying to determine why Product A has abnormally low add-to-cart rates, you can display a poll on the Product A page asking questions like “Were you able to find the information you were looking for?” or “What’s holding you back from purchasing this item?”
Unlike traditional surveys, polls don’t require you to craft an email, send that email, and provide incentives to your users just to get answers. Not to mention that you can only send traditional surveys to people who’ve already given you their email addresses.
So how do you ask questions of people who haven’t converted into leads — who are still simply visitors? You launch a poll.
One of our clients ran a PPC campaign that led visitors to a long, very detailed landing page.
There were no distractions on the landing page, since it was a sales funnel for one product. Most visitors read the whole page and scrolled all the way down to the “Buy” button at the bottom of the page.
Since visitors were now aware of the price, you’d think the checkout process would convert well.
But when we looked at the client’s analytics, we saw that over 50% of people dropped off when they reached the first step of checkout (where they had to decide their order quantity).
Using analytics, we knew the “What”. Now we needed to know the “Why”.
We launched a poll on this step of the checkout asking visitors, “Is there anything holding you back from making a purchase?”
If they answered “Yes,” we asked, “What’s holding you back from making a purchase?”
Over a few hours, we accumulated more than 500 responses. After carefully reading and analyzing every single response, we found quite a few issues.
The main issue was that there was a lack of clarity in the copy on that page. Visitors were confused about whether their order was a one-time purchase or a monthly subscription (when in fact, both options were available).
We formulated a hypothesis tackling this problem, launched A/B test variations of the page, and after three weeks, the results were in. One of our test variations led to a 23% increase in orders.
So — using a quick poll that we manually analyzed and codified, we got insights that helped us determine a page’s top issues, launch a data-driven A/B test, and generate a significant increase in revenue.
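If you want to sanity-check an uplift like this yourself, a two-proportion z-test is one common approach. The figures below are invented purely for illustration, not the client's actual numbers:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    a control (a) and a variation (b). Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return z, p_value

# Hypothetical: 400/10,000 orders on control vs 492/10,000 on the
# variation, roughly a 23% relative lift.
z, p = two_proportion_z(400, 10_000, 492, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value well below 0.05 at an adequate, pre-planned sample size is what separates a real uplift from the false positives described earlier; stopping a test the moment p dips below 0.05 reintroduces exactly that problem.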
My friend Talia Wolf recently explained:
While most experts say you should be data-driven, I believe you should be data-informed, yet customer-driven. Your number-one goal should be: make it about the customer. (via KlientBoost)
Fortunately, that’s exactly what polls and surveys allow you to do.
How to start polling your visitors
Installing a website polling tool is as simple as installing a snippet of code in your website’s header.
However, choosing the right questions and analyzing the answers — although it’s nowhere near as complex as creating a full-length survey — still requires time, effort, and a few tricks.
(See the pattern here? There’s no shortcut to doing proper research).
1. Dive into your analytics
As with session recordings, you should dive into your analytics and find a problem area to tackle before installing polls. Polling for the sake of polling is useless. And remember that poll answers require manual analysis and coding… so more polls = more time needed for analysis.
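Once you have read and coded the responses, tallying themes is the easy part. Here is a toy sketch with invented categories and keyword rules; real coding means reading every answer yourself, and keyword matching only helps you count what you have already categorized:

```python
from collections import Counter

# Hypothetical coding scheme: map each open-ended poll answer to a
# theme via simple keyword matching. Categories and keywords are
# made up for illustration.
themes = {
    "pricing_unclear": ("price", "subscription", "monthly"),
    "shipping":        ("shipping", "delivery"),
    "trust":           ("scam", "reviews", "secure"),
}

responses = [
    "Is this a monthly subscription or a one-time price?",
    "Shipping cost wasn't shown until checkout",
    "Couldn't find any reviews",
    "Not sure about the subscription terms",
]

counts = Counter()
for answer in responses:
    text = answer.lower()
    for theme, keywords in themes.items():
        if any(k in text for k in keywords):
            counts[theme] += 1

# Themes ranked by frequency tell you which objection to tackle first.
print(counts.most_common())
```

In the checkout poll described above, this kind of tally is how "pricing/subscription confusion" emerged as the dominant theme worth testing against.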
Once you’ve found a page or problem area you want to tackle, make sure you have a goal in mind. What are you trying to learn? What are you trying to solve? With a clear objective in mind, it’s time to create the poll.
2. Choose the right questions
Polling tools make creating polls easy. Picking the right question(s) is where complexity rears its familiar head.
The questions you ask in your poll are directly related to the quality and usefulness of the answers you receive.
You should have a goal in mind, and a specific problem to investigate. Use these to create a short, concise question for your poll.
Harvard’s program on survey research notes that the ideal question accomplishes three goals:
- It measures the underlying concept it is intended to touch on
- It doesn’t measure other concepts
- It means the same thing to all respondents
It’s also important to be as neutral as possible to avoid survey bias (where the respondent’s answer is skewed due to the way the question is formulated).
Pro tip: Avoid starting your questions with the word “Why,” as it can sound accusatory. Subconsciously, the respondent will feel the need to defend themselves. Start with “What,” “Was,” or “How” instead.
Your question will vary based on what you’re looking to learn, so there’s no one-size-fits-all question that works all the time.
That being said, there are still some questions we end up using more often than others, such as:
- What’s holding you back from making a purchase today?
- What’s the purpose of your visit today?
- How would you describe (Company Name) to a friend?
- What’s the one thing that nearly stopped you from buying from us?
- Were you able to find the information you were looking for?
- What’s your biggest challenge or frustration in finding the right (Product)?
3. Engage website visitors with the Yes/No trick
Since your main question is open-ended, your poll will display a text input field for users to type their answers. Filling it out represents a commitment for visitors, especially when there’s no incentive offered for responding.
This doesn’t mean you should avoid open-ended questions in your polls. The trick is to preface them with a Yes/No question to initiate engagement. For example, you could begin by asking “Is there anything holding you back from buying?” and display Yes or No answer options.
If the user clicks No, the poll disappears. But if they click Yes, the question could change to “What’s holding you back from buying?” and offer a text input field.
Here’s why this is a great trick: The Yes or No answer doesn’t matter at all. It’s not an insight of any kind. However, it DOES reduce the perceived effort required to answer the poll.
Clicking Yes or No is simple. Typing an answer takes more time. By starting with the Yes or No, you’re enticing more visitors to take the survey. You’re engaging them, and making them more likely to take the few extra seconds to type a real answer.
4. Decide who needs to see the poll
Now that you have your question, and your question logic is set up, you’re almost ready to launch your poll. But there’s one more thing you should determine: who do you want to see the poll?
Some questions might not apply to all the different visitor segments. Determine on which devices your poll should be displayed, whether your poll should display to new or returning visitors only, to visitors arriving from a certain traffic source, or to everyone.
Also determine if your poll should show immediately when a visitor reaches the page with the poll; after X seconds; when the visitor takes a defined action; or only on exit intent.
Jen Havice of Make Mention Media says that improper timing is one of the biggest mistakes businesses make when they use polls:
“They’re either having the polls pop up too soon or on pages not relevant to the poll. For instance, if you want to find out why people are leaving without making a purchase, don’t ask them while they are still actively navigating through your site. Wait until they have gotten into the checkout process and then use an exit-intent, one-question pop-up.”
5. Analyze and extract the insights
Finally, once your poll has accumulated enough responses, you’ll have to analyze them. This is as important as creating the poll itself, but often overlooked.
Analysts are prone to multiple biases, analysis fatigue, and misinterpretation. Correct analysis is key, so we’ll dive into how to analyze polls and survey responses below (or click to jump to the Analysis section now).
Website poll tools we love
Method 4: Send Customer Surveys
What’s a customer survey?
While website polls usually target both new visitors and existing customers, customer surveys are sent via email to your existing customers.
Surveys are much more complex than polls. They ideally contain 8 to 10 questions and may offer an incentive to customers to encourage participation.
Plus, creating survey questions requires more thought, since there’s a greater risk of writing biased questions that can invalidate response data.
Questions are usually a mix of open-ended and single- and multiple-choice questions that are mostly used to segment data during analysis.
Why send customer surveys?
Send surveys to existing customers specifically to discover what made them convert. [Tweet this]
By asking a few select questions, you can find out what made them hesitate before buying, why they bought, and who they are as a customer.
This crucial info helps you define your different customer types, nail down their buying attributes, and find correlations between common survey answers and customer types.
We launched a survey for a client that sells natural skin care products. We wanted to determine why customers bought what they bought, if there was anything that made them hesitate, and who they were as customers.
The client sent the 9-question survey to customers who had bought from them in the past month. As an incentive to take the survey, they offered three $50 gift cards to be randomly awarded in three days.
Three days and over 5000 responses later, not only had we determined five main types of customers, from “Busy, well-paid moms with young kids. Focusing on making health a priority and looking to get rid of chemicals” to “Crossfit athletes where natural products are no-brainers, but having a hard time finding ones of great quality”…
… but for each of these customer types, we had also identified (among many other things):
- Words and phrases they use to describe their problems and situation
- What made them buy this company’s products
- How they make purchasing decisions, and what info is most important to them
- Their doubts and hesitations about the product they bought and their buying experience
- Which of the competitor’s products they’d bought in the past (and why they switched)
In the end, we extracted over 50 major data-driven action items from the survey responses.
This data is now being used to improve the messaging of the client’s social media marketing, increase the effectiveness of their PPC campaigns, and create landing pages with higher relevance for each audience.
Ultimately, this data was used as the backbone of a website redesign, which nearly instantly led to a 22% sales increase.
How is this possible, you ask? Simple: This data allowed us to create highly targeted messages such as product descriptions and visuals that were relevant to the client’s audience. We were able to address objections and doubts before they even became problems.
How to structure your customer surveys
Creating effective, unbiased surveys that result in the types of actionable insights we got in the case study described above isn’t as easy as it sounds.
Survey design is a science. There are many things to watch out for, and one single wrong word in a question can skew all the answers.
We won’t cover every aspect of survey design in this post, but here are some of the main action items:
1. Use the first question(s) to kickstart engagement
Begin the survey with one or two simple questions (either single or multiple-choice) to reduce the perceived effort of taking the survey, much like you’d do with a website poll.
For example, if you’re selling items for both genders, you could ask “Are you a man or a woman?” This can also be useful for analysis, as you’ll be able to see how the answers to your other questions vary for each gender.
2. Structure following questions carefully
When you’re creating additional questions, don’t forget your business goals. It’s not uncommon to see surveys from large companies full of useless questions that were probably added last-minute by some executives.
You want to limit your questions to 10 to encourage completion, so it’s imperative to keep them as specific as possible and ensure the insights you get are useful and actionable.
A quick trick to see if the questions you come up with are even worth considering: ask yourself “What type of information can I glean from these responses?” and then, more importantly, “What will this information enable me to do?”
Are you asking something that’s nice to know, or something actionable that will inform your strategy and deepen your customer knowledge?
Paul Dunstone of Feedback Lite recommends focusing on specific questions that are designed to produce specific outcomes. Instead of open-ended questions, such as “How can we improve our checkout page?” he suggests a question like “Are there any parts of our checkout process you find difficult to complete?”
The difference seems minor, but the latter is more specific and will produce answers that are precise and actionable.
He also recommends including mostly multiple-choice questions instead of open-ended questions to make answers easier to analyze and turn into stats.
Now, there are many different opinions on this approach, but like many other conversion optimizers, I disagree with asking mostly multiple-choice questions.
Limited questions produce limited answers, and your goal shouldn’t be to make your survey easy to analyze. The goal of your survey is to get insights that will lead to business growth. I don’t care if it takes 10 hours instead of one to analyze qualitative answers.
Here’s why I’m not a fan of multiple-choice questions:
Let’s say you’re a clothing company. You ask your customers, “What is the main thing (Company) should improve?” and give multiple choices:
- Shipping times
- Customer service
This question would largely limit the quality of the insights you can gather. Because you work at this company, you wrote down answer choices based on your assumptions about what customers say are the biggest flaws.
Even if they’re things you hear frequently, humans tend to remember the things they most agree with. It’s confirmation bias at its most basic.
But what if most of your customers have issues with the sizing of the clothes? Or had a problem during checkout, and that’s what they want (or need!) to see improved?
Only open-ended survey questions can reveal insights that you didn’t consider. [Tweet this]
If you have a good process for codifying your survey data, there’s no reason why text answers can’t be turned into statistics either.
Alex Birkett of ConversionXL mentions a few questions they use (and that we at SplitBase also love). Here are a few:
- What can you tell us about yourself? This helps you understand your audience demographics. Some people go really deep in their answers, and you might find their answers surprisingly valuable. But as Alex recommends, if demographics don’t matter, don’t bother asking this question.
- What made you buy [product name/from us]? Discover what’s already working for you, and what makes your customers tick.
- How is your life better thanks to [it/us]? Selling an outcome is way better than selling features. As I’ve mentioned often, people are looking for end benefits, and your solution is a means to that end. For example, someone buying a fitness machine is not buying it to “lift weights”. They’re buying it to look and feel great come beach season. This question is a great way to uncover the end benefit(s) your customers are looking for.
- What’s the one thing that nearly stopped you from buying from us? A simple way to uncover obvious friction points in the customer’s buying journey.
- How would you describe [company] to a friend? This is one of my favorite questions. It tells you the exact descriptors your customers use to explain your company. This is incredibly useful for your value proposition and identifying the “voice of customer”.
3. Be as neutral as possible
Let’s admit it, we’re plagued by biases. Being objective can be a challenge, but it’s critical when creating survey questions.
On the Shopify Plus blog, Ott Niggulis advises:
…avoid universal words — like “always,” “never,” “only,” and “just” — as well as imprecise words — like “often,” “usually,” “generally.” For one person, the word “often” could mean once or twice a week, while for someone else it could be once or twice a month.
You must also avoid leading questions, or those that lead people toward answering in a particular manner. SurveyMonkey gives great examples of leading questions:
“1. Bad Question: How short was Napoleon?
The word “short” immediately brings images to the mind of the respondent. If the question is rewritten to be neutral-sounding, it can eliminate the leading bias.
Good Question: How would you describe Napoleon’s height?
2. Bad Question: Should concerned parents use infant car seats?
The term “concerned parents” leads the respondent away from the topic at hand. Instead, stay focused by only including what is needed in the question.
Good Question: Do you think special car seats should be required for infant passengers?”
4. Send your survey!
Once you’ve created the survey, it’s time to send it. This sounds like the easy part, and it’s true that it’s not as complex as the actual survey creation, but keep a few things in mind to maximize responses:
- Avoid using the word “survey” in the email. People think of surveys as boring, lengthy, and irrelevant to them. Use descriptors such as “questionnaire” or “a few questions”.
- If your survey only takes a few minutes to complete, include the actual number (ex. “2 minutes”) to set a positive expectation.
- Make it about them! Offer an incentive for people to respond. Buy some Amazon gift cards, or even better, gift cards for your ecommerce store, for a few lucky respondents.
- Set a time limit. Urgency is powerful, and you want people to take the survey now, not in three weeks. We usually say recipients have 3 days to complete the survey in order to be entered in the gift card drawing.
This isn’t an ecommerce survey email, but it’s so good I couldn’t resist sharing it:
Here’s why it’s great:
- It focuses on the recipient of the email. Reading this, it sounds like I’ll get value from taking it (the outcome = a better product for users, AKA me).
- They clearly highlight that the survey won’t take 30 minutes to fill out by saying “Quick Survey,” “4 minutes to complete,” and “Take our 4 min survey now”.
- The email is personal. It’s customized with my name, easy to read, and comes from the co-founders.
- They’re offering a clear benefit for survey takers: to be part of their upcoming beta group.
And here’s an example of a terrible survey email:
It’s awful because…
- “Has asked Ipsos, a leading, global market research firm to contact you on its behalf”. Really? Why do I even care about this? Completely irrelevant and impersonal.
- “Should take 12 minutes of your time”. 12 minutes? That’s asking for a lot. I doubt anyone who values their time will even consider taking it.
- And finally… why should I take this survey? What’s in it for me? It doesn’t even mention this will be used to create a better flight experience for me in the future (or any other, stronger benefit for that matter).
Can you believe this email and survey were created by a “leading” market research firm? I’ll say it — it’s sad.
Swipe our survey email template (it works):
We frequently send surveys on behalf of clients who contract our conversion optimization services.
At this point, we’ve sent over a million of these emails and tested many different formats. Although the email should be personalized to match your company’s tone and “voice of customer,” here’s a template that we’ve found works great:
When you first shopped with [Company], we promised we would leave no stone unturned in providing [short, benefit-driven description of the company’s products].
And we want to make sure we keep that promise.
If you could take just 3 minutes and tell us a little more about your [Company] experience by answering a few quick questions…
We’ll put your name into our draw and you could win one of three $50 [Company] gift cards.
Here’s the link to the few questions we prepared for you:
[Link to Survey]
Fill it out before [Date in 3 Days], and get a chance to win!
Thank you so much,
No time to do all of this yourself? Need help getting it done? We can help you, as we’ve helped companies like L’Oréal, Frank + Oak and Kiehl’s. Request your free proposal here.
5. Analyze your responses
Phase 1 of customer surveys is creating and sending the survey. Phase 2 is analyzing responses.
Just as with polls, analyzing answers to open-ended questions will take time, but that’s where the real insights are hiding.
We use a codification methodology to turn our qualitative insights into actionable data. We’ll get into the details of how to do this below (or click here to read it now).
Customer survey tools we love
- SurveyGizmo – Get their market research plan for a built-in codification tool for analysis
- Google Forms – Simple and free
- Typeform – Beautiful surveys and great experience for survey respondents
Method 5: Interview Your Customers
What is a customer interview?
Customer interviews are essentially customer surveys that you conduct in-person or over the phone.
Why interview your customers?
In addition to all the reasons you’d want to launch a website poll or send surveys to your existing customers, customer interviews give you the flexibility of deciding which questions to ask as the interview goes on.
This approach is particularly useful after doing an online customer survey. If you want to know more about a customer’s specific survey answer, you can follow up with a call (with their permission), ask for clarification, and ask additional questions you didn’t include in the survey.
The interviews we conduct for clients usually last between 15 and 30 minutes.
How to conduct customer interviews
You can do interviews over the phone, in person, using Skype…
We prefer in-person or via Skype, so we can see the interviewee’s real-time emotional reactions.
1. Pick who you want to interview
Sure, you could pick up the phone right now and dial any contact you find in your CRM.
But because the main purpose of customer interviews is to dig deeper into a customer’s survey responses, it’s best to schedule a time to talk — especially if you’re going to chat for 15 to 30 minutes.
We’ve found the best way to go about this is to ask customers within the survey if we can follow up with them. I love this question from Ryan Levesque of the Ask Method:
Last, I may follow up with a few people personally to learn a little more about your situation… Would you be open to chatting for a few minutes, on the condition I PROMISE not to try to sell you something?
Offer Yes and No survey options. If the respondent answers “Yes,” ask for their phone number.
2. Prepare your questions
Just like you’d do for a survey, prepare your questions and try to make them as objective as possible. The same advice for creating questions for surveys applies to creating questions for customer interviews.
Shanelle Mullin of ConversionXL emphasizes the importance of focusing on retention when interviewing existing customers:
Your goal should be to ensure your current customers stay customers. Were they happy with the initial purchase? What prompted them to make additional purchases, if any?
She recommends sticking to the “only ask for what you need” rule, like you’d do in a survey:
Your visitors and customers want to help you, but you can only ask so much of them. Think about how you’ll use the data you’ll collect from each question. If you won’t use it, don’t bother asking the question at all.
3. Do the interview
Show up on time, make the call, and remember that interviews are interactive! Don’t be afraid to ask follow-up questions to dig deeper into what your interviewee is saying.
Fellow optimizer Jen Havice recommends recording and transcribing interviews:
“Record the call and then have someone transcribe it in full. It’s too difficult and distracting to take adequate notes while you’re interviewing someone. You’ll spend precious cognitive energy just trying to figure out what to write down, and before you know it, you have stopped listening.
From a copy perspective, you can get amazing voice of customer data to pull from, but you need the words verbatim.”
A personal story of phone interviews gone wrong…
A few years ago, I was doing phone interviews to validate ideas for a company I was building. These interviews were part of what led me to build a product nobody wanted, which I learned one year and thousands of dollars too late.
Here’s what happened: Instead of hearing the warning signs from my interviewees (AKA the negative stuff I didn’t want to hear), I was only paying attention to the positive things they were saying. And subconsciously, I only remembered the things that validated my hypotheses.
Without knowing it, I was convinced people were telling me they needed the product I was building. The reality was they were saying the complete opposite.
This is called confirmation bias. ScienceDaily explains it as “the tendency to search for or to interpret information in a way that confirms one’s preconceptions, leading to statistical errors.”
Confirmation bias happens subconsciously, so it’s hard to catch ourselves doing it. But here’s a trick: Focus on the opposite of what you want to believe is true. Find reasons why your hypothesis could be false.
4. Analyze the responses
Finally, don’t forget you’ll have to analyze the interview responses. I recommend using a spreadsheet to record the answers to the questions you asked.
Then, codify and analyze the answers as you would with survey and poll answers. [Click here to jump to the How to Analyze section]
Gathering customer feedback shouldn’t be a rare event
Above, I’ve explained the process for conducting customer interviews within the “formal” process of conversion research. It’s got a structure and a clear goal.
That said, you should talk to customers as often as possible — not just during your annual or biannual conversion research!
Every Thursday I get 6 or 7 names from our recent customers over the past 2 weeks, and I just call them.
Here’s the trick: If you are the founder or the CEO — don’t tell them.
If they know you are the founder or CEO, they won’t give you real feedback. They won’t want to hurt your feelings — knowing that you created the solution that they are using.
“Smile and Dial” is a way for me to really understand who my core customers are.
Customer interview tools we love
Method 6: Leverage Chat Logs
Chat logs are the customer goldmine you never considered
Does your website have a live chat tool? If it does, don’t ignore the questions customers ask in the chat.
Surveys and polls are obvious data-gathering methods — after all, that’s what they’re for.
Live chat platforms are ostensibly customer support tools… but the truth is that they’re not just valuable for customer service. They’re also excellent sources of customer data.
Why reference your chat logs?
For the same reasons you’d use polls and surveys: to discover “unknown unknowns,” to swipe the exact words and phrases your customers use, to understand how they shop, and so on…
On top of that, customers who can’t find the information they’re looking for or who are experiencing a technical problem often turn to live chat. The information a visitor shares while chatting with a support rep is info that you might not be able to capture with a poll or survey.
How to use chat logs to your best advantage
The majority of live chat tools allow you to tag and download the chat logs in PDF or CSV format.
1. Give the data a first pass
Once you download the log, you’ll want to read through the chats to get an overall impression of the data that’s in there. Aim to find patterns of similar questions that are being asked, or issues that are being brought up.
2. Categorize & organize
Just like you’d do during the codification of a survey or poll, you’ll want to create “buckets” of commonalities. Write down the common issues and questions people are asking through the chat, and add them to a spreadsheet.
For example, if I read multiple times that customers are having trouble adding a certain item to their cart, I’ll create a bucket/spreadsheet column named “Bug: Item X can’t be added to cart”.
3. Codify the answers
Now that you’ve got the most frequent topics written down, you’ll want to codify the answers to see which ones come up most frequently. Read through the logs again and add a tally mark in the appropriate column/bucket every time something fitting a bucket is mentioned.
4. Extract the insights
Once you’re done, you should have multiple columns, each describing an issue or question you identified from the chat logs. In the cells below, each column should contain a series of tally marks representing how often each item was brought up.
At this point, add up the total for each column. Knowing the number of times each bucket issue was mentioned allows you to understand the importance of each, and prioritize accordingly.
Are there major objections that keep being brought up? Of those, which come up most often?
What are the bugs customers experience most often? Fix those first.
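The tally-and-total workflow in steps 2–4 above can be sketched in a few lines of Python. This is an illustrative sketch only: the bucket names and tagged transcripts are invented for the example, and assume each chat has already been read and manually tagged (step 3).

```python
from collections import Counter

# Hypothetical example data: each chat transcript has been read and
# manually tagged with the bucket(s) it mentions.
tagged_chats = [
    ["Bug: Item X can't be added to cart"],
    ["Shipping cost question"],
    ["Bug: Item X can't be added to cart", "Shipping cost question"],
    ["Sizing question"],
    ["Bug: Item X can't be added to cart"],
]

# Step 4: total the tally marks per bucket and sort by frequency,
# so the most common issues float to the top of the priority list.
tallies = Counter(tag for chat in tagged_chats for tag in chat)
for bucket, count in tallies.most_common():
    print(f"{count:>3}  {bucket}")
```

The sorted output is effectively your prioritization list: whatever bucket tops it is the first thing to investigate or fix.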
Distill your data: How to analyze polls, customer surveys, and interviews
Most of the methods I explain in this guide generate open-ended, subjective insights that need to be analyzed.
To obtain statistically significant results (in other words, accurate data at a 95% confidence level), you’ll need to analyze between 200 and 300 survey responses.
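If you want to sanity-check that sample-size guideline yourself, the worst-case margin of error for a proportion at 95% confidence is z·√(p(1−p)/n), with p = 0.5 and z ≈ 1.96. A quick back-of-the-envelope sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# How the margin tightens as responses accumulate:
for n in (100, 200, 300):
    print(f"n={n}: ±{margin_of_error(n):.1%}")
```

At 300 responses the margin is roughly ±6 percentage points, which is tight enough for ranking your top objections; below 100 responses the ranking gets noisy.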
Sure, if we were lazy and only asked Yes/No or multiple-choice questions, our survey software would shuffle the data into pretty charts ready for our next board meeting presentation.
But no gorgeous chart has ever grown a business. To get the conversion insights we need, we have to do the work.
Fortunately, qualitative researchers have a method that makes this type of analysis more manageable. It’s called qualitative codification.
In The Coding Manual for Qualitative Researchers, Johnny Saldaña describes a “code” in qualitative research as “a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data.”
For the purposes of analyzing surveys and polls to identify conversion opportunities, “codes” are used as a means to categorize. We group codes into “buckets” of similar responses to the questions that were asked.
And because confirmation bias makes us pay more attention to the buckets we agree with or the ones we’re already aware of, counting how many survey responses fit into each bucket is critical.
For example, if we sent a survey to customers of an online tea shop and asked “What nearly stopped you from buying from us?”, after reading through the responses a few times, we might identify the following buckets:
- Not sure if tea is natural
- Shipping cost
- Can’t taste flavors
- Trustworthy company?
At this point, you have to re-read all the responses for this question, modify the buckets as needed, and assign each response to a bucket.
SurveyGizmo allows you to create buckets and analyze responses directly within their platform
In the case of this question (“What nearly stopped you from buying from us?”), counting the total number of mentions for each bucket will reveal the top reason why customers nearly didn’t buy. So if the natural origin of the products were the biggest worry, we’d know we should address this issue on the product pages, landing pages, and other marketing materials.
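A first pass over the free-text responses can even be semi-automated with keyword matching before the human re-read. Everything below is an assumption for the tea shop example — the keyword lists and responses are made up, and anything unmatched still needs manual review:

```python
# Hypothetical buckets for "What nearly stopped you from buying from us?"
# mapped to keywords we'd expect in matching responses.
buckets = {
    "Not sure if tea is natural": ["natural", "organic", "chemical"],
    "Shipping cost": ["shipping", "delivery cost"],
    "Can't taste flavors": ["taste", "flavor", "sample"],
    "Trustworthy company?": ["trust", "legit", "scam"],
}

responses = [
    "I wasn't sure the tea was really organic",
    "Shipping seemed expensive for a small order",
    "Couldn't taste the flavors before ordering",
    "Wondered if the ingredients were natural",
]

counts = {bucket: 0 for bucket in buckets}
unmatched = []
for response in responses:
    text = response.lower()
    hits = [b for b, kws in buckets.items() if any(kw in text for kw in kws)]
    for bucket in hits:
        counts[bucket] += 1
    if not hits:
        unmatched.append(response)  # flag for manual coding

top_bucket = max(counts, key=counts.get)
print(top_bucket, counts[top_bucket])
```

Keyword matching only accelerates the first pass; you still have to re-read the responses and adjust the buckets, exactly as described above.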
Coding surveys with multiple questions gives you an opportunity to extract even richer insights.
For example, say you ask a question like “What can you tell us about yourself?”. After codifying the responses, you could determine specific customer profiles.
Then, you could filter your data to see how respondents who fall into Customer Profile A answered Question 1 (and Question 2, 3, etc).
For example: Let’s go back to the online tea shop. If one of your customer profiles is nicknamed “Busy Moms,” you could sort your data with a few clicks to see how all those busy moms answered the question, “What qualities do you look for when you’re buying tea?”
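Once responses are coded, that kind of segment filtering is a one-liner. Here’s a minimal sketch with invented coded rows — the profile nicknames and answer codes are assumptions for the tea shop example:

```python
from collections import Counter

# Hypothetical coded survey rows: each respondent has a profile code
# (from "tell us about yourself") and a coded answer to the
# "What qualities do you look for when buying tea?" question.
rows = [
    {"profile": "Busy Moms", "quality": "No chemicals"},
    {"profile": "Busy Moms", "quality": "Kid-friendly taste"},
    {"profile": "Athletes",  "quality": "No chemicals"},
    {"profile": "Busy Moms", "quality": "No chemicals"},
    {"profile": "Athletes",  "quality": "High caffeine"},
]

# Slice out one customer profile and count how that segment answered.
segment = [r["quality"] for r in rows if r["profile"] == "Busy Moms"]
print(Counter(segment).most_common())
```

The same filter works for any profile-by-question combination, which is what makes a properly coded survey such a rich dataset.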
It’s an easily accessible customer research motherlode that will give your copy and your marketing strategy a powerful, data-based boost.
What are you going to do next, ecommerce optimizer?
Relying on gut feelings, opinions, and “brainstorming sessions” to find solutions to your website’s conversion troubles won’t move the needle.
You’ll circle around and around… and later realize nothing has changed.
Being “data-driven” is a trendy claim. But being data-informed is so much more valuable.
If you aren’t working on this type of research for your company, I’m afraid you’re neither.
The best time to plant a tree was 20 years ago. The second best time is now.
– Chinese Proverb
Increasing conversions and revenue is not magic. There’s a method:
- Going deep into your analytics can uncover lost revenue opportunities and leaks in your funnels, showing you where to focus to increase your conversions.
- Session recordings enable you to understand how people navigate your website, and pinpoint the areas where they experience issues.
- User testing allows you to test your assumptions about your website’s flow and user experience, while being able to understand your visitors’ thought process.
- Website polls deliver answers to your questions about specific areas of your website. Why aren’t people adding items to the cart? What are they searching for?
- Surveys and customer interviews allow you to go deeper into your customers’ minds. Get a front-row seat to understanding how they talk, how they think, and why they buy from you.
- Chat logs reveal issues, wants, and concerns that otherwise might not make it past your support team.
This conversion research process gives you the tools you need to achieve your revenue targets. At this point, there’s no excuse for not growing.
Is your company fueling its conversion optimization efforts with in-depth research? Do you have questions you’d like me to answer? Let me know in the comments below, or get in touch to see how I can help.