Cha-ching. There’s the sweet (virtual) sound of another ecommerce sale rolling in.
You might already be making millions from your ecommerce store. But is your site converting as highly as it should be?
Maybe you’ve tried “fixing” your conversion rate without much success.
Our conversion optimization agency has worked with ecommerce stores with modest sales all the way to Fortune 500 companies making over $100M, and we’ve found one thing holds true:
The most common cause of a mediocre conversion rate is a lack of proper research.
Up to 75% of companies rely on nebulous “best practices” and cringeworthy “100 CRO Tests to Run”-style posts to guide their CRO efforts.
Even ecommerce teams who analyze their data daily often struggle to convert data into actionable optimization insights.
And according to Behave.org’s State of Online Testing Survey, over 50% of companies try to increase sales and conversions by simply getting together, brainstorming ideas, and implementing the discussed ideas to see if they have an impact.
AKA, more than half of online businesses approach optimization by saying “Who Knows? Let’s Try It And See What Happens!”
In most cases, any impact from this approach is minimal, if detectable at all.
And at the end of one cycle, the whole routine starts again. Sound familiar?
In this post, you’ll learn:
(I recommend bookmarking this page for future reference. Chances are you’ll want to come back here a few times.)
There are literally hundreds of possible reasons why your efforts to increase conversions aren’t working.
Here are a few of the most common issues:
Imagine you go to the doctor because your knee is killing you. But instead of addressing your knee, he gives you advice for treating headaches. Ridiculous, right? And totally irrelevant.
This is essentially how most ecommerce companies are addressing their conversion issues. Instead of segmenting their data and doing proper research to diagnose what’s hurting conversions, they start randomly changing things all over their website, hoping something will lead to an improvement.
For example: You’re not selling enough of Product X, so you decide to redesign its individual product page, thinking that’s the problem — when in fact, the real problem is hiding inside your checkout flow.
You wouldn’t try to heal a disease without knowing what the disease is and where in your body it’s located.
So why are you trying to fix your conversion problems without diagnosing them first?
In business, as in life, it’s often best to follow your gut.
But in conversion optimization, your gut will waste your time, lead you astray, and can even exacerbate your conversion problem.
Have you ever attended a team meeting where you discussed what website changes to make?
Usually, one of two things happens in this type of meeting:
Personal opinions, gut feelings, and assumptions are the worst things you can possibly base your conversion efforts on.
If you’ve been guilty of this, you’re not alone. A study of nearly 800 marketers at Fortune 1000 companies found that the vast majority of marketers still rely too much on intuition.
As for the few who do rely on data? Well, for the most part, they’re not using it correctly.
Be critical. Question everything. Never assume.
If someone says you should move your website’s navigation menu from the top to the right side of the page — ask them why.
If someone says you should test a blue button instead of an orange button — ask them why.
If someone says, “Hey, I saw our competitors doing X. We should do X, too” — ask them why.
It’s one thing to make a design decision.
It’s something entirely different to fix a usability problem identified by analytics, user testing, session recordings, eye tracking, or all of the above.
Let me share a quick horror story.
A company we worked with was proud to say they were very data-driven.
They were sending out surveys, had their analytics set up perfectly, and were using Optimizely to A/B test.
The truth: it was all a smoke screen.
Even when this company was sending out surveys, they weren’t analyzing the responses to the extent they should have. They weren’t reading individual answers.
It was great that their analytics were well implemented, but they had no one on the team who could perform in-depth analyses.
As for A/B tests, we discovered their past 2 years of testing were a complete waste.
Instead of testing data-driven solutions to problems identified by research, they were launching tests based on their team’s gut feelings about what should be tested: button colors, random headline changes… nothing meaningful.
The result? Their tests never achieved significant uplifts. That means they were making decisions based on false positives and false negatives, all while believing these were the right, “data-driven” decisions.
Eventually, they decided conversion optimization just didn’t work for them.
We estimate that the two years they spent floundering resulted in millions of dollars lost.
So let me ask you: Are you truly making data-driven decisions? Or do you just believe you are? Instead of letting uncontextualized “data” steer your ship, aim to be data-informed.
This one is often the result of #1 and #2 combined. And over 75% of companies that test are guilty.
When you don’t know why your conversions aren’t higher, when your efforts aren’t moving the needle, when you’re not sure what to do, you might try taking shortcuts like…
It’s easy to assume that what your competitors are doing is working, but how do you know they’re not in the same pickle as you?
For example: What if the checkout flow your competitors are currently testing turns out to be the worst thing they’ve ever tried?
You don’t know what you don’t know. It’s good to be aware of what your competitors are doing, but copying them — when they might not have a clue what they’re doing — is a step in the wrong direction.
Now, about those impressive case studies you find all over the web.
You know, the ones claiming that changing the font from Cambria to Arial increased their conversions by 54%, or explaining how changing the size of their call to action button doubled revenue?
Sorry to say, but most of these are bullshit.
I’m not saying the results were made up. I AM saying that we have no idea how the majority of these case studies were tested.
Coming up with numbers isn’t hard. Arriving at numbers that can be trusted is a whole different story.
And in the case of “101 Things to Test”-style blog posts or how-tos, what worked for them might not work for you.
In case I haven’t made this clear yet: every website is different.
Different audiences, traffic sources, visitors’ purchase intent, geography…
For example: You might find a case study explaining how Amazon increased conversions significantly by changing their buttons from blue to yellow. And maybe they did.
But chances are you’ll never be able to replicate tests like this. Most websites don’t have enough traffic to test such small, specific changes.
Evan Miller’s sample size calculator is a good place to start to understand how much traffic your test variations will need.
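If you want a rough sense of the math behind such calculators, here’s a sketch of a standard two-proportion power calculation in Python. (The function and its defaults are my own illustration, not Evan Miller’s exact implementation.)

```python
# Rough visitors-needed-per-variation estimate for an A/B test comparing
# two conversion rates (normal approximation, two-sided test).
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(base_rate, min_lift, alpha=0.05, power=0.8):
    """Visitors per variation to detect an absolute lift of `min_lift`
    over `base_rate` at the given significance level and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.8
    p1, p2 = base_rate, base_rate + min_lift
    p_bar = (p1 + p2) / 2
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / min_lift ** 2
    return ceil(n)

# Spotting a tiny lift (3.0% -> 3.5%) needs ~26x the traffic of a big one:
print(sample_size_per_variation(0.03, 0.005))  # roughly 20,000 per variation
print(sample_size_per_variation(0.03, 0.03))   # well under 1,000 per variation
```

This is exactly why small cosmetic tweaks are out of reach for most stores: the smaller the expected effect, the more traffic the test needs.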
Instead of assuming someone else’s tests or changes will work for you, find your own opportunities for conversions. Base your tests on your own research. That’s how you can move the needle.
Now that you know that…
You might be wondering exactly what your research should involve, and how to execute it so your work can finally be fruitful.
I’ll get into that shortly. Stick with me.
But first, I want to warn you that there’s no magic trick to this process. This is for serious optimizers who are ready to do the research and hard work. There’s no shortcut.
If you’re game, keep reading and I’ll walk you through the process you need.
Or, you can get us to do this process for your ecommerce store by requesting your optimization proposal here.
The conversion research process ties together data from all the research methods I explain below.
You can use this process to create data-driven hypotheses about what to change or test on your website for maximum results.
If you’re just getting started with this type of research, you might find that trying to launch all of these methods at once can be overwhelming — especially if you don’t already have a clear process for analyzing the insights you’ll collect.
The process includes both quantitative and qualitative research methods.
Quantitative data can help find the biggest causes of low conversions, the holes in funnels, and the technical problems that can hurt your user experience. It’s the WHAT.
Qualitative data (think surveys, interviews, and review mining) is the most neglected piece of the puzzle. It’s the WHY.
Did you know that only 39% of companies that A/B test use surveys? Unfortunately, they’re missing one of the most important parts of the process.
You can do quantitative research to find the numbers that highlight problem areas. But your analytics won’t tell you why people aren’t converting.
Qualitative research helps you discover:
Let’s dig into what your customers are thinking and feeling.
Like the 95% of companies that test, you’ve probably already installed a digital analytics tool like Google Analytics.
The obvious thing is that you have to use it. The not-so-obvious thing is that many companies are only scratching the surface of what these tools can do.
Self-diagnose to see where you stand:
If you confidently answered yes to all of these questions, you’re already more advanced than many companies.
And here’s the real reason I asked: A lot of marketers report numbers instead of reporting insights.
Getting numbers is easy. Understanding the meaning of the numbers is the real task. For example, simply reporting a conversion rate is useless, because averages and percentages lie.
Instead, look at how your conversion rate is affected by different factors and segments, such as traffic sources, mobile vs desktop visitors, people who performed site searches, new vs returning…
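As a toy illustration of why blended numbers mislead, here’s a quick Python sketch with invented sessions — the same overall conversion rate can hide one strong segment and one badly underperforming one:

```python
# Eight invented sessions: the blended rate looks like a healthy 50%,
# but segmenting by device reveals desktop at 75% and mobile at 25%.
sessions = [
    # (device, traffic_source, converted)
    ("desktop", "organic", True), ("desktop", "organic", False),
    ("desktop", "paid", True), ("desktop", "paid", True),
    ("mobile", "organic", False), ("mobile", "organic", False),
    ("mobile", "paid", True), ("mobile", "paid", False),
]

def conversion_rate(rows):
    return sum(1 for _, _, converted in rows if converted) / len(rows)

overall = conversion_rate(sessions)
by_device = {
    device: conversion_rate([s for s in sessions if s[0] == device])
    for device in ("desktop", "mobile")
}
print(f"overall: {overall:.0%}")   # 50% -- looks fine on its own
print(f"segmented: {by_device}")   # mobile needs attention
```

The blended 50% would never tell you that mobile is the problem; only the segmented view does.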
I won’t dive too deep into how to use your analytics in this post, since:
The point I’m trying to make here is that simply being able to use Google Analytics and pull your numbers is not enough to increase conversions.
To find where your website needs attention, you have to understand why the numbers say what they say.
When you have mastered numbers, you will in fact no longer be reading numbers, any more than you read words when reading books. You will be reading meanings.
– W.E.B. Du Bois
When you’re able to use your analytics data to identify problem areas and opportunities, THEN you can use qualitative research to dive deeper into what’s causing those problems. This is why we created the Testing Trifecta method, which requires both types of data to be combined to achieve growth and A/B testing success.
Quantitative research tells you the “what”. Qualitative tells you the “why”.
Keep reading to discover some of the most useful approaches to conversion optimization research.
Session recordings enable you to record your website visitor’s screen as he browses your site. Visitors are recorded anonymously, and there’s no audio.
By adding a simple line of code to the header of your website, you can record visitor behavior, including clicks, mouse movements, and typed text.
Then, your optimization team can review the recordings to detect conversion friction points and potential problems with your site’s user experience.
Simply asking your users questions is not enough. Why? Human beings have a tendency to say one thing and then do the opposite.
We’re also pretty terrible at predicting how we’ll feel and react to a given situation.
That’s why watching session recordings can be so helpful. They help you uncover the “unknown unknowns,” which can lead to “aha!” moments.
Pardon me for this analogy, but I like to compare traditional user testing (where the users know they’re being watched) to watching animals in a zoo. You’re observing their behaviors within a limited, constructed space. The animals can see you, and they might adjust their behavior accordingly.
Session recordings, on the other hand, are like watching animals in their natural habitats. When they’re unaware of our presence and unaffected by our guidance, we can observe how they really behave.
We were recently watching visitors go through the checkout flow of a client’s website.
At some steps of the checkout, abandonment rates were high, so we knew we had to dig deeper.
After watching a few hundred session recordings, we discovered that about 15% of the time that someone was going through the checkout, the “Proceed to Next Step” button that led users to the payment step would disappear.
This was a huge find — something that we wouldn’t have been able to figure out using only analytics. The arbitrary disappearance of this button meant that users were getting stuck in checkout, unable to complete their purchase!
Our client had no idea, and it was likely costing them hundreds or thousands of dollars every month.
In another situation, we detected friction at the payment step of a checkout flow. We used session recordings to see what was going on (censoring all sensitive info).
We noticed that many users were entering their credit card number in the “Name on Card” field, which was the first field in the payment step. This resulted in the user having to erase their number and retype it in the proper field once they noticed.
It was friction: a clear usability problem.
Knowing the problem, we were able to completely eliminate it by simply swapping the order of the payment fields.
With the tools available today, session recordings are virtually painless to set up.
Most tools will give you a snippet of code to insert in the <head> section of your website, and that’s it! The tool will start recording after the code is installed.
Know that some tools will automatically censor fields containing sensitive information (such as credit card fields), but a few others require you to add a line of code to a field to censor that information.
Analysis is crucial. Don’t just sit on those valuable insights.
Here are a few tips for setting up session recordings:
Start by using your analytics to detect problem areas on your website that you’d like to explore. You can watch generic session recordings (which can be good for uncovering unknown unknowns), but for efficiency’s sake, we recommend focusing on the areas where you already know there’s a major problem.
That being said, problem area or not, we tend to always record the checkout flow. It’s the most critical point in your user’s path and can provide valuable insights.
Once you find your problem areas, we recommend seeing if the problem affects all of your key segments. Is it an issue unique to a specific device or browser type? To new visitors?
Knowing your target segments is key, since watching session recordings is very time-consuming. Having this info on hand allows you to adjust the filters in your session recording tool so you only see recordings that are related to the problem.
If you’re using session recordings to dig into an issue that applies to both mobile and desktop, watch them in separate groups. For example, begin by watching 100 desktop sessions, then switch to mobile sessions. Batching sessions makes it easier to detect common behavioral patterns.
Create a document to write down all of your notes, questions, and insights as you’re watching. Divide your document by devices and website sections to keep your insights organized in their specific categories.
Most session recording tools allow you to add tags, notes, and stars to recorded sessions. Keep your master notes in a standalone document (hunting for notes buried in individual recordings is a hassle), but still use tags where appropriate and star important recordings.
(Why? If your team asks you to show them the recording of a specific issue you detected, I doubt you’ll want to rewatch everything to find it. Stars and tags help quite a bit.)
For example, if we analyze a checkout flow, we’ll watch a minimum of 100 recordings per device type. Most tools allow you to skip moments where the visitors pause, and to increase the speed of the video. Consider asking a team member to analyze a batch with you to speed up the process.
Yes, analyzing session recordings takes time (as do most qualitative research methods), but it pays off.
Imagine if we’d skipped session recordings for the client with the disappearing checkout button problem. It might have flown under the radar for much longer, compounding the losses month after month.
No time to do all of this yourself? Need help getting it done? We can help you, as we’ve helped companies like L’Oréal, Frank + Oak and Kiehl’s. Request your free proposal here.
Though user testing is popular with software companies, I’ve noticed ecommerce companies still have some catching up to do with this method.
User testing allows you to evaluate the usability of your website by observing how people complete assigned, specific tasks on the site.
You might ask users to:
Enlisting users (who match your desired demographics, of course) to execute tasks helps you understand how your target market uses and navigates your website and its features.
Is your site user-friendly, or unintuitive? Are visitors unsure where to click? Are they having problems checking out?
You might be thinking that user testing sounds similar to session recording, and they’re not too far apart. The biggest difference is that people participating in a user test are aware of it, while session recordings happen in the background as visitors are browsing.
Not only are people participating in a user test aware of it, they also have to execute specific tasks and describe their actions and thoughts aloud. In some cases, users are recorded on video so the optimizer can later analyze their facial patterns as they complete the given tasks.
You are not your user.
While some features of your website and its usability may seem obvious to you and your team, they might not be obvious or simple to first-time visitors.
As you watch other people navigate your website, you might start noticing that some people aren’t using certain elements as intended. And some might have trouble executing an action that seems intuitive to you.
Whether that action is filling a form, finding the product or information they’re looking for, or figuring out how to complete checkout, user testing is incredibly useful to quickly reveal flaws in your user experience.
Knowing those flaws gives you the power to fix them, improve the user experience, and hopefully increase your conversions.
User testing can be done either in person, or online. Let’s go over some pros and cons of each.
I personally prefer UserTesting.com. It’s painless, quick, and gives me the insights I need.
In my opinion, the benefits of conducting in-person user tests over online user tests (for the purpose of conversion optimization) are marginal.
Generally, you should recruit testers who are part of your target audience. If you’re using online tools, you’ll have the chance to pick the tester’s attributes and required qualifications while creating your user test.
Most of the time, these tools also let you…
Five to 10 testers is enough in most cases. Be aware that if you’re running user tests for both mobile and desktop, you should run 5 to 10 tests for each device.
Feel free to use more testers if you want, but know that more than 15 testers is unnecessary and can diminish your ROI.
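The classic reasoning behind these numbers comes from Jakob Nielsen’s usability-testing model: if a single tester has roughly a 31% chance of surfacing any given problem, the share of problems found with n testers is 1 − (1 − 0.31)^n. A quick sketch:

```python
# Share of usability problems found with n testers, per Nielsen's model:
# found(n) = 1 - (1 - L)^n, with L ~ 0.31 from his original data.
def problems_found(n_testers, single_tester_rate=0.31):
    return 1 - (1 - single_tester_rate) ** n_testers

for n in (1, 5, 10, 15):
    print(f"{n:>2} testers -> ~{problems_found(n):.0%} of problems found")
```

Five testers already surface around 84% of problems, and going from 10 to 15 adds only a couple of percentage points — which is why extra testers quickly stop paying for themselves.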
List your website’s micro-conversions. These are small actions, such as creating an account or adding an item to the cart, that lead to the macro-conversion (a purchase).
Break it down like this:
The purchase can’t happen if User Jenny doesn’t go through all of the checkout steps…
and Jenny can’t go through checkout if she didn’t add an item to her cart…
and she can’t add an item to her cart if she can’t find what she’s looking for…
Map out every important action a visitor has to take to make a purchase. This will help you define the tasks you want your testers to do.
ConversionXL recommends defining 3 types of tasks:
Tasks are blunt, one-sentence action items for testers.
Scenarios give tasks context to engage your testers.
For example, if your task is “Find a diamond-encrusted silver bracelet,” the scenario could be “It’s your wife’s birthday in a week, and she’s been dreaming of a pretty new bracelet.”
UserBrain offers a great tip for getting the most value out of your users’ responses:
Avoid asking people what they think by writing something like “What is your first impression of …” or “What do you think about …”.
Testers will only comment on things like color schemes, font choices, layouts, and other visual design elements.
And while this kind of information might be interesting to you, it’s not something you need to hear from usability test participants.
Once your tests are complete, make sure to watch each recording attentively and take notes in a separate document.
For example, a tester may obviously struggle while going through checkout — but in the post-test questions, he’ll state that he encountered no problems at all.
In other words, pay more attention to users’ behavior than to what they say.
Website polls most frequently appear as little panels at the bottom left or right of a web page. They’re essentially mini one- or two-question surveys.
Optimizers can ask website visitors multiple-choice, single-answer, or open-ended questions during specific moments of their visit.
Polls are non-intrusive, and — if they’re set up correctly — can be very effective at getting quick and valuable answers.
It takes very little time for visitors to answer questions, since polls can appear at the exact moment on the page when a question needs to be asked.
Plus, they’re versatile and easy to launch. (Can you tell we love polls?)
Polls gather great insights (quickly and usually painlessly) and you can ask your visitors questions at the exact moment and location you need their insight.
For example, if you’re trying to determine why Product A has abnormally low add-to-cart rates, you can display a poll on the Product A page asking questions like “Were you able to find the information you were looking for?” or “What’s holding you back from purchasing this item?”
Unlike traditional surveys, polls don’t require you to craft an email, send that email, and provide incentives to your users just to get answers. Not to mention that you can only send traditional surveys to people who’ve already given you their email addresses.
So how do you ask questions of people who haven’t converted into leads — who are still simply visitors? You launch a poll.
One of our clients ran a PPC campaign that led visitors to a long, very detailed landing page.
There were no distractions on the landing page, since it was a sales funnel for one product. Most visitors read the whole page and scrolled all the way down to the “Buy” button at the bottom of the page.
Since visitors were now aware of the price, you’d think the checkout process would convert well.
But when we looked at the client’s analytics, we saw that over 50% of people dropped off when they reached the first step of checkout (where they had to decide their order quantity).
Using analytics, we knew the “What”. Now we needed to know the “Why”.
As part of our landing page optimization process, we launched a poll on this step of the checkout asking visitors “Is there anything holding you back from making a purchase?”
If they answered “Yes,” we asked, “What’s holding you back from making a purchase?”
Over a few hours, we accumulated more than 500 responses. After carefully reading and analyzing every single response, we found quite a few issues.
The main issue was that there was a lack of clarity in the copy on that page. Visitors were confused about whether their order was a one-time purchase or a monthly subscription (when in fact, both options were available).
We formulated a hypothesis tackling this problem, launched A/B test variations of the page, and after three weeks, the results were in. One of our test variations led to a 23% increase in orders.
So — using a quick poll that we manually analyzed and codified, we got insights that helped us determine a page’s top issues, launch a data-driven A/B test, and generate a significant increase in revenue.
My friend Talia Wolf recently explained:
While most experts say you should be data-driven, I believe you should be data-informed, yet customer-driven. Your number-one goal should be: make it about the customer. (via KlientBoost)
Fortunately, that’s exactly what polls and surveys allow you to do.
Installing a website polling tool is as simple as installing a snippet of code in your website’s header.
However, choosing the right questions and analyzing the answers — although it’s nowhere near as complex as creating a full-length survey — still requires time, effort, and a few tricks.
(See the pattern here? There’s no shortcut to doing proper research).
Similarly to session recordings, you should dive into your analytics and find a problem area to tackle using polls before deciding to install them. Polling for the sake of polling is useless. And remember that poll answers require manual analysis and coding… so more polls = more time needed for analysis.
Once you’ve found a page or problem area you want to tackle, make sure you have a goal in mind. What are you trying to learn? What are you trying to solve? With a clear objective in mind, it’s time to create the poll.
Polling tools make creating polls easy. Picking the right question(s) is where complexity rears its familiar head.
The questions you ask in your poll are directly related to the quality and usefulness of the answers you receive.
You should have a goal in mind, and a specific problem to investigate. Use these to create a short, concise question for your poll.
Harvard’s program on survey research notes that the ideal question accomplishes three goals:
It’s also important to be as neutral as possible to avoid survey bias (where the respondent’s answer is skewed due to the way the question is formulated).
Pro tip: Avoid starting your questions with the word “Why,” as it can sound accusatory. Subconsciously, the respondent will feel the need to defend himself. Start with “What,” “Was,” or “How” instead.
Your question will vary based on what you’re looking to learn, so there’s no one-size-fits-all question that works all the time.
That being said, there are still some questions we end up using more often than others, such as:
Since your main question is open-ended, your poll will display a text input field for users to type their answers. Filling it out represents a commitment for visitors, especially when there’s no incentive offered for responding.
This doesn’t mean you should avoid open-ended questions in your polls. The trick is to preface them with a Yes/No question to initiate engagement. For example, you could begin by asking “Is there anything holding you back from buying?” and display Yes or No answer options.
If the user clicks No, the poll disappears. But if he clicks Yes, the question could change to “What’s holding you back from buying?” and offer a text input field.
Here’s why this is a great trick: The Yes or No answer doesn’t matter at all. It’s not an insight of any kind. However, it DOES reduce the perceived effort required to answer the poll.
Clicking Yes or No is simple. Typing an answer takes more time. By starting with the Yes or No, you’re enticing more visitors to take the survey. You’re engaging them, and making them more likely to take the few extra seconds to type a real answer.
Now that you have your question, and your question logic is set up, you’re almost ready to launch your poll. But there’s one more thing you should determine: who do you want to see the poll?
Some questions might not apply to all the different visitor segments. Determine on which devices your poll should be displayed, whether your poll should display to new or returning visitors only, to visitors arriving from a certain traffic source, or to everyone.
Also determine if your poll should show immediately when a visitor reaches the page with the poll; after X seconds; when the visitor takes a defined action; or only on exit intent.
Jen Havice of Make Mention Media says that improper timing is one of the biggest mistakes businesses make when they use polls:
“They’re either having the polls pop up too soon or on pages not relevant to the poll. For instance, if you want to find out why people are leaving without making a purchase, don’t ask them while they are still actively navigating through your site. Wait until they have gotten into the checkout process and then use an exit-intent, one-question pop-up.”
Finally, once your poll has accumulated enough responses, you’ll have to analyze them. This is as important as creating the poll itself, but often overlooked.
Analysts are prone to multiple biases, analysis fatigue, and misinterpretation. Correct analysis is key, so we’ll dive into how to analyze poll and survey responses below.
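For a sense of what that manual analysis looks like in practice, here’s a minimal sketch of “coding” open-ended poll responses: each response is read and tagged with one or more category labels, and tallying the labels surfaces the top objections. (The categories and responses below are invented for illustration.)

```python
from collections import Counter

# Each open-ended response is read and manually tagged with category codes.
coded_responses = [
    ("Not sure if this is a one-time purchase or a subscription", ["pricing-clarity"]),
    ("Is it one payment or monthly??", ["pricing-clarity"]),
    ("Shipping cost only shows up at the very end", ["shipping-cost"]),
    ("Couldn't find the full ingredients list", ["missing-info"]),
    ("Too expensive once shipping is added", ["shipping-cost", "price"]),
]

tally = Counter(code for _, codes in coded_responses for code in codes)
for code, count in tally.most_common():
    print(f"{code}: {count}")
```

With a few hundred real responses, the ranking of codes tells you which objection to attack first.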
While website polls usually target both new visitors and existing customers, customer surveys are sent via email to your existing customers.
Surveys are much more complex than polls. They ideally contain 8 to 10 questions and may offer an incentive to customers to encourage participation.
Plus, creating survey questions requires more thought, since there’s a greater risk of writing biased questions that can invalidate your response data.
Questions are usually a mix of open-ended and single- and multiple-choice questions that are mostly used to segment data during analysis.
Send surveys to existing customers specifically to discover what made them convert.
By asking a few select questions, you can find out what made them hesitate before buying, why they bought, and who they are as a customer.
This crucial info helps you define your different customer types, nail down their buying attributes, and find correlations between common survey answers and customer types.
We launched a survey for a client that sells natural skin care products. We wanted to determine why customers bought what they bought, if there was anything that made them hesitate, and who they were as customers.
The client sent the 9-question survey to customers who had bought from them in the past month. As an incentive to take the survey, they offered three $50 gift cards to be randomly awarded in three days.
Three days and over 5000 responses later, not only had we determined five main types of customers, from “Busy, well-paid moms with young kids. Focusing on making health a priority and looking to get rid of chemicals” to “Crossfit athletes where natural products are no-brainers, but having a hard time finding ones of great quality”…
… but for each of these customer types, we had also identified (among many other things):
In the end, we extracted over 50 major data-driven action items from the survey responses.
This data is now being used to improve the messaging of the client’s social media marketing, increase the effectiveness of their PPC campaigns, and create landing pages with higher relevance for each audience.
Ultimately, this data was used as the backbone of a website redesign, which nearly instantly led to a 22% sales increase.
How is this possible, you ask? Simple: This data allowed us to create highly targeted messages such as product descriptions and visuals that were relevant to the client’s audience. We were able to address objections and doubts before they even became problems.
Creating effective, unbiased surveys that result in the types of actionable insights we got in the case study described above isn’t as easy as it sounds.
Survey design is a science. There are many things to watch out for, and one single wrong word in a question can skew all the answers.
We won’t cover every aspect of survey design in this post, but here are some of the main action items:
Begin the survey with one or two simple questions (either single or multiple-choice) to reduce the perceived effort of taking the survey, much like you’d do with a website poll.
For example, if you’re selling items for both genders, you could ask “Are you a man or a woman?”. This can also be useful for analysis, as you’ll be able to see how the answers to your other questions vary for each gender.
When you’re creating additional questions, don’t forget your business goals. It’s not uncommon to see surveys from large companies full of useless questions that were probably added last-minute by some executives.
You want to limit your questions to 10 to encourage completion, so it’s imperative to keep them as specific as possible and ensure the insights you get are useful and actionable.
A quick trick to see if the questions you come up with are even worth considering: ask yourself “What type of information can I glean from these responses?” and then, more importantly, “What will this information enable me to do?”
Are you asking something that’s nice to know, or something actionable that will inform your strategy and deepen your customer knowledge?
Paul Dunstone of Feedback Lite recommends focusing on specific questions that are designed to produce specific outcomes. Instead of open-ended questions, such as “How can we improve our checkout page?” he suggests a question like “Are there any parts of our checkout process you find difficult to complete?”
The difference seems minor, but the latter is more specific and will produce answers that are precise and actionable.
He also recommends including mostly multiple-choice questions instead of open-ended questions to make answers easier to analyze and turn into stats.
Now, there are many different opinions on this approach, but like many other conversion optimizers, I disagree with asking mostly multiple-choice questions.
Limited questions produce limited answers, and your goal shouldn’t be to make your survey easy to analyze. The goal of your survey is to get insights that will lead to business growth. I don’t care if it takes 10 hours instead of one to analyze qualitative answers.
Let’s say you’re a clothing company. You ask your customers, “What is the main thing (Company) should improve?” and give multiple choices:
This question would largely limit the quality of the insights you can gather. Because you work at this company, you wrote down answer choices based on your assumptions about what customers say are the biggest flaws.
Even if they’re things you hear frequently, humans have the tendency to remember the things they also agree with the most. It’s confirmation bias at its most basic.
But what if most of your customers have issues with the sizing of the clothes? Or had a problem during checkout, and that’s what they want (or need!) to see improved?
Only open-ended survey questions can reveal insights that you didn’t consider. [Tweet this]
If you have a good process for codifying your survey data, there’s no reason why text answers can’t be turned into statistics either.
Alex Birkett of ConversionXL mentions a few questions they use (and that we at SplitBase also love). Here are a few:
Let’s admit it, we’re plagued by biases. Being objective can be a challenge, but it’s critical when creating survey questions.
On the Shopify Plus blog, Ott Niggulis advises:
…avoid universal words — like “always,” “never,” “only,” and “just” — as well as imprecise words — like “often,” “usually,” “generally.” For one person, the word “often” could mean once or twice a week, while for someone else it could be once or twice a month.
You must also avoid leading questions, or those that lead people toward answering in a particular manner. SurveyMonkey gives great examples of leading questions:
“1. Bad Question: How short was Napoleon?
The word “short” immediately brings images to the mind of the respondent. If the question is rewritten to be neutral-sounding, it can eliminate the leading bias.
Good Question: How would you describe Napoleon’s height?
2. Bad Question: Should concerned parents use infant car seats?
The term “concerned parents” leads the respondent away from the topic at hand. Instead, stay focused by only including what is needed in the question.
Good Question: Do you think special car seats should be required for infant passengers?”
Once you’ve created the survey, it’s time to send it. This sounds like the easy part, and it’s true that it’s not as complex as the actual survey creation, but keep a few things in mind to maximize responses:
This isn’t an ecommerce survey email, but it’s so good I couldn’t resist sharing it:
Here’s why it’s great:
And here’s an example of a terrible survey email:
It’s awful because…
Can you believe this email and survey were created by a “leading” market research firm? I’ll say it — it’s sad.
We frequently send surveys on behalf of clients who contract our conversion optimization services.
At this point, we’ve sent over a million of these emails and tested many different formats. Although the email should be personalized to match your company’s tone and “voice of customer,” here’s a template that we’ve found works great:
When you first shopped with [Company], we promised we would leave no stone unturned in providing [short, benefit-driven description of the company’s products].
And we want to make sure we keep that promise.
If you could take just 3 minutes and tell us a little more about your [Company] experience by answering a few quick questions…
We’ll put your name into our draw and you could win one of three $50 [Company] gift cards.
Here’s the link to the few questions we prepared for you:
[Link to Survey]
Fill it out before [Date in 3 Days], and get a chance to win!
Thank you so much,
No time to do all of this yourself? Need help getting it done? We can help you, as we’ve helped companies like L’Oréal, Frank + Oak and Kiehl’s. Request your free proposal here.
Phase 1 of customer surveys is creating and sending the survey. Phase 2 is analyzing responses.
Just as with polls, analyzing answers to open-ended questions will take time, but that’s where the real insights are hiding.
We use a codification methodology to turn our qualitative insights into actionable data. We’ll get into the details of how to do this below (or click here to read it now).
Customer interviews are essentially customer surveys that you conduct in-person or over the phone.
In addition to all the reasons you’d want to launch a website poll or send surveys to your existing customers, customer interviews give you the flexibility of deciding which questions to ask as the interview goes on.
This approach is particularly useful after doing an online customer survey. If you want to know more about a customer’s specific survey answer, you can follow up with a call (with their permission), ask for clarification, and ask additional questions you didn’t include in the survey.
The interviews we conduct for clients usually last between 15 and 30 minutes.
You can do interviews over the phone, in person, using Skype…
We prefer in-person or via Skype, so we can see the interviewee’s real-time emotional reactions.
Sure, you could pick up the phone right now and dial any contact you find in your CRM.
But because the main purpose of customer interviews is to dig deeper into a customer’s survey responses, it’s best to schedule a time to talk — especially if you’re going to chat for 15 to 30 minutes.
We’ve found the best way to go about this is to ask customers within the survey if we can follow up with them. I love this question from Ryan Levesque of the Ask Method:
Last, I may follow up with a few people personally to learn a little more about your situation… Would you be open to chatting for a few minutes, on the condition I PROMISE not to try to sell you something?
Offer Yes and No survey options. If the respondent answers “Yes,” ask for their phone number.
Just like you’d do for a survey, prepare your questions and try to make them as objective as possible. The same advice for creating questions for surveys applies to creating questions for customer interviews.
Shanelle Mullin of ConversionXL emphasizes the importance of focusing on retention when interviewing existing customers:
Your goal should be to ensure your current customers stay customers. Were they happy with the initial purchase? What prompted them to make additional purchases, if any?
She recommends sticking to the “only ask for what you need” rule, like you’d do in a survey:
Your visitors and customers want to help you, but you can only ask so much of them. Think about how you’ll use the data you’ll collect from each question. If you won’t use it, don’t bother asking the question at all.
Show up on time, make the call, and remember that interviews are interactive! Don’t be afraid to ask follow-up questions to dig deeper into what your interviewee is saying.
Fellow optimizer Jen Havice recommends recording and transcribing interviews:
“Record the call and then have someone transcribe it in full. It’s too difficult and distracting to take adequate notes while you’re interviewing someone. You’ll spend precious cognitive energy just trying to figure out what to write down, and before you know it, you have stopped listening.
From a copy perspective, you can get amazing voice of customer data to pull from, but you need the words verbatim.”
A few years ago, I was doing phone interviews to validate ideas for a company I was building. These interviews were part of what led me to build a product nobody wanted, which I learned one year and thousands of dollars too late.
Here’s what happened: Instead of hearing the warning signs from my interviewees (AKA the negative stuff I didn’t want to hear), I was only paying attention to the positive things they were saying. And subconsciously, I only remembered the things that validated my hypotheses.
Without knowing it, I was convinced people were telling me they needed the product I was building. The reality was they were saying the complete opposite.
This is called confirmation bias. ScienceDaily explains it as “the tendency to search for or to interpret information in a way that confirms one’s preconceptions, leading to statistical errors.”
Confirmation bias happens subconsciously, so it’s hard to catch ourselves doing it. But here’s a trick: Focus on the opposite of what you want to believe is true. Find reasons why your hypothesis could be false.
Finally, don’t forget you’ll have to analyze the interview responses. I recommend using a spreadsheet to record the answers to the questions you asked.
Then, codify and analyze the answers as you would with survey and poll answers. [Click here to jump to the How to Analyze section]
Above, I’ve explained the process for conducting customer interviews within the “formal” process of conversion research. It’s got a structure and a clear goal.
That said, you should talk to customers as often as possible — not just during your annual or biannual conversion research!
Every Thursday I get the names of 6 or 7 customers who bought in the past 2 weeks, and I just call them.
Here’s the trick: If you are the founder or the CEO — don’t tell them.
If they know you are the founder or CEO, they won’t give you real feedback. They won’t want to hurt your feelings, knowing that you created the solution they’re using.
“Smile and Dial” is a way for me to really understand who my core customers are.
Does your website have a live chat tool? If it does, don’t ignore the questions customers ask in the chat.
Surveys and polls are obvious data-gathering methods — after all, that’s what they’re for.
Live chat platforms are ostensibly customer support tools, but they’re valuable for more than customer service: they’re also excellent sources of customer data.
For the same reasons you’d use polls and surveys: to discover “unknown unknowns,” to swipe the exact words and phrases your customers use, to understand how they shop, and so on…
On top of that, customers who can’t find the information they’re looking for or who are experiencing a technical problem often turn to live chat. The information a visitor shares while chatting with a support rep is info that you might not be able to capture with a poll or survey.
The majority of live chat tools allow you to tag and download the chat logs in PDF or CSV format.
Once you download the log, you’ll want to read through the chats to get an overall impression of the data that’s in there. Aim to find patterns of similar questions that are being asked, or issues that are being brought up.
Just like you’d do during the codification of a survey or poll, you’ll want to create “buckets” of commonalities. Write down the common issues and questions people are asking through the chat, and add them to a spreadsheet.
For example, if I read multiple times that customers are having trouble adding a certain item to their cart, I’ll create a bucket/spreadsheet column named “Bug: Item X can’t be added to cart”.
Now that you’ve got the most frequent topics written down, you’ll want to codify the answers to see which ones come up most frequently. Read through the logs again and add a tally mark in the appropriate column/bucket every time something fitting a bucket is mentioned.
Once you’re done, you should have multiple columns, each describing an issue or question you identified from the chat logs. The cells below each column should contain a series of tally marks representing how often each item was brought up.
At this point, add up the total for each column. Knowing the number of times each bucket issue was mentioned allows you to understand the importance of each, and prioritize accordingly.
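The tally-and-total routine above is easy to script once the chat log is exported. Here’s a minimal sketch in Python, assuming a hypothetical export where each chat message is a dict with a `message` field, and hand-picked bucket keywords (in practice you’d refine the keywords after a first read-through of the logs):

```python
from collections import Counter

# Hypothetical buckets: a label for each spreadsheet column, mapped to
# keywords that signal the issue in a chat message.
BUCKETS = {
    "Bug: item can't be added to cart": ["cart"],
    "Question: shipping times": ["shipping", "delivery"],
    "Question: sizing": ["size", "fit"],
}

def codify_chats(rows):
    """Add a tally mark to every bucket a message mentions."""
    tally = Counter()
    for row in rows:
        text = row["message"].lower()
        for bucket, keywords in BUCKETS.items():
            if any(kw in text for kw in keywords):
                tally[bucket] += 1
    return tally

# Stand-in rows; with a real CSV export you'd use csv.DictReader instead.
rows = [
    {"message": "I can't add the green tea to my cart"},
    {"message": "What are your shipping times to Canada?"},
    {"message": "Is there a delivery estimate for the sampler?"},
]
for bucket, count in codify_chats(rows).most_common():
    print(f"{count:>3}  {bucket}")
```

The sorted totals give you the prioritized list directly: the bucket at the top of the output is the issue to fix first.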
Are there major objections that keep being brought up? Of the objections mentioned, which are most frequently mentioned?
What are the bugs customers experience most often? Fix those first.
Most of the methods I explain in this guide generate open-ended, subjective insights that need to be analyzed.
In order to obtain statistically significant results (in other words, data you can trust at a 95% confidence level), you’ll need to analyze between 200 and 300 survey responses.
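That 200–300 range corresponds to a margin of error you can check yourself with the standard formula for a proportion. A quick sketch, assuming a 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion at ~95% confidence (z = 1.96).
    p = 0.5 is the worst case, giving the widest possible interval."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (200, 300, 1000):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
```

At 200 responses the margin is roughly ±7%, at 300 roughly ±6%, which is tight enough to rank your buckets with confidence; going to 1,000 responses only narrows it to about ±3%.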
Sure, if we were lazy and only asked Yes/No or multiple-choice questions, our survey software would shuffle the data into pretty charts ready for our next board meeting presentation.
But no gorgeous chart has ever grown a business. To get the conversion insights we need, we have to do the work.
Fortunately, qualitative researchers have a method that makes this type of analysis more manageable. It’s called qualitative codification.
In The Coding Manual for Qualitative Researchers, Johnny Saldaña describes a “code” in qualitative research as “a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data.”
For the purposes of analyzing surveys and polls to identify conversion opportunities, “codes” are used as a means to categorize. We group codes into “buckets” of similar responses to the questions that were asked.
And because confirmation bias makes us more likely to pay more attention to the buckets we agree with or the ones we’re already aware of, counting how many survey responses fit into each bucket is critical.
For example, if we sent a survey to customers of an online tea shop and asked “What nearly stopped you from buying from us?”, after reading through the responses a few times, we might identify the following buckets:
At this point, you have to re-read all the responses for this question, modify the buckets as needed, and assign each response to a bucket.
SurveyGizmo allows you to create buckets and analyze responses directly within their platform
In the case of this question (“What nearly stopped you from buying from us?”), counting the total number of mentions for each bucket will reveal the top reason why customers nearly didn’t buy. So if the natural origin of the products was the biggest worry, we’d know we should address this issue on the product pages, landing pages, and other marketing materials.
Coding surveys with multiple questions gives you an opportunity to extract even richer insights.
For example, say you ask a question like “What can you tell us about yourself?”. After codifying the responses, you could determine specific customer profiles.
Then, you could filter your data to see how respondents who fall into Customer Profile A answered Question 1 (and Question 2, 3, etc).
For example: Let’s go back to the online tea shop. If one of your customer profiles is nicknamed “Busy Moms,” you could sort your data with a few clicks to see how all those busy moms answered the question, “What qualities do you look for when you’re buying tea?”
It’s an easily accessible customer research motherlode that will give your copy and your marketing strategy a powerful, data-based boost.
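If your coded responses live in a spreadsheet or a script, that profile filter really is a one-liner. A minimal sketch, with hypothetical field names (`profile`, `qualities`) standing in for your own coded columns:

```python
# Hypothetical coded survey data: each row is one respondent, with the
# profile bucket assigned during codification plus their other answers.
responses = [
    {"profile": "Busy Moms", "qualities": "organic, caffeine-free"},
    {"profile": "Crossfit Athletes", "qualities": "clean ingredients"},
    {"profile": "Busy Moms", "qualities": "quick to brew, organic"},
]

def answers_for(profile, rows, field):
    """Pull one question's answers for a single customer profile."""
    return [r[field] for r in rows if r["profile"] == profile]

print(answers_for("Busy Moms", responses, "qualities"))
```

In a spreadsheet this is just a filter on the profile column; the point is that once codification assigns every respondent a profile, every other question can be sliced by that profile for free.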
Relying on gut feelings, opinions, and “brainstorming sessions” to find solutions to your website’s conversion troubles won’t move the needle.
You’ll circle around and around… and later realize nothing has changed.
Being “data-driven” is a trendy claim. But being data-informed is so much more valuable.
If you aren’t working on this type of research for your company, I’m afraid you’re neither.
The best time to plant a tree was 20 years ago. The second best time is now.
– Chinese Proverb
Increasing conversions and revenue is not magic. There’s a method:
This conversion research process gives you the tools you need to achieve your revenue targets. At this point, there’s no excuse for not growing.
Is your company fueling its conversion optimization efforts with in-depth research? Do you have questions you’d like me to answer? Let me know in the comments below, or get in touch to see how I can help.