An A/B Testing Process to Get Consistent Results From Your CRO Efforts

Do you equate conversion optimization with A/B testing? Or is A/B testing your main strategy for increasing conversions?

If you answered yes to either of those questions, I’m willing to bet the results of your testing efforts could be described as “average” at best…

Am I right?

If your tests return lackluster results, you’re not alone. As a conversion optimization agency specializing in luxury, fashion, and lifestyle ecommerce, SplitBase talks with many companies that share similar frustrations.

And plenty of digital marketing teams A/B test only because they know they should be testing. (Not exactly a recipe for rousing success.)

Ready for some real talk? If you want to increase conversions, A/B testing is only a part of the process.

Conversion Rate Optimization Framework

Unfortunately, testing tools are marketed as magic spells for increasing conversions. “Tools that anyone can use”… and boom! More money in your pocket!

This couldn’t be further from the truth.

This misleading marketing leads companies to spend thousands of dollars every month on testing tools while only using them on an ad-hoc basis. Often, after less-than-impressive results, these tools quickly become afterthoughts.

Here’s the deal: A/B testing is a way to validate your conversion optimization hypotheses. It’s NOT a way to directly increase conversions.

A/B testing is a method for checking whether your strategy for increasing conversions, raising your average order value, reducing cart abandonment, or whatever your goal might be, is indeed the right strategy. That’s it.

So… If A/B testing isn’t the answer, what is?

The solution to your lackluster test results isn’t a tactic (which is what A/B testing is on its own).

You need a framework that helps you decide what to test — so you’re not just running tests for the hell of it.

“The quality of your hypotheses will determine the quality of your tests, which will determine the quality of your results.”

I’ll get to the framework shortly.

But before I do, I want you to remember that both winning and losing tests are good. What matters most is what you can learn from them.

Think of testing like a science experiment:

1. You create a hypothesis.

2. You perform the experiment (AKA run the test).

3. You document the results and what you’ve learned.
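
To keep step 3 honest, it helps to log every experiment in a consistent structure. Here’s a minimal sketch in Python of what such a log entry could look like; the fields and the example hypothesis are my own illustration, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Experiment:
    """One entry in a hypothesis-driven testing log (illustrative fields only)."""
    hypothesis: str     # what you believe will happen, and why (research-backed)
    metric: str         # the number the test is meant to move
    start: date
    end: date | None = None                             # set when the test ends
    result: str = ""                                    # what happened, win or lose
    learnings: list[str] = field(default_factory=list)  # feeds the next iteration

# Example entry; the hypothesis and date are invented for illustration.
log = [Experiment(
    hypothesis="Showing the returns policy near the add-to-cart button "
               "will reduce purchase anxiety and lift add-to-cart rate",
    metric="add-to-cart rate",
    start=date(2024, 1, 15),
)]
```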

If the experiment “works,” meaning it increases conversions, great! Document the results and look over what you’ve learned.

If it doesn’t work, analyze what went wrong and what did work. Then tweak your hypothesis and relaunch the experiment using what you’ve learned.
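
If you’re curious what “works” means in statistical terms, here’s a rough sketch of the standard two-proportion z-test that sits behind most A/B test readouts. The visitor and conversion counts are invented, and in practice your testing tool does this math for you:

```python
from statistics import NormalDist

def ab_test_result(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-tailed two-proportion z-test: does variant B convert differently from A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, p_value, p_value < alpha

# Illustrative numbers only: 10,000 visitors per variant.
p_a, p_b, p_value, significant = ab_test_result(240, 10_000, 290, 10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  p={p_value:.3f}  significant={significant}")
```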

When you think of it this way, you’ll realize that A/B testing is research. It’s a means to an end, not an end in and of itself.

Your goal should be to learn from your users as you test. Apply your new insights as needed when you’re iterating on a test or implementing new elements on a website.

A/B testing is an important tool in conversion optimization…

But only when you’re following the right process.

Simply implementing website changes that you or your team “think” are improvements is ill-advised, and can put your sales performance at serious risk.

In the majority of cases, changes should be tested. Without testing, any change (big or small) could lose you a lot of money.

Gut feelings, opinions, assumptions based on what competitors are doing… whether these come from your top designer, marketer, CEO, or even your conversion optimizer, they should never drive the changes you make to your website.

Is your company guilty of following a feeling?

The only way to consistently increase conversions is by using a framework

As I mentioned earlier, A/B testing can be a game-changer if you have well-thought-out, strategic hypotheses for what you want to test.

But how do you know what to test? How do you come up with a high-quality hypothesis?

This is where I want to show you a 10,000-foot view of our own methodology. We call it the Testing Trifecta.

When we execute the Testing Trifecta methodology on our clients' ecommerce sites, we generally find a slew of potential improvements to increase sales, the problem areas that are costing them conversions, and of course, hundreds of A/B testing ideas they can use to increase conversions and profits.

[Image: the Testing Trifecta diagram, with Qualitative Research and Quantitative Research as the two bottom circles and Testing at the top]

Tackle the base first to support your testing

In the above Testing Trifecta graphic, there’s a reason why Qualitative Research and Quantitative Research are the two bottom circles. It’s because these two types of research form the base that supports testing.

Qualitative and quantitative research can each be performed by themselves. You’ll undoubtedly get great insights into your customers’ demographics, behavior, desires, fears, you name it.

But if you want to create experiments that drive business growth, you need to perform both types of research, then design research-based hypotheses.

Only then should you begin testing.

Why do you need to do both types of research? Well, let’s look at what happens when you only use quantitative research for conversion optimization (this is the trap many optimizers fall into).

Quantitative research means deep-diving into your analytics. It’s a must for conversion optimization. You have to understand what’s working and what’s not, and proper analytics analysis can reveal countless opportunities for optimization everywhere on your website.

Now, as great as this sounds, basing your test hypotheses purely on quantitative research is limiting. The quality of your hypotheses will suffer. This means your tests will be weaker, and you’ll lose valuable testing time.

Quantitative research on its own is limiting because, simply put, your analytics are only one side of the story.

For example, you might discover…

  • That your most-used PPC landing page only converts at 2.4%, and you’re losing money by driving traffic to it…
  • That you’re getting an unusual bounce rate on specific product pages…
  • Or that the login step of your checkout flow has a huge drop-off rate that needs immediate attention.

This is all great insight, because Step 1 of optimization is about knowing what’s wrong.
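
As a concrete illustration of this kind of quantitative pass, here’s a short Python sketch that computes step-to-step drop-off from funnel counts you might export from your analytics tool. The step names and numbers are invented:

```python
# Visitors reaching each funnel step, e.g. exported from your analytics tool.
funnel = [
    ("product page", 50_000),
    ("add to cart",   6_000),
    ("login",         3_100),   # a big drop here flags the login step
    ("payment",       1_400),
    ("purchase",      1_200),
]

for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    kept = next_users / users
    print(f"{step} -> {next_step}: {kept:.1%} continue, {1 - kept:.1%} drop off")

overall = funnel[-1][1] / funnel[0][1]
print(f"Overall conversion: {overall:.2%}")
```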

But quantitative research does not tell you WHY your landing page is converting badly, or WHY the login step of checkout is experiencing so much abandonment.

The numbers tell you what is happening, but not why it’s happening.

Sure, any test hypothesis based on quantitative research will still be more useful than one based on nothing, but you’ll still be guessing at what solution to test, because you don’t know the root cause of the problem.

This is the equivalent of going to the doctor because you have a migraine, and the doctor giving you a bunch of drugs that “might” work, depending on the unknown root cause of your pain.

You wouldn’t be comfortable with guessing when it comes to your health — so why would you be comfortable with guessing about your business revenue?

Prescription without diagnosis is malpractice
– unknown

Enter qualitative research: the yin to quantitative research’s yang.

Qualitative and Quantitative Research in Conversion Optimization

Qualitative research in conversion optimization includes things like:

  • User testing
  • Analyzing session recordings
  • Mouse tracking
  • Website polls and surveys
  • Customer interviews and surveys
  • Analyzing chat logs
  • And even retail store visits

I explain how to use these methods to gather golden insights in a post about qualitative research for CRO.

What I love about qualitative research is that it enables you to understand WHY certain things are happening. You get to find out why your customers aren’t converting, why they’re dropping off at certain steps, their biggest objections, and even how they behave on your website.

It’s often more effort to execute and analyze qualitative research than it is to analyze quantitative research, which is why many companies and agencies neglect it.

Now, you know you shouldn’t use quantitative research alone, but you shouldn’t use qualitative research by itself either…

If qualitative research is so valuable, why shouldn’t you use it by itself to generate test hypotheses?

One word: efficiency.

Without a doubt, it’s possible to generate strong A/B test hypotheses using only findings from, say, user testing results or customer survey responses.

But if you perform qualitative research without a precise objective, then you’re wasting time. Just think about all the effort it takes to create and deploy customer surveys or analyze hundreds of session recordings without knowing what you want to ask or what you’re looking for.

It’s incredibly inefficient, and any great insight you stumble upon will be pure luck.

But if you guide your qualitative research using quantitative research, then you’re in the money.

[Image: the research loop in A/B testing]

Here’s an example:

Imagine your analytics data (quantitative research) reveals a product page with an unusually low add-to-cart rate.

You’ve got the what. Now you need to know why the add-to-cart rate is so low.

Why aren’t people buying? What are their objections? Is there a bug preventing them from adding the item to their cart?

Your objective is to find out why the add-to-cart rate is so low. You can use the High-End Conversion Engine for luxury brands to get an idea of the problem, then dive deeper using qualitative research methods to get to its core. For example, you could run a poll on that product page, asking prospects, “Is there anything holding you back from buying this product?”

[Image: a Hotjar poll on a product detail page]

You could also check your session recordings for that page to see if there might be a bug preventing people from adding the item to their cart.
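
Once poll answers come in, even a crude keyword tally can surface the most common objections before you read every response in full. A hypothetical sketch, with invented responses and theme buckets:

```python
import re
from collections import Counter

# Invented free-text answers to "Is there anything holding you back
# from buying this product?"
responses = [
    "Not sure about the sizing",
    "Shipping to Canada is too expensive",
    "Couldn't tell if returns are free",
    "sizing chart is confusing",
    "shipping cost",
]

# Crude keyword buckets; a real analysis would code responses by hand.
themes = {
    "sizing": r"\bsiz",
    "shipping cost": r"\bshipping\b",
    "returns": r"\breturn",
}

counts = Counter()
for text in responses:
    for theme, pattern in themes.items():
        if re.search(pattern, text, re.IGNORECASE):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: {n}")    # the biggest objections float to the top
```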

This is how both research methods go hand in hand. If you only do quantitative research, you’ll be guessing the causes of the problems you identify. If you only do qualitative research, you’ll be hoping to find a needle in a haystack.

But when you use quantitative AND qualitative insights together, you can pinpoint what’s wrong, learn why, and boost the quality of your A/B test hypotheses. This supercharges your efficiency, and your team’s, at increasing conversions.

Using this method, our client, fashion retailer Haute Hijab, was able to remove the assumptions from the redesign of their product detail pages. Over roughly four rounds of A/B tests, each followed by analysis and diligent documentation of the learnings, we discovered what needed to be removed from the product pages and what needed to be added. This process increased site conversions by 26.8%.

To win more consistently, rely on the Testing Trifecta

Assuming, guessing, or following opinions when it comes to launching an A/B test or making a change on a website will all lead to the same results I’ve previously mentioned:

  • Low test quality, generating no meaningful impact
  • Time and money lost trying things that don’t work
  • No insight into what works and what doesn’t

But when you leverage the Testing Trifecta to guide your website optimization program, you’ll combine the best of your qualitative and quantitative insights to generate solid, data-driven hypotheses.

You’ll see a higher success rate from your A/B tests, and subsequently your revenue will rise. All due to using a proper optimization methodology.

Don’t be lazy. Always combine qualitative and quantitative insights — and test continuously to keep learning about your customers and growing your business.


If you want to implement the Testing Trifecta within your ecommerce company, click here to request a free proposal.