The A/B Testing Process You Need to Get Better CRO Results

In the U.S. alone, digital ad spending rose by more than $75 billion from 2020 to 2022, and similar increases have been seen globally.

These increases aren’t entirely by advertisers’ choice. You’d probably agree from firsthand experience that soaring ad costs have played a big part. Higher customer acquisition costs are cutting more and more into the revenue and profit of ecommerce brands.

And we haven’t even mentioned the usual challenges brands like yours face, such as conversion rates or average order values that aren’t what you’d like them to be.

A/B testing—“split testing” of two versions of a web page or other asset to see which performs better—is more important than ever. But it has to be done right. We’ll talk about the proper way in a bit, but let’s set the record straight first.

A/B testing as part of a conversion rate optimization (CRO) framework

A/B testing isn’t a way to directly increase your conversions as so many testing tools claim in their marketing. It’s a way to validate your conversion optimization hypotheses and determine whether your strategy for increasing your average order value, reducing cart abandonment, or reaching other goals is the right one.

The solution to lackluster A/B test results isn’t a tactic, which is what A/B testing is on its own. You need a larger framework to help you decide what to test so your efforts don’t go to waste.

The testing trifecta: A data-driven approach to A/B testing for DTC brands

How do you come up with high-quality hypotheses that are worth testing? There are various methodologies, but the tried and true framework we use is called the Testing Trifecta. It routinely helps us find a slew of website issues, areas of opportunity, and A/B testing ideas to increase conversions and profit for our clients. It can do the same for you. 

As we give you the rundown on this framework, notice that the two bottom circles in the graphic above are analytics and human. These represent quantitative and qualitative research, respectively, and form the base that supports testing. You’ll see why both are essential next. 

1. Quantitative research

Step one of the Testing Trifecta is analytics analysis to help you understand what’s working, what’s not, and where the conversion optimization opportunities lie. This might mean digging into Google Analytics data, doing mouse tracking or eye tracking, analyzing heatmaps or scrollmaps, and more. In any case, such quantitative research can reveal all sorts of insights—or issues like these (one way to surface them is sketched after the list):

  • Your most-used PPC landing page only converts at 2.4%, and you’re losing money by driving traffic to it. 
  • Certain product pages have an unusually high bounce rate. 
  • The login step of your checkout flow has a huge drop-off rate that needs immediate attention.
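
If you’ve exported per-landing-page sessions, bounces, and conversions from your analytics tool to a CSV, a few lines of pandas can flag the pages worth a closer look. This is only a sketch; the file name, column names, and thresholds are assumptions for illustration.

```python
# A minimal sketch (not tied to any specific analytics tool) of the kind of
# quantitative check described above. Assumes a hypothetical CSV export with
# columns: page, sessions, bounces, conversions.

import pandas as pd

df = pd.read_csv("landing_pages.csv")

df["conversion_rate"] = df["conversions"] / df["sessions"]
df["bounce_rate"] = df["bounces"] / df["sessions"]

# Flag pages with enough traffic to matter but a below-target conversion rate
# or an unusually high bounce rate; the thresholds here are assumptions.
flagged = df[
    (df["sessions"] > 1_000)
    & ((df["conversion_rate"] < 0.025) | (df["bounce_rate"] > 0.70))
]

print(
    flagged.sort_values("sessions", ascending=False)[
        ["page", "sessions", "conversion_rate", "bounce_rate"]
    ]
)
```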

While these are great finds, basing your test hypotheses purely on quantitative research is limiting. After all, it doesn’t tell you why your landing page is converting badly or why the login at checkout is experiencing so much abandonment.

True, a hypothesis based on analytics analysis is more useful than one based on nothing, but you’ll still be guessing at what solution to test because you don’t know the cause of the problem.

2. Qualitative research

To get to the root cause and the right solution, you need the human element of the trifecta—qualitative data. In conversion optimization, this data can come from a number of sources:

  • User testing
  • Analyzing session recordings
  • Website polls and surveys
  • Customer interviews and surveys
  • Analyzing chat logs
  • Retail store visits

In other words, step two is doing customer research to understand why your target audience isn’t converting, why they’re dropping off at certain steps, and so on. Not to mention uncovering their biggest objections, their behaviors, and their expectations of your brand, offerings, and ecommerce site.

Info like this is incredibly valuable, but, as with quantitative data, it shouldn’t be the sole basis for your conversion optimization hypotheses.

Without a doubt, it’s possible to generate strong A/B test hypotheses using only findings from user testing results or customer survey responses. But think about all the effort it takes to create and deploy customer surveys or analyze hundreds of session recordings. 

Simply put, if you perform qualitative research without a precise objective, you’re working harder, not smarter. Any occasional stumbles onto great insights will be pure luck.

3. Roadmap development

Qualitative and quantitative research can each be performed by themselves and get you solid insight into your customers’ demographics, user behavior, desires, fears, and more. But if you want to create experiments that drive business growth, you need to do both types of research and then design research-based hypotheses. What does that look like in practice?

Research loop in A/B testing

Imagine your analytics data reveals a product page with an unusually low add-to-cart rate. You’ve got the what, but now your objective is to find out why your add-to-cart rate is so low.

With your goal in mind, you can use the High-End Conversion Engine for luxury brands to get an idea of the problem and then dive deeper using qualitative research methods to get to the core of the issue.

For example, you could run a poll on that product page asking prospects, “Is there anything holding you back from buying this product?”

Hotjar poll on PDP
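
Once responses start coming in, even a rough tally of the free-text answers can point to the most common blocker. Here’s a minimal sketch, assuming you’ve exported the poll responses to a CSV; the file name, column name, and theme keywords are hypothetical.

```python
# A rough sketch of tallying open-ended poll responses by theme. The export
# format, "response" column, and keyword lists are illustrative assumptions.

from collections import Counter

import pandas as pd

responses = pd.read_csv("pdp_poll_responses.csv")["response"].dropna().str.lower()

# Illustrative theme keywords; tune these to your own products and customers.
themes = {
    "price": ["price", "expensive", "cost"],
    "shipping": ["shipping", "delivery"],
    "sizing": ["size", "fit"],
    "trust": ["review", "quality", "return"],
    "bug": ["error", "broken", "button", "won't add"],
}

counts = Counter()
for text in responses:
    for theme, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme:10s} {n}")
```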

You could also check your session recordings for that page to see if there might be a bug preventing people from adding the item to their cart.

Let’s say the issue was a bug; your hypothesis would be that squashing that bug would boost your add-to-cart rate. You could then A/B test and validate that premise. 

See how both research methods go hand in hand to help you efficiently pinpoint what’s wrong, why, and the best solutions? This methodology works just as well for multivariate testing, a more complex method that tests combinations of multiple page elements against one another.

The payoff you can expect from A/B testing done right 

To be clear, using an A/B testing framework doesn’t guarantee that every hypothesis you come up with will be spot on. But you’d better believe it increases your chances of identifying effective solutions faster. And don’t forget that winning and losing tests are good, as long as you learn from them. 

If an experiment “works,” meaning it increases conversions, great! Document the results and make note of what you learned. If an A/B test doesn’t work, analyze what went wrong and what did work. Tweak your hypothesis accordingly, relaunch the experiment, and compare the results. Then, rinse and repeat to get closer to your conversion goal! 

Fashion retailer Haute Hijab’s experience is just one of many proofs that this process works. Through four rounds of A/B tests, followed by analyses and diligently documenting learnings, the brand removed assumptions from the redesign of its product detail pages. As a result, site conversions increased by 26.8%.

What’s worth testing? 

To see results from split testing, you have to be smart about what you choose to experiment on. Some of the highest-impact elements on your website may include the following:

  • Your website’s theme, navigation, and page layout or structure
  • Your offer, product assortment, bundles, upsells, and cross-sells
  • The content that lives on your site, including your website copy, calls-to-action, and photos, videos, or interactive elements
  • The underlying messaging your website copy and marketing campaigns or materials are based on

But that doesn’t mean you can pick one of the above at random, run a test, and expect performance improvements. Prioritization is essential to avoid spinning your wheels and wasting your resources. For instance, you might prioritize test ideas that fall into either of these categories (a simple way to score them is sketched after the list):

  1. They address severe issues that impact a large percentage of potential customers
  2. They would be fairly easy to implement but have an outsized positive impact
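
One lightweight way to turn those two categories into a ranked backlog is to score each idea on severity, reach, and effort. The 1–5 scales and the formula below are our own illustration, not a prescribed framework.

```python
# A minimal, hypothetical prioritization sketch: score each test idea on how
# severe the issue is, how many visitors it affects, and how hard it is to
# implement, then rank. Scales and formula are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class TestIdea:
    name: str
    severity: int  # 1-5: how badly the issue hurts conversions
    reach: int     # 1-5: share of potential customers affected
    effort: int    # 1-5: implementation effort (higher = harder)

    @property
    def priority(self) -> float:
        # Favor severe, wide-reaching issues and easy wins, per the two
        # categories above.
        return (self.severity * self.reach) / self.effort


ideas = [
    TestIdea("Fix add-to-cart bug on PDP", severity=5, reach=4, effort=2),
    TestIdea("Rewrite landing page messaging", severity=4, reach=3, effort=3),
    TestIdea("Reorder footer links", severity=1, reach=2, effort=1),
]

for idea in sorted(ideas, key=lambda i: i.priority, reverse=True):
    print(f"{idea.priority:5.1f}  {idea.name}")
```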

How to set up A/B tests on Shopify sites

Once you’ve decided what to test first, what next? There are five steps you’ll need to take. 

1. Develop a rock-solid hypothesis

Again, by this point, you’ll have identified a problem area or opportunity worth exploring. Now, you’ll need to put some thought into potential solutions.

For example, say your landing page copy isn’t quite cutting it, and it’s hurting your conversion rate. You’ve done customer interviews to get a better grip on your target customers’ pains, goals, and conversion triggers and to gather voice of customer data. There are a few common threads across the interviews that could make for more effective messaging. 

Your hypothesis would be that updated, research-backed messaging will improve conversions. 
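
It helps to write that hypothesis down in a structured way so the observation, the research behind it, and the success metric live in one place you can refer back to later. Here’s a minimal sketch; the field names and example values are illustrative, not a required format.

```python
# A minimal sketch of documenting a hypothesis before testing. The structure
# and example values are assumptions for illustration.

from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    observation: str        # what the quantitative data showed
    research_insight: str   # what the qualitative research revealed
    proposed_change: str    # the variation you'll build
    primary_metric: str     # the metric that decides the test
    expected_outcome: str   # the change you expect to see
    sources: list[str] = field(default_factory=list)


landing_page_copy = Hypothesis(
    observation="Main PPC landing page converts at 2.4%, below break-even",
    research_insight="Interviews show shoppers care most about fit and easy returns",
    proposed_change="Rewrite hero copy around fit guarantees and free returns",
    primary_metric="Landing page conversion rate",
    expected_outcome="Conversion rate improves versus the current copy",
    sources=["Customer interviews", "On-site poll responses"],
)
```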

2. Determine how many variations you can test

Continuing with our example above, you’ll need to figure out whether testing all of the potential messaging pillars you’ve identified is doable. Sample size is a key consideration here; you can use a sample size calculator like Optimizely’s to figure it out.

You need enough eyes on each variation to get reliable test results. But you also don’t want to be forced to drag out a test for months on end until you can reach the necessary sample sizes for each one.

To illustrate, it wouldn’t make much sense for an ecommerce site that only gets about 25,000 unique visitors per month to test four variations of on-site messaging if they needed a sample size of 16,000 per variation. It would take nearly three months to get the 64,000 unique website visitors needed and reach the minimum for each variation. 

Not only would that be inefficient, but it could also tie up resources needed for other high-priority tests. In such a case, it would be better to start by testing one carefully chosen variation against your control. With only two versions to fill instead of four, you’d reach the required sample size in roughly half the time, much closer to the typical four-week testing window.
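
If you want to run the numbers yourself rather than rely solely on a calculator, here’s a minimal sketch using statsmodels’ power calculations. The baseline conversion rate, the lift you want to detect, and the traffic figure are illustrative assumptions, so plug in your own.

```python
# A rough sketch of the sample size and duration math described above.
# Baseline rate, detectable lift, and monthly traffic are illustrative.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.024          # current conversion rate (2.4%)
target = 0.030            # smallest lift worth detecting (to 3.0%)
monthly_visitors = 25_000
n_arms = 2                # control + one variation

effect_size = proportion_effectsize(target, baseline)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)

total_visitors = n_per_arm * n_arms
weeks_needed = total_visitors / (monthly_visitors / 4.345)  # ~4.345 weeks/month

print(f"Sample size per arm: {n_per_arm:,.0f}")
print(f"Estimated duration:  {weeks_needed:.1f} weeks")
```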

3. Set up test variations

It’s at this point that many people pick a testing tool (like Optimizely, Convert, or VWO), do the quick setup steps, use the provided visual editors or no-code features to create their variations, and hope for the best. But, especially if you plan to run complex tests and a lot of them, this can get you into trouble.

Code generated automatically by visual editors can result in browser incompatibility or bugs that can hurt the performance of one or more variations. This validity threat—called the Instrumentation Effect—means skewed A/B test results (e.g., false positives). 

With the exception of the simplest tests, it’s best to have developers code your variations and test them thoroughly for cross-browser and cross-device compatibility. We never skip this step at SplitBase; it’s been instrumental in making sure experiments launch bug-free and can get us accurate insights as quickly as possible. 

4. Let your test run 

Don’t rush to end a test just because you’ve reached your sample size targets and your A/B testing tool says you’ve got statistically significant results. Most tests should last at least three to four weeks to allow for regression to the mean. In other words, to give the large fluctuations that often appear right after a test launches time to settle down.

There’s no hard and fast rule on when to end a test, but we recommend waiting until you hit all of the following targets (a simple check covering all four is sketched after the list):

  • At least 100 conversions per variation
  • Required sample size reached
  • Three to four full weeks of testing
  • At least 95% statistical significance 
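
Here’s a minimal sketch of that checklist, assuming you’ve pulled visitor and conversion counts per variation from your testing tool. The counts, dates, and required sample size are illustrative, and the significance check uses a standard two-proportion z-test from statsmodels rather than any specific tool’s stats engine.

```python
# A minimal sketch of the stopping checklist above; all figures are illustrative.

from datetime import date

from statsmodels.stats.proportion import proportions_ztest

start_date = date(2024, 5, 1)
today = date(2024, 5, 30)
required_sample_per_arm = 16_000  # from your sample size calculation

# (conversions, visitors) per variation; illustrative numbers.
control = (480, 16_250)
variant = (560, 16_180)

_, p_value = proportions_ztest(
    count=[control[0], variant[0]],
    nobs=[control[1], variant[1]],
)

checks = {
    "100+ conversions per variation": min(control[0], variant[0]) >= 100,
    "required sample size reached": min(control[1], variant[1]) >= required_sample_per_arm,
    "3-4 full weeks of testing": (today - start_date).days >= 21,
    "95%+ statistical significance": p_value <= 0.05,
}

for name, passed in checks.items():
    print(f"{'OK     ' if passed else 'NOT YET'}  {name}")

if all(checks.values()):
    print("All targets hit; you can consider ending the test.")
```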

5. Analyze the results

You can then compare key metrics across the different versions of your page. Just be sure you choose metrics that tie directly to business goals like increasing revenue rather than focusing on less reliable or valuable metrics like click-through rate (CTR). 
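
For instance, a quick comparison of revenue per visitor across variations tells you far more about business impact than CTR ever will. The figures below are purely illustrative.

```python
# A minimal sketch of comparing a revenue-tied metric across variations.
# Visitor and revenue figures are illustrative assumptions, not real results.

variations = {
    "control":   {"visitors": 16_250, "revenue": 23_400.00},
    "variant_a": {"visitors": 16_180, "revenue": 26_100.00},
}

for name, data in variations.items():
    revenue_per_visitor = data["revenue"] / data["visitors"]
    print(f"{name:10s} revenue per visitor: ${revenue_per_visitor:.2f}")
```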

To win more consistently, rely on the Testing Trifecta

Split testing is deceptively simple; there are many A/B testing mistakes you can make if you’re not careful. This is why it’s so important to have and stick to a proven A/B testing process (and to enlist professional help if you don’t have the comfort level or resources to execute it properly). 

Mistakes like guessing at what test ideas to pursue and not doing quality assurance on tests can cost you big time. Low test quality means your efforts won’t make a meaningful impact or clue you in on what works and what doesn’t. Plus, you’ll waste time and money in the process.

On the flip side, when you leverage the Testing Trifecta to guide your website optimization program, you combine the best of your qualitative and quantitative insights to generate solid, data-driven hypotheses. You’ll see a higher success rate from your A/B tests, and, subsequently, your revenue will rise.

To chat in more detail about how to implement proper optimization methodology and what results you can expect from doing so, get in touch. Our team will be happy to put together a free proposal for you and answer any questions you have.