How to Use Data-Driven Split Tests to Boost Ecommerce Sales

Split tests are a powerful way to optimize your ecommerce website, but don’t be fooled by how simple they may look at first glance.

Guesswork won’t cut it when it comes to split tests. You need data and due diligence to get trustworthy results. Developing a proven process is the first step.

We’ll walk you through the basics of creating a split-testing process, how to analyze your test results, and which pitfalls to look out for. And we’ll share five ecommerce split tests to start optimizing your customer experience and boosting your bottom line.

Why are split tests essential for ecommerce?

Split tests, also referred to as A/B tests, let you compare two different versions of a website element by splitting the traffic to each variation. This shows you which variation performs better with your target audience based on data.

For example, you may create a variation of a landing page by changing the content in the hero section of the page, or maybe even your offer. You’d then split your website traffic so that half of your visitors see the original landing page (the control) and half see the new version (the variation).
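To make the traffic split concrete, here’s a minimal sketch of how a split-testing tool might assign visitors to buckets. It’s an illustration only: the experiment name and visitor ID are hypothetical placeholders, and real tools layer targeting and analytics on top of this idea.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "landing-hero-test") -> str:
    """Deterministically assign a visitor to 'control' or 'variation'."""
    # Hashing the experiment name plus the visitor ID gives each visitor
    # a stable bucket, so they see the same version on every visit.
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # evenly distributed number from 0 to 99
    return "control" if bucket < 50 else "variation"  # 50/50 split

print(assign_variant("visitor-123"))  # always the same bucket for this visitor
```

Deterministic assignment matters: if a returning visitor bounced between versions, their behavior would contaminate both samples.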

Once your test reaches the required sample size, you analyze the results to see whether either version performed significantly better.

Because they’re based on data, split tests leave you with a clear understanding of what’s helping customers convert—and what isn’t.

For example, Brad Shorr, director of content and social media for Straight North, ran an A/B test comparing two pay-per-click ad CTAs. Shorr believed variant A would outperform variant B:

  1. Get $10 off the first purchase. Book online now!
  2. Get an additional $10 off. Book online now.

But even this seasoned marketing expert was surprised when version B doubled the click-through rate of the ad and was a clear winner.

Along with conversion rate optimization split tests, which we’ll cover here, marketers may also use SEO split tests or email subject line split tests. Additionally, multivariate testing, which tests changes to multiple elements at once, is another way to optimize different versions of a web page.

Benefits of ecommerce split tests

Conversion rate optimization split tests help you optimize your website based on actual customer data, driving improvements across key ecommerce metrics:

  • Increased average order value: Split testing your product pages can help you spot optimizations that encourage customers to spend more for each order they place.
  • Improved customer lifetime value: A more enjoyable shopping experience often leads to repeat customers who spend more time and money with your brand.
  • Reduced customer churn and higher retention: Split tests allow you to find the best way to address pain points that cause customers to abandon your brand.
  • Lower bounce rates: Optimizing your website reduces the chances that customers bounce, or leave after viewing only one web page, and results in more potential customers viewing your products and offerings.
  • Boosted conversion rates: Split testing your landing pages and product pages uncovers improvements that increase the chances visitors convert, whether that’s your overall purchase conversion rate, add-to-cart rate, or another metric. VWO found that the average conversion rate improvement for brands that ran A/B tests was 49%.

These benefits add up to a better overall customer experience, which can increase revenue by up to 80%, according to Zippia.

How to define objectives for your split tests

Before diving into your first split test, it’s crucial to define your objectives. You can’t simply pick an element to change, run a test, and expect to see positive results.

Instead, you need to prioritize which changes you test to avoid wasting valuable time and resources. Start by specifying what you want to achieve, whether it’s improving conversion rates, addressing critical website issues that have a significant impact on your customers, or improving your user experience in other ways.

Why you need a data-driven hypothesis

Be cautious of one-size-fits-all best practices for split testing. What works for one business may not work for another.

It’s essential not to jump right in, but instead start with analytics to identify areas of opportunity you may want to split test. These might include pages or elements that you could optimize for the biggest revenue gains. 

At SplitBase, we start with analytics to find split test opportunities. Then, we use human feedback, or qualitative data like surveys and testimonials, to add context to the quantitative analytics data we’ve gathered. This allows us to understand how we should change or improve those pages or elements.

Finally, we complete the last step of our Testing Trifecta by coming up with a hypothesis, or potential solution that’s based on the insights we collected in the previous two steps.

That’s where split tests come in. We test our hypothesis to see what works as well as what doesn’t work, then iterate and start the process all over. We view each split test as an experiment—and even “failed” experiments provide valuable data.

Aim for statistical significance: Sample size and test duration

We recommend running split tests until you reach the following goals:

  • Meet or exceed the required sample size, which you can calculate ahead of time with a sample size calculator like VWO’s A/B Test Statistical Significance Calculator. By comparing the sample size to your average number of unique monthly visitors, you can also estimate how long your test needs to run to achieve statistical significance. 
  • Test for two to four weeks to ensure your data is consistent over time, as abnormal events like holidays, special promotions, and even different days of the week can skew your results.
  • As a general rule of thumb, achieve at least 95% statistical significance—but there are exceptions to this recommendation. Check with your CRO agency for guidance.

Statistical significance is the level of confidence that your split test results aren’t due to randomness or error. For example, if your test achieves 95% statistical significance, there’s only a 5% chance you’d see a difference this large between variations if they actually performed the same. In other words, the observed difference very likely reflects a real change in performance.

However, it’s important to know that statistical significance doesn’t eliminate every source of uncertainty. You also need to take test duration, potential external influences, and sample size into consideration.
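If you want to sanity-check a calculator’s output, the required sample size per variant can be approximated with the standard two-proportion formula. Here’s a minimal sketch; the baseline conversion rate and minimum detectable effect are example inputs you’d replace with your own numbers.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float,
                            min_detectable_effect: float,
                            alpha: float = 0.05,
                            power: float = 0.80) -> int:
    """Approximate visitors needed per variant for a two-proportion test.

    baseline_rate: current conversion rate (e.g., 0.03 for 3%)
    min_detectable_effect: relative lift to detect (e.g., 0.10 for +10%)
    alpha: significance threshold (0.05 corresponds to 95% significance)
    power: probability of detecting the lift if it's real
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A 3% baseline rate with a 10% relative lift needs roughly 53,000
# visitors per variant at 95% significance and 80% power.
print(sample_size_per_variant(0.03, 0.10))
```

Comparing that number to your average monthly unique visitors gives you the run-time estimate mentioned above.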

Don’t skip QA

Just like a new website, split tests require quality assurance (QA). Don’t simply build tests out and assume everything works correctly based on the preview in your split-testing tool. 

In fact, visual and no-code test editors often introduce errors like browser or device incompatibility, which threatens the validity of your A/B test and skews your results.

Run the following QA checks before publishing your split tests:

  • Share your test with a sample audience for feedback
  • Check split tests on multiple devices and browsers
  • Walk through your entire sales funnel
  • Click on all important buttons and CTAs

You should continue to QA your split test even after it goes live—at least every few days. Even a small change on your website could break the test and waste your resources. Other risks, like the launch of a new digital marketing campaign or a change in pricing, can skew your results. 
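Some of these recurring checks can be scripted. Here’s a minimal sketch using Playwright that confirms a page loads and its main CTA is visible and clickable in three browser engines; the URL and selector are hypothetical placeholders, and a real QA pass would cover many more scenarios.

```python
# pip install playwright && playwright install
from playwright.sync_api import sync_playwright

URL = "https://example-store.com/products/sample"  # hypothetical page under test
CTA_SELECTOR = "#add-to-cart"                      # hypothetical CTA selector

with sync_playwright() as p:
    for engine in (p.chromium, p.firefox, p.webkit):
        browser = engine.launch()
        page = browser.new_page()
        page.goto(URL)
        cta = page.locator(CTA_SELECTOR)
        assert cta.is_visible(), f"CTA missing in {engine.name}"
        cta.click()  # confirm the button responds without throwing errors
        print(f"{engine.name}: CTA OK")
        browser.close()
```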

This is why partnering with a professional split-testing and CRO agency like SplitBase is a good idea. An agency comes equipped with specialized tools to conduct QA using multiple scenarios.

5 ecommerce split test examples

Here are a few ecommerce split test examples that demonstrate how this optimization method can achieve real results:

1. Text vs. visual navigation links

SplitBase recently ran an A/B test on a client’s website navigation. We wanted to understand why the visually highlighted “Shop Bundles” button wasn’t getting many clicks compared to other links in the navigation.

Our hypothesis was that banner blindness, or when an element visually stands out so much that visitors subconsciously skip over it, was the cause.

With this in mind, we ran an A/B test comparing the performance of the original website navigation against a variation using all text links. If your first guess was that the visually highlighted links performed better, you’d be wrong (and not alone!). The text-link-only variation surprisingly earned 130% more clicks.

2. Using customer quotes as headlines vs. brainstorming copy

If you’ve got a handful of reviews for your products or brand, you may find that a juicy customer quote is the best landing page headline option.

SplitBase took this approach when testing a new landing page for beauty brand Amika, and the customer quote headline was by far the winner. This is likely because the headline came directly from an Amika customer review: the language resonates with customers, and the review adds social proof to the landing page.

3. In-depth product details vs. simple details

Oftentimes, customers want to know more about your products, including how and when to use them.

One example comes to mind: A fashion client’s product line included variations in material, and customers had a lot of questions about the fabrics. They weren’t just interested in how the products looked; they also wanted to know whether certain materials were better for different seasons or for playing sports.

Our A/B test introduced a new product page design with a fabric info chart. We tested a number of chart variations before landing on the final version. This new visual element increased conversion rates by 26.8% by improving the customer experience and removing barriers to purchase.

4. Always-on free shipping vs. limited-time offers

Do your customers respond better to an always-on free shipping offer that kicks in when they spend a certain amount, or to a special limited-time free shipping offer?

Before running this test, dig into your data and survey customers to see if they understand the current requirements to earn free shipping. If they don’t, you may need to test variations of your free shipping CTA to see whether one converts better than the other.

For example, you could test a static “free shipping” threshold against a dynamic one:

“Free shipping on orders over $100” vs. “Spend $16.50 more to earn free shipping.”
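The dynamic message is simple arithmetic: subtract the cart total from your free shipping threshold. A minimal sketch, assuming a hypothetical $100 threshold:

```python
FREE_SHIPPING_THRESHOLD = 100.00  # hypothetical threshold in dollars

def free_shipping_message(cart_total: float) -> str:
    """Return the dynamic free shipping prompt for the current cart."""
    remaining = FREE_SHIPPING_THRESHOLD - cart_total
    if remaining <= 0:
        return "You've earned free shipping!"
    return f"Spend ${remaining:.2f} more to earn free shipping."

print(free_shipping_message(83.50))  # -> "Spend $16.50 more to earn free shipping."
```

You could then split traffic between the static and dynamic copy and compare add-to-cart or checkout completion rates.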

5. Bundle builder vs. static bundles 

Bundles are an effective way to personalize your ecommerce site and encourage customers to make additional purchases. While many brands offer static bundles where the included items are preselected, you can also use a bundle builder to further personalize the shopping experience.

Adding a bundle builder to Birthdate.co’s website greatly improved conversions, average order value, and overall return on marketing spend for the personalized gift brand. 

But bundles aren’t always the easy answer to improving conversions. Recently, SplitBase tested removing a bundle builder from a client’s website. Surprisingly, the results showed no changes in any key metrics.

Why? Bundle builders can add unnecessary complexity and give your customers a case of analysis paralysis. They’re also expensive to maintain, and the bundling process involves multiple steps; each step is another chance for customers to drop off and not convert. If each of three steps retains 90% of shoppers, for example, only about 73% will make it through all three.

Lesson learned: Don’t assume a more personalized experience with a bundle builder is the way to go. Instead, test it against static, prebuilt bundles. Additionally, test different bundle variations as well as pricing and other special offers to get solid evidence that supports your final decision.

Accelerate your ecommerce growth with our expert split-testing services

Split tests aren’t a magic bullet. They’re an ongoing process of sifting through data, experimenting, and learning. When you approach split tests strategically and consistently, they’re a powerful way to optimize your ecommerce website, improve customer experience, and drive sustainable growth.

If you don’t have the time or resources to dedicate to split testing, it’s a good idea to team up with an agency like SplitBase. We’ll help you craft data-driven hypotheses, QA and test variations, and then analyze the results to spot optimizations that resonate most with your customers. Enhance your DTC strategy with split tests and request your free SplitBase proposal today.