4 Common A/B Testing Mistakes (And How to Fix Them)

When you’re creating content for the web, it’s easy to make assumptions about what you think your audience might respond to — but that’s not necessarily the right mentality.

Enter A/B testing: one of the easiest and most popular forms of conversion rate optimization (CRO) testing known to marketers. And while many businesses have seen the value in using this type of validation to improve their decision making, others have tried it, only to be left with inconclusive results — which is frustrating, to say the least.

The trouble is, small mistakes made during A/B testing can lead to round after round of incremental optimizations that fail to produce meaningful results. To combat that, I’ve outlined some of the most common A/B testing mistakes (as well as their remedies) below. These tips are designed to help you keep your testing plans on track so you can start converting more visitors into customers, so let’s dive in …

4 Common A/B Testing Mistakes (And How to Fix Them)

Problem #1: Your testing tool is faulty.

Popularity is a double-edged sword — it’s true for high schoolers and it’s true for A/B testing software.

The ubiquity of A/B testing has led to a wide range of awesome, low-cost software for users to choose from, but it’s not all of equal quality. Of course, different tools offer different functionality, but there can also be trickier differences between tools. And if you’re unaware of these differences, your A/B tests may be in trouble before you even get started.

For example, did you know that some testing software can significantly slow down your site? This decreased speed can have a harmful impact on your site’s SEO and overall conversion rates.

In fact, on average, just one second of additional load time will result in an 11% decrease in page views and a 7% decline in conversions. This creates a nightmare scenario where the websites you were hoping to improve through A/B testing are actually hindered by your efforts.
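To make that concrete, here’s a quick back-of-the-envelope calculation in Python. The traffic and conversion figures are hypothetical (plug in your own analytics numbers), and treating the two effects as independent is a simplification:

```python
# Back-of-the-envelope impact of one extra second of load time.
# All figures below are hypothetical; substitute your own analytics numbers.
monthly_page_views = 100_000
baseline_conversion_rate = 0.03  # 3% of page views convert

views_after_slowdown = monthly_page_views * (1 - 0.11)       # ~11% fewer views
rate_after_slowdown = baseline_conversion_rate * (1 - 0.07)  # ~7% lower conversion rate

baseline_conversions = monthly_page_views * baseline_conversion_rate
# Simplification: the two effects are treated as independent.
slower_conversions = views_after_slowdown * rate_after_slowdown

print(f"Conversions before the slowdown: {baseline_conversions:,.0f}")
print(f"Conversions after the slowdown:  {slower_conversions:,.0f}")
print(f"Conversions lost each month:     {baseline_conversions - slower_conversions:,.0f}")
```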

It gets worse: Your selection of A/B testing software can actually impact the results of your tests, too. Entrepreneur and influencer Neil Patel found that the A/B testing software he was using was reporting significant differences, but when he implemented the new page, his conversions didn’t change. The problem turned out to be a faulty testing tool.

So with all these hidden pitfalls, what can you do to make sure your A/B testing software is working fine?

The Fix: Run an A/A test.

Prior to running an A/B test, you should run an A/A test with your software to ensure it’s working without impacting site speed and performance.

For the uninitiated, an A/A test is very similar to an A/B test. The difference is that in an A/A test both groups of users are shown the exact same page. That’s right, you need to literally test a page against itself. While this may seem silly at first, by running an A/A test you will be able to identify any distortionary effects caused by your testing software.

An A/A test is the one time you want your results to be boring. If you see conversion rates drop as soon as you start testing, then your tool is probably slowing down your site. If you see dramatic differences between the results for the two pages, then your software is likely faulty.
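If you’d like a feel for what “boring” should look like, here’s a minimal sketch in Python that simulates an A/A test, assuming both groups see the identical page and share the same underlying conversion rate (all numbers are made up for illustration):

```python
import random

# Simulate an A/A test: both groups see the exact same page, so any gap
# between the two measured rates is pure noise (or, on a live site, a
# problem with the testing tool itself).
random.seed(42)

true_conversion_rate = 0.05   # hypothetical baseline conversion rate
visitors_per_group = 5_000    # hypothetical traffic split per group

def simulate_group(n, rate):
    """Count how many of n simulated visitors convert at the given rate."""
    return sum(random.random() < rate for _ in range(n))

conversions_a1 = simulate_group(visitors_per_group, true_conversion_rate)
conversions_a2 = simulate_group(visitors_per_group, true_conversion_rate)

print(f"Group A1 conversion rate: {conversions_a1 / visitors_per_group:.2%}")
print(f"Group A2 conversion rate: {conversions_a2 / visitors_per_group:.2%}")
```

On a healthy setup the two measured rates should land close together; a large, persistent gap in a real A/A test is a signal to suspect the tool.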

Problem #2: You stop testing at the first significant result.

This is the statistical equivalent of taking your ball and going home. Unfortunately, when it comes to A/B testing, stopping your test as soon as you see a statistically significant result isn’t just bad sportsmanship; it also produces completely invalid results.
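To see why, here’s a rough simulation sketch in Python: two identical variations (no real difference), a simple two-proportion z-test, and a “peek” at the results after every batch of visitors. The batch size, conversion rate, and significance threshold are all assumptions chosen purely for illustration:

```python
import math
import random

random.seed(0)

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a simple two-proportion z-test."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def run_test(rate=0.05, batch=500, peeks=10):
    """A/B test with NO real difference, checking significance after each batch."""
    conv_a = conv_b = n = 0
    declared_winner_early = False
    for _ in range(peeks):
        conv_a += sum(random.random() < rate for _ in range(batch))
        conv_b += sum(random.random() < rate for _ in range(batch))
        n += batch
        if p_value(conv_a, n, conv_b, n) < 0.05:
            declared_winner_early = True  # we'd have stopped and "shipped" here
    significant_at_end = p_value(conv_a, n, conv_b, n) < 0.05
    return declared_winner_early, significant_at_end

runs = 300
early = end = 0
for _ in range(runs):
    e, f = run_test()
    early += e
    end += f

print(f"'Winner' declared at some peek:    {early / runs:.0%} of runs")
print(f"Significant at the planned finish: {end / runs:.0%} of runs")
```

Even though the two variations are identical, runs of this sketch typically flag a “winner” at some peek far more often than the roughly 5% false positive rate you’d expect from checking only at the planned finish.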

Many tools encourage early stopping by allowing users to end a test the moment statistical significance has been hit. But if you want to drive real improvement to…
