If you want to improve site conversion, A/B and Multivariate Testing are invaluable tools. What is more, getting started is not as hard as you think.
I have written before about how websites should not be launched and then abandoned; we need to continually optimise and improve them over time to maximise their effectiveness. But how do you work out what you need to do to improve your site?
One tool in our arsenal is "split testing", consisting of A/B and multivariate tests. But, despite how valuable these tools are, few websites make good use of them.
Many website owners either feel that their website isn’t big enough to justify this kind of testing or that it is just too difficult to do. Both of these reactions are wrong, as I will explore in this post.
Let’s start by looking at what A/B and multivariate testing are and how they differ from one another.
A/B and Multivariate Testing Defined
Imagine for a moment you have identified a potential problem with your site or an area that needs improving. For example, perhaps you suspect the copy on your newsletter signup call to action is putting people off.
You come up with many different versions of the copy that might help solve the problem. But how do you know which one will work the best? Indeed, how do you even know if any of them will be better than what you already have?
That is where A/B and multivariate testing comes in. You can show different versions to a subset of the users visiting your website. The test then monitors which of your versions performs the best, helping you decide what copy to roll out to everybody.
The only difference between A/B and multivariate testing is that while an A/B test changes one element at a time (such as your newsletter sign-up copy), a multivariate test alters multiple components at once (such as the sign-up copy and the subscribe button).
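To make that distinction concrete, here is a minimal Python sketch. None of this reflects any particular tool's API; the function name, the copy, and the hashing scheme are all illustrative. The idea is that each visitor is deterministically bucketed into one version, an A/B test varies a single element, and a multivariate test crosses every combination of elements:

```python
import hashlib
from itertools import product

def assign_variant(user_id: str, variants: list[str], salt: str = "exp1") -> str:
    """Deterministically bucket a user into one variant via hashing,
    so the same visitor always sees the same version."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# A/B test: one element, two versions of the sign-up copy.
ab_variants = ["Join our newsletter", "Get weekly tips by email"]

# Multivariate test: every combination of copy and button label.
buttons = ["Subscribe", "Sign me up"]
mv_variants = [f"{copy} / {button}" for copy, button in product(ab_variants, buttons)]

print(assign_variant("user-42", ab_variants))
print(len(mv_variants))  # 4 combinations from 2 x 2 elements
```

Note how quickly multivariate tests multiply: two versions of two elements already means four combinations to gather data for, which matters later when we talk about traffic.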
All of this might sound complicated to set up, but in fact, getting started is easy.
Getting Started is Easy
There are many paid tools, such as Visual Website Optimiser, that make this kind of testing straightforward to implement and offer a suite of useful functionality. However, if you are just starting out, I recommend Google Optimise. It is free, simple to implement, and a great way to try out this kind of testing before investing more heavily.
Next, it is time to decide on what you are going to test and how you are going to measure success. For example, in our newsletter sign up test, the measure of success might be clicking on the subscribe button or reaching the thank you page.
Once you have told Google Optimise (or whatever system you have chosen to use) what your measure of success is, you now decide what page you wish to change.
Generally, at this point, it will provide you with some way to directly edit the page in question and create as many different versions of the page as you would like.
Finally, you can specify the percentage of users you wish to send to each variation. It may be that you only want to send a small percentage of users to a test variation in case it performs worse than the current live version. However, the fewer users you show a variation, the longer it will take to get results.
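As a rough sketch of how a weighted split works under the hood (the percentages and variation names are made up, and real testing tools handle this for you), Python's `random.choices` can send, say, only 10% of visitors to the riskier new version:

```python
import random

def choose_variation(weights: dict[str, float]) -> str:
    """Pick a variation for a visitor according to the traffic split,
    e.g. sending only 10% of visitors to the untested new version."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Cautious split: 90% see the current page, 10% see the new copy.
split = {"original": 0.9, "new-copy": 0.1}

# Simulate 10,000 visitors to see the split in action.
counts = {name: 0 for name in split}
for _ in range(10_000):
    counts[choose_variation(split)] += 1
print(counts)  # roughly 9,000 original / 1,000 new-copy
```

The trade-off is visible in the numbers: with only a tenth of visitors seeing the variation, it will gather data ten times more slowly.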
The Problem of Not Enough Traffic
That is a significant weakness of this kind of testing. For us to be sure which version of our newsletter sign up copy is performing the best we will need a statistically significant number of subscribers. That means we will have to wait until enough people have signed up to be confident in the winner.
The length of time you will need to wait is dependent on the amount of traffic and the conversion rate your site sees. A website like Amazon might only need to run a test for a few minutes to gather enough data, but on many websites it could mean running the test for weeks.
That can make this testing difficult on some sites. However, it is not impossible. You only need to know how to get the most out of A/B and multivariate testing.
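To make "statistically significant" concrete: one common way to judge a winner is a two-proportion z-test on the conversion rates of the two versions. Here is a minimal Python sketch, with made-up numbers purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: returns the two-sided p-value for the
    difference in conversion rate between versions A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A 2.0% vs 2.8% sign-up rate after 5,000 visitors to each version:
p = z_test(100, 5000, 140, 5000)
print(f"p-value: {p:.4f}")  # below 0.05 is conventionally "significant"
```

Notice that even a hypothetical 40% relative improvement needed thousands of visitors per version before the result became trustworthy; smaller differences need far more.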
Test Close to the Point of Conversion
One way of overcoming the problem of low traffic is to focus on testing elements closely linked to the successful action. For example, changing the text on our newsletter sign-up form is intimately connected with the success criterion of pressing the subscribe button. Testing the impact of a blog post title on newsletter sign-ups, by contrast, is only loosely related, so the measured conversion rate will be far lower and results will take much longer to gather.
Focus on Micro-Conversions
Another approach is to focus on micro-conversions. Instead of making your success criterion something that doesn’t happen very often (like newsletter sign-up), you could look at a smaller, more common action. For example, if you wanted to test those blog post titles, you may be better off testing how many users click to view the post, rather than whether they go on to sign up.
Limit the Number of Variations
Also, limit the number of variations you use on a low traffic website. The more differences you create, the longer it will take for you to get statistically significant results.
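To see why extra variations stretch a test out, a standard sample-size approximation shows roughly how many visitors each variation needs before a result is trustworthy. This Python sketch uses illustrative numbers (a 2% base sign-up rate and a hoped-for 20% relative lift), and the formula is the usual normal approximation for comparing two proportions:

```python
from statistics import NormalDist

def sample_size(base_rate: float, lift: float, alpha: float = 0.05,
                power: float = 0.8) -> int:
    """Rough visitors needed per variation to detect a relative lift in
    conversion rate (two-sided test, normal approximation)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)  # significance threshold
    z_b = nd.inv_cdf(power)          # desired statistical power
    p1, p2 = base_rate, base_rate * (1 + lift)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return int(var * (z_a + z_b) ** 2 / (p2 - p1) ** 2) + 1

# A 2% sign-up rate and a hoped-for 20% relative lift:
print(sample_size(0.02, 0.20))  # tens of thousands of visitors per variation
```

Because that figure is *per variation*, every extra version you add multiplies the total traffic the test consumes, which is exactly why low-traffic sites should keep variations to a minimum.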
That said, if you have a highly trafficked website, the opposite is true. That is because the more variations you test, the higher the likelihood you will find a version that has a more significant impact on conversion.
Focus on Big Changes
Talking of impact, try and focus on changes that will have a significant effect on conversion. Google famously tested 15 different shades of blue to find out which one performed the best, but you are not Google.
Fiddling around with small changes can occasionally pay off, but testing something big is far more likely to have a noticeable effect. So focus your tests on areas of the site that visitors consider essential and that are therefore more likely to have a significant impact. Be brave!
But once again, if you have a highly trafficked website, this kind of large-scale change is not such a good idea. On high-traffic sites the stakes of a failed variation are higher, and because results accumulate quickly, sweeping changes are not necessary to get the level of results you need.
The other downside of these significant changes is that it can be hard to know which element you changed is responsible for the conversion increase. Was it the change in our newsletter copy or the changes we made to that subscribe button?
If you are forced to take the approach of making significant changes, you can use usability testing to answer these kinds of questions. In fact, whatever your traffic levels, supplementing A/B and multivariate testing with other approaches is always a good idea.
Supplement Split Testing With Other Approaches
Without a doubt, A/B and multivariate testing are powerful tools, but they do have their limitations. Yes, they can show you how to increase conversion, but only through testing various versions. It falls to you to come up with those versions in the first place, and this testing approach doesn’t help with that.
In fact, it is not always immediately apparent why one version wins over another, especially when you are making significant changes. That doesn’t help you to learn and make future improvements.
That is why I like to supplement it with other forms of testing. In my eyes, at least, A/B and multivariate testing comes later in the process. I favour usability testing for identifying problems and testing prototypes because I learn more from the experience.
That said, A/B and multivariate testing should form the backbone of your efforts to encourage users to take action. It is also absolutely invaluable when different stakeholders have different opinions about what will work. With this form of testing, you can quickly try each different approach and see which one performs the best.
With that in mind, I would strongly encourage you to at least give A/B and multivariate testing a go. You have nothing to lose.