The New Year is here and I’m sure you have a little more room in the ‘resolutions’ department. Here are three optimization bad habits that you should break in 2015.
1. Stop Calling Tests Too Early
Calling tests too early is the testing equivalent of eating junk food. We all know we should stop doing it, but it is just so darn satisfying. Seeing your winning variation absolutely crush the control is exciting, but the start of a test is a very volatile period; the numbers will level out, and sometimes not to your liking.
Here’s a screenshot of how volatile a test is during its first few days. Look at how much changes over time!
Not only will volatility impact your A/B test in the early stages, but the drastic swings between variations can make the test look ready to call long before it is.
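To see why peeking early is dangerous, here’s a quick simulation (a sketch, with made-up traffic numbers) of running A/A tests, where both variations are identical, and declaring a winner the first day a naive significance check fires:

```python
import random

random.seed(1)

def peeking_false_positive_rate(n_tests=200, daily_visitors=200, days=30, z=1.96):
    """Simulate A/A tests (both variations identical, 5% conversion rate)
    and declare a 'winner' the first day a naive z-test crosses 95%
    significance, mimicking the early-calling habit."""
    false_positives = 0
    for _ in range(n_tests):
        a_conv = b_conv = a_n = b_n = 0
        for _ in range(days):
            for _ in range(daily_visitors):
                a_n += 1
                a_conv += random.random() < 0.05
                b_n += 1
                b_conv += random.random() < 0.05
            # Naive two-proportion z-test, checked every single day
            pooled = (a_conv + b_conv) / (a_n + b_n)
            se = (pooled * (1 - pooled) * (1 / a_n + 1 / b_n)) ** 0.5
            if se > 0 and abs(a_conv / a_n - b_conv / b_n) / se > z:
                false_positives += 1  # called a "winner" that doesn't exist
                break
    return false_positives / n_tests

print(f"False positives with daily peeking: {peeking_false_positive_rate():.0%}")
```

Even though neither variation is actually better, checking daily and stopping at the first significant result declares a phantom winner far more often than the 5% the significance level suggests.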
Here’s another example: can you tell me what’s wrong with this test?
The sample size is way too small. There are only 655 total visits and 5 conversions across four variations. Obviously, this isn’t something we should roll out, but the testing tool sure thinks it’s worth making live*.
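To make that concrete, here’s a rough sketch in Python. The screenshot doesn’t show the per-variation split, so the numbers below are an assumed roughly even split of those 655 visits and 5 conversions; the point is how wide the uncertainty is:

```python
import math

def wilson_interval(conversions, visitors, z=1.96):
    """Wilson score interval for a conversion rate at ~95% confidence."""
    if visitors == 0:
        return (0.0, 1.0)
    p = conversions / visitors
    denom = 1 + z**2 / visitors
    centre = (p + z**2 / (2 * visitors)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / visitors + z**2 / (4 * visitors**2))
    return (max(0.0, centre - margin), min(1.0, centre + margin))

# Assumed split of the ~655 visits / 5 conversions (not the real per-variation
# numbers, which the screenshot doesn't break out):
for name, conv, visits in [("Control", 1, 164), ("B", 3, 164), ("C", 1, 164), ("D", 0, 163)]:
    lo, hi = wilson_interval(conv, visits)
    print(f"{name}: {conv}/{visits} -> 95% CI {lo:.1%} to {hi:.1%}")
```

Every interval overlaps every other one, so at this sample size no variation can honestly be called a winner, no matter what the dashboard says.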
How can you avoid this? All you need to do is create a test schedule and stick to it.
2. Stop Testing Unworthy Pages
Testing is a great thing; it validates all of our hard work with cold hard data. However, not all pages are worth testing. That’s right: sometimes a test just isn’t the appropriate approach. Here are a few situations where you shouldn’t be testing:
When you don't have enough traffic
This is where most people get it wrong. Testing on low-traffic pages just isn’t practical: running a test for months on end wastes testing opportunities and your resources.
If you have a low-traffic page you can’t test it, but you can still optimize. You’ll need to rely on qualitative data, heuristic analysis, and maybe some inspiration from competitors to optimize your site. From there you can use sequential testing to verify what’s working. Remember – sequential testing doesn’t take time into account. Segment your data to get a closer look at your data sets – averages can lie.
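A back-of-the-envelope calculation shows why low-traffic tests drag on. This sketch uses the standard normal-approximation sample-size formula for a two-proportion test, with hypothetical numbers (a 2% base conversion rate and 50 daily visitors per variation):

```python
import math

def visitors_needed(base_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Rough per-variation sample size for a two-sided two-proportion test
    at 5% significance and 80% power. Normal approximation; treat the
    result as a ballpark, not a guarantee."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    delta = abs(p2 - p1)
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / delta) ** 2
    return math.ceil(n)

# Hypothetical low-traffic page: 2% base rate, hoping to detect a 20% lift.
n = visitors_needed(0.02, 0.20)
daily_visits_per_variation = 50  # assumed traffic split across A and B
print(f"~{n} visitors per variation -> ~{n / daily_visits_per_variation:.0f} days")
```

At these assumed numbers the test would need to run for well over a year, which is exactly why a low-traffic page is better served by qualitative research than by a test.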
When traffic is volatile
Media campaigns will increase your page traffic, but the results may not be scalable. An influx of untested traffic, or traffic that isn’t representative of your standard visitor, doesn’t provide data that is useful outside of that campaign.
Similarly, holiday traffic is another volatile period. Your visitors’ intentions and needs are drastically different from what they are during the rest of the year.
If you do run tests during holidays or media campaigns, they will provide short-term campaign learnings. The winning variations should be re-tested once traffic normalizes, before you roll them out to all of your visitors.
Overly targeted segments
Overly targeted segments lower the amount of traffic in the test set and aren’t representative of all your visitors.
3. Stop A/B Testing Too Many Elements At Once
Keep your A/B tests simple! The more you change in a single variation, the higher the likelihood you’ll learn nothing from your test. If you change the headline, image, and offer on your B variation and it wins, you’ll have no idea why!
A/B tests are great for testing individual elements, or for testing radical redesigns as a proof of concept. However, changing multiple elements on the same template is not a redesign; it is a bad test.
If you want to test more than one element, add a new variation – don’t muddle your test variation. A/B/C/…/N tests are a great way to look at how individual elements perform on a page, but don’t add any insight into how the elements work together.
If you have the traffic, you can start testing more than one element at once with multivariate (MVT) testing. Let me reiterate: these tests require a lot of traffic, and each additional element multiplies the traffic requirement. If an A/B test is a local train station, then an MVT test is Grand Central Station.
MVT tests allow you to finally test multiple elements, and see both which elements worked and how the elements contributed to the overall conversion rate. An MVT test is not an A/B/C/…/N test! Don’t confuse the two!
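To put some numbers behind the Grand Central analogy: a full-factorial MVT test has to fill every combination of element variants with visitors, so the traffic requirement multiplies with each element you add. The visitor counts below are hypothetical:

```python
def mvt_combinations(variants_per_element):
    """Number of cells in a full-factorial MVT test: the product of the
    variant counts for each element being tested."""
    total = 1
    for v in variants_per_element:
        total *= v
    return total

visitors_per_cell = 1000  # assumed minimum sample per combination

# e.g. [3, 3, 2] = a headline with 3 versions, an image with 3, a CTA with 2
for elements in ([2, 2], [2, 2, 2], [3, 3, 2], [3, 3, 3, 2]):
    cells = mvt_combinations(elements)
    print(f"{elements}: {cells} combinations -> ~{cells * visitors_per_cell:,} visitors")
```

Two elements with two versions each is only 4 cells, but four elements quickly push the test into tens of thousands of required visitors, which is why MVT is reserved for high-traffic pages.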
On top of reducing the number of elements in your A/B tests, we really need to stop picking unnecessary test items. Small-scale changes can see incremental gains, but they may not do anything to better the user experience. Larger-scale changes aimed at simplifying your processes will work wonders on conversion rates.
So those are the top three bad CRO habits to change in 2015. If you got through this post and said ‘Wow, I don’t do any of these things’, I urge you to check out my post on the top conversion killers – I’m sure there is something there that will inspire some change in 2015.
*It’s important to note that this is a symptom of the stats being calculated in a vacuum. This happens with all testing tools, not just the one in the screenshot. It’s your job to interpret these numbers to avoid making costly mistakes.