Testing different subject lines, layouts, copy, and images is key to your career college’s email marketing success, and today the most popular testing method is the A/B test. In a traditional A/B test, you split your audience down the middle into two groups. One group gets Variation A and the other gets Variation B. After a designated period of time or number of sends, you compare the results and determine a statistically significant winner. (Haven’t been determining statistical significance? Many free tools online make the calculation simple: https://www.kissmetrics.com/growth-tools/ab-significance-test/.)
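If you'd rather see what those free calculators are doing under the hood, here is a minimal sketch of the standard two-proportion z-test they typically run. The function name and the example click counts are hypothetical; plug in your own sends and clicks per variation.

```python
import math

def ab_significance(conversions_a, sends_a, conversions_b, sends_b):
    """Two-proportion z-test: returns (z, two-tailed p-value) for an A/B split."""
    p_a = conversions_a / sends_a
    p_b = conversions_b / sends_b
    # Pooled rate under the null hypothesis that both variations perform the same
    p_pool = (conversions_a + conversions_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical example: 40 clicks from 1,000 sends of A vs. 65 from 1,000 of B
z, p = ab_significance(40, 1000, 65, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, so B's lift is significant
```

A common convention is to call a winner when the p-value is below 0.05, i.e. 95% confidence that the difference isn't just chance.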
But what if you consistently fail to reach statistical significance? If your school has a small email list, or if your tests don't produce markedly different results, it can be challenging, if not impossible, to determine a winner with confidence. Here are some tips for conducting a meaningful A/B test when you don't have a large audience.
1: Test with clear intention
Make sure your test is based on a hypothesis. Going into a test, you should have a clearly defined problem you are trying to solve. Stating the problem and an assumption about its cause provides a solid starting point and will help you stay focused throughout the process. Once you have your hypothesis, articulate how you plan to solve the problem: know the metric you are trying to lift and how this test is meant to lift it.
Here’s one example you might encounter:
- Problem: Inquiries are down
- Hypothesis: Inquiries (and in turn enrollments) are down because the last touch of the lead nurture program is not effectively pushing students to enroll.
- Testing plan: Change the layout of the last touch lead nurture email. Make the copy more scannable and the call-to-action more enticing in order to increase click-through on the “request info” CTA.
In this example, the key metric is click-through and it’s clearly stated that they will try to lift this by changing the layout of the final lead nurture email.
Having clear intention will give you a better understanding of what is having an impact and will help you determine what to test.
2: Focus on key metrics
Focus on lifting metrics that tie directly to your bottom line. To get the most out of your testing, look to improve click-throughs or conversions, not opens. It's great to increase your open rate, but the real goal is to increase conversions. I recommend testing emails that have a historically high click-through or conversion rate. This way you start from a solid baseline of engagement, which gets you closer to reaching meaningful results. You can then use the results of these high-impact tests as a guide for changes to other emails that don't get enough engagement to support a significant test.
3: Test big
Make big changes when you have a small send list. Small tweaks tend to result in small lifts. You should focus your effort on large scale changes, like new layouts, over copy tweaks or color changes. Testing big changes will help you considerably when trying to reach statistical significance.
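To see why big changes matter, consider the sample size each variation needs before a lift becomes detectable. The sketch below uses the standard two-proportion sample-size approximation (95% confidence, 80% power); the function name and the example click-through rates are hypothetical illustrations, not benchmarks.

```python
import math

def required_sample_size(p_base, lift, alpha_z=1.96, power_z=0.84):
    """Approximate sends needed per variation to detect an absolute lift
    in conversion rate at 95% confidence and 80% power."""
    p_test = p_base + lift
    # Combined variance of the two conversion rates
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return math.ceil((alpha_z + power_z) ** 2 * variance / lift ** 2)

# Small tweak: hoping to move a 2% click-through rate to 2.2%
small_tweak = required_sample_size(0.02, 0.002)
# Big change: hoping to move 2% to 3%
big_change = required_sample_size(0.02, 0.01)
print(small_tweak, big_change)
```

With these illustrative numbers, detecting the small tweak takes tens of thousands of sends per variation, while the big change needs only a few thousand. That is the math behind testing big on a small list.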
Tried these tips and still not getting there? Move away from the traditional A/B model. If you don't have a big enough audience to split, try testing the versions sequentially. Run one version for a set period of time, then run the other version for the same period. This way each version gets 100% of your list, doubling your sample size and improving your odds of reaching statistical significance. (Just keep in mind that factors like seasonality or holidays can muddy a sequential comparison, so pick two comparable time periods.)