But before we dive in, you may be wondering "what is an AB test?" Good question! Put simply, it's when we split a segment of data (like customers) into two groups and give each group a different experience, to see which one they respond better to.
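The split itself can be as simple as random assignment. Here's a minimal sketch of that idea in Python; `ab_split` and the customer names are hypothetical, just for illustration:

```python
import random

def ab_split(customers, seed=42):
    """Randomly assign each customer to group A or B (illustrative helper)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    groups = {"A": [], "B": []}
    for customer in customers:
        groups[rng.choice(["A", "B"])].append(customer)
    return groups

groups = ab_split([f"customer_{i}" for i in range(1000)])
```

Each customer lands in exactly one group, and with a decent sample size the two groups end up roughly equal, which is what makes the comparison fair.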

So - why do we AB test?

You might initially think "to improve something", and I'd say you're partially correct. There are many reasons to test, but the main one is to reduce uncertainty. For example:

  • Does one piece of content perform better than another? We test because we're uncertain which.
  • We give people tests/exams to reduce our uncertainty about how capable someone is.

Recently I was consulting on an email campaign, and we were uncertain whether adding the webinar date to the subject line would improve registrations. We thought adding a date would help - but we didn't know this for sure.

So to test our thinking, we ran a straightforward AB test, comparing the open rates of these two subject line variations:

  • Register now for your webinar
  • Live webinar 4 May: register now

The result? It turns out the email without the date performed better on every metric. That's a valuable learning for our clients, and something they can apply going forward.
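How do you decide whether an open-rate difference like this is real rather than noise? One common approach (not necessarily what we used here) is a two-proportion z-test. Below is a self-contained sketch; the open and send counts are hypothetical, not the client's real data:

```python
import math

def two_proportion_ztest(opens_a, sent_a, opens_b, sent_b):
    """Two-sided z-test for a difference between two open rates."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    # pooled rate under the null hypothesis that both variants are equal
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: variant A opened 260/1000, variant B 220/1000
z, p = two_proportion_ztest(opens_a=260, sent_a=1000, opens_b=220, sent_b=1000)
```

A small p-value (conventionally below 0.05) suggests the difference in open rates is unlikely to be chance alone.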

When should you test?

There's an overhead to testing, so my guidelines for when to test would be:

  • When you’re uncertain of something. If you know something to be fact, there’s no need to test it.
  • When the potential payoff is high! This is a big one - it's easy to slip into testing things even when there's no real payoff from doing so.
  • If the potential payoff is low, you can still test, but you should try to minimise the cost and time of the test.
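The payoff-versus-overhead trade-off above can be framed as a rough back-of-envelope check. This is purely an illustrative sketch with made-up names and a crude assumption (a 50% chance the variant wins), not a formal decision rule:

```python
def worth_testing(test_cost, potential_uplift_value, prob_variant_wins=0.5):
    """Crude check: does the expected payoff of testing exceed its cost?

    All parameters are hypothetical: test_cost is the overhead of running
    the test, potential_uplift_value is what you'd gain if the variant wins,
    and prob_variant_wins is your prior belief that it will.
    """
    expected_payoff = prob_variant_wins * potential_uplift_value
    return expected_payoff > test_cost

# A cheap test with a big potential upside is clearly worth running:
print(worth_testing(test_cost=100, potential_uplift_value=1000))  # True
```

The point isn't the exact numbers - it's that being explicit about cost and payoff, even roughly, keeps you from testing things with no upside.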

Join me next time for some random statistics musings. Perhaps I’ll go into the truly elegant math behind testing…