The importance of statistical significance in the results of a website A/B test is well understood, but in email marketing that's not always the case. A/B testing functionality in email tools is often limited, which can keep your tests from running correctly. In this article, Mailchimp looks at the impact of wait time before declaring a winner in three areas: open rates, click rates, and revenue.

  • MS

    Martijn Scheijbeler

    3 months ago #

    It's noteworthy and sad that the whole article doesn't talk about the statistical significance of the data itself. That's something you absolutely want to take into account when deciding what works better or worse. Instead, the article only talks about how long you should wait before looking at the results, based on data from all their clients. Unfortunately it doesn't account for the fact that most of their clients probably have smaller mailing lists, so their results will be biased and likely misleading if you follow this advice.
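
    For illustration, here's a minimal sketch of the kind of check that's missing: a plain two-proportion z-test on open rates in Python. The counts below are hypothetical, not from the article.

    from math import sqrt, erf

    def open_rate_ztest(opens_a, sent_a, opens_b, sent_b):
        # Two-proportion z-test: is the difference in open rates real or noise?
        p_a, p_b = opens_a / sent_a, opens_b / sent_b
        pooled = (opens_a + opens_b) / (sent_a + sent_b)
        se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return z, p_value

    # Hypothetical: 750 recipients per variant (15% of a 5,000-subscriber list)
    z, p = open_rate_ztest(165, 750, 180, 750)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p is well above 0.05, so no real 'winner'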

    • DM

      Dennis Moons

      3 months ago #

      Thanks for pointing this out.

      I think this type of A/B testing is one of the most common ones people come into contact with.

      So it's kind of telling how much of this is hidden or overlooked in most email tools to simplify the testing: sample sizes, significance, etc.

      This gives people 'winners' though, which might be the thing they are after.

      I'm using Convertkit for example which allows for A/B testing the subject line. It will take two 15% segments of your list and test which subject line works best. After 4 hours it picks a winner and sends it to the rest of the list.
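
      As a rough illustration of why those 15% segments are usually too small, here's a quick back-of-the-envelope sample-size sketch in Python. The baseline open rate and lift are hypothetical assumptions, not ConvertKit's numbers.

      from math import ceil

      def sample_size_per_variant(p1, p2, alpha_z=1.96, power_z=0.8416):
          # Recipients needed in EACH variant to detect p1 vs p2
          # at alpha = 0.05 (two-sided) with 80% power.
          effect = abs(p2 - p1)
          variance = p1 * (1 - p1) + p2 * (1 - p2)
          return ceil((alpha_z + power_z) ** 2 * variance / effect ** 2)

      # Hypothetical: baseline 20% open rate, and we want to detect a lift to 23%
      n = sample_size_per_variant(0.20, 0.23)
      print(f"~{n} recipients per variant")
      print(f"list size needed if each variant is a 15% segment: ~{ceil(n / 0.15)}")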

      • MS

        Martijn Scheijbeler

        3 months ago #

        "This gives people 'winners' though, which might be the thing they are after.". This is a statement I tend to disagree with, it gives them direction with uncertainty still as you usually can't trust the results. In most cases you likely won't reach any significance on the results within a few hours, definitely not for the full sample sizes which basically makes the results that you gather invalid. I'm not saying that anything like this is going to be ground breaking for what you're doing. But if every business is approaching it this way at scale they're still flipping coins instead of making decisions that are the truth.

      • DM

        Dennis Moons

        3 months ago #

        Completely agree Martijn.

        I was trying to say that those "winners" give them a false sense of success and business direction.

        Like you said, it isn't much better than flipping a coin...

  • DM

    Dennis Moons

    3 months ago #

    Takeaway for me was the long lead time before you can draw any conclusions about the impact of an email on revenue. The article suggests about 12 hours to reach 80% accuracy.

    So ideally you'd need to wait even longer!
