
Jesse Avshalomov, Head of Growth at Teespring, makes the case for 99% significance levels

  • Brian Lang · over 4 years ago

    The author is confusing the meaning of the p-value and the interpretation of statistical significance. Saying “There’s a 5% chance that these results are total bullshit” isn’t valid, because the p-value has nothing to do with the alternative hypothesis (i.e., that there truly is an effect); it relates only to the null hypothesis (i.e., that there truly is no effect). The p-value simply tells you the probability of observing results as extreme or more extreme than yours if there truly is no difference.

    “Being wrong 1 time in 100 is a radically better outcome for your company than being wrong 1 time in 20.” The level of statistical significance should not be selected subjectively, and your goal should not be to “achieve superior clarity,” as the author states. If it were, why not shoot for something even higher, say 99.99% statistical significance? The reality is that eventually you will implement a variation you thought drove a positive lift when it really didn’t (and maybe even had a detrimental effect). You should gladly accept this as a cost of doing business; the aim of the game is for the positive effect of the real wins to outweigh the negative effect of the losses.

    The p-value alone is a poor measure of an experiment’s success because it tells you nothing about the size of the effect, and the effect size is what really matters most (it gives far more context around the results and the implications of your decision).

    • Peep Laja · over 4 years ago

      Correct. The p-value does not tell us the probability that B is better than A, nor does it tell us the probability that we will make a mistake by selecting B over A.

      These are both extraordinarily common misconceptions, but they are false. Remember, the p-value is just the probability of seeing a result as extreme or more extreme than the one observed, given that the null hypothesis is true (a quick numeric sketch of this follows below).
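
A minimal sketch of what the two comments above describe, using a standard two-proportion z-test with made-up numbers (the figures and the helper function below are illustrative, not from the article): the p-value is computed entirely under the assumption that there is no difference between A and B, while the observed lift (the effect size) is a separate number reported alongside it.

import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (observed lift of B over A, one-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate: the single rate assumed under the null hypothesis.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided tail probability: P(a difference at least this large | no true difference).
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    lift = (p_b - p_a) / p_a
    return lift, p_value

# Hypothetical A/B test: 10,000 visitors per arm, 500 vs. 560 conversions.
lift, p = two_proportion_z_test(500, 10_000, 560, 10_000)
print(f"Observed lift: {lift:+.1%}")    # the effect size, which the decision should weigh
print(f"One-sided p-value: {p:.3f}")    # NOT the probability that B is better than A

Nothing in the calculation involves the probability that B truly beats A; the p-value only measures how surprising the data would be if A and B were identical.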

  • Logan Johnston · over 4 years ago

    Nice! There isn’t a lot of writing of this quality on statistical significance; glad to read about the human component of stats.

  • Faisal Al-Khalidi · over 4 years ago

    This is a great A/B testing article. It builds off Evan Miller's article: http://www.evanmiller.org/how-not-to-run-an-ab-test.html

    Great read for someone trying to understand the importance of statistical significance in A/B testing. Thanks for sharing @marketergraham!

  • Jay Sekulow · over 4 years ago

    Interesting...
