Jesse Avshalomov, Head of Growth at Teespring, makes the case for 99% significance levels
The author is confusing the meaning of the p-value with the interpretation of statistical significance. Saying "There's a 5% chance that these results are total bullshit" isn't valid, because the p-value has nothing to do with the alternative hypothesis (i.e., there truly being an effect); it relates only to the null hypothesis (i.e., there truly being no effect). The p-value simply tells you the probability of observing results as or more extreme than yours if there truly is no difference.
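To make that definition concrete, here's a minimal sketch (not from the article) of the standard two-sided pooled z-test for a difference in conversion rates. The function name and the normal approximation are my assumptions; the point is that the returned number is computed entirely under the null hypothesis of no difference.

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates, using a
    pooled normal approximation. It is the probability of seeing a
    difference at least this extreme IF the null hypothesis (no true
    difference) holds -- it says nothing about the alternative."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - phi)
```

For example, `two_proportion_p_value(200, 1000, 250, 1000)` comes out well under 0.05, while identical observed rates give a p-value of 1.0 — the data are exactly as expected under the null.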
"Being wrong 1 time in 100 is a radically better outcome for your company than being wrong 1 time in 20." The level of statistical significance should not be subjectively selected, and your goal should not be to "achieve superior clarity", as the author states. If it were, why not shoot for something higher, say 99.99% statistical significance? The reality is that eventually you will implement a variation you thought drove a positive lift when it really didn't (and maybe even had a detrimental effect). You should gladly accept this as the cost of doing business: the aim of the game is to have the positive effect from the real wins outweigh the negative effect from the losses.
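The tradeoff behind those thresholds is easy to see in an A/A simulation, where both arms are identical by construction. The sketch below (my own illustration; the sample size, conversion rate, and run count are arbitrary assumptions) shows that the share of "significant" results simply tracks whatever cutoff you pick — roughly 5% at z = 1.96 (95% level), roughly 1% at z = 2.58 (99% level). A stricter cutoff buys fewer false wins at the cost of needing more traffic to detect real ones.

```python
import math
import random

def aa_false_positive_rate(z_cutoff, runs=1000, n=1000, rate=0.10, seed=1):
    """Monte Carlo of A/A tests: both arms draw from the same true
    conversion rate, so every 'significant' result is a false positive.
    Returns the fraction of runs whose pooled z-statistic clears the cutoff."""
    rng = random.Random(seed)
    false_wins = 0
    for _ in range(runs):
        a = sum(rng.random() < rate for _ in range(n))
        b = sum(rng.random() < rate for _ in range(n))
        pooled = (a + b) / (2 * n)
        se = math.sqrt(pooled * (1 - pooled) * (2 / n))
        z = abs(a - b) / n / se
        if z > z_cutoff:
            false_wins += 1
    return false_wins / runs
```

Running it with `z_cutoff=1.96` versus `z_cutoff=2.58` shows the false-positive rate falling from around one in twenty to around one in a hundred, exactly as the chosen level dictates.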
The p-value alone is a poor measure of an experiment's success because it tells you nothing about the size of the effect, and that is what really matters most: it provides much more context around the results and the implications of your decision.
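One common way to report that missing context is an effect estimate with a confidence interval rather than a bare p-value. This is a rough sketch of my own (the function name and the unpooled normal approximation for the interval are assumptions, not something from the article):

```python
import math

def lift_with_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Observed relative lift of B over A, plus a ~95% confidence
    interval on the absolute difference in conversion rates
    (unpooled normal approximation). Unlike a p-value, the interval
    speaks directly to the size of the effect."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return {
        "relative_lift": diff / p_a,
        "diff": diff,
        "ci_low": diff - z * se,
        "ci_high": diff + z * se,
    }
```

For 200/1000 vs 250/1000 conversions this reports a 25% relative lift with an interval on the absolute difference of roughly +1 to +9 percentage points — a far more decision-relevant summary than "p < 0.05".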
Correct. P-value does not tell us the probability that B is better than A. Nor is it telling us the probability that we will make a mistake in selecting B over A.
These are both extraordinarily common misconceptions, but they are false. Remember, the p-value is just the probability of seeing a result as or more extreme given that the null hypothesis is true.
Nice! There's not a lot of writing of this quality on statistical significance — glad to read about the human component of stats.
This is a great A/B testing article. It builds off Evan Miller's article: http://www.evanmiller.org/how-not-to-run-an-ab-test.html
Great read for someone trying to understand the importance of statistical significance in A/B testing. Thanks for sharing @marketergraham!
Except it's wrong. If you really wanna know about p-values, this is the article http://conversionxl.com/pulling-back-curtain-p-values-learned-love-small-data/
Thanks for the clarification @peeplaja. It seems Jesse is updating the article to reflect this.