
Over the last few years the startup community has gotten firmly behind A/B testing and hyped it considerably. But there is a more nuanced point about its downsides that needs to be understood: A/B tests are very expensive for most startups at precisely the time they matter most, early in their formation.

  • CC

    Chris Conrey

    over 5 years ago #

    The Agile Dev world has a term for this: "Premature Optimization" - when you refactor and refactor before you put it in front of people. I'm all for optimizing, but you have to do it when it's the right time. Great article.

  • PC

    Patrick Campbell

    over 5 years ago #

    There's a bigger problem here with the validity and significance of tests, which the article stresses. The biggest issue for me concerns SaaS companies that run A/B tests and think 1,000 visitors is enough to justify changing a long funnel. There are definitely better ways to fine-tune the scope of tests, or to analyze data beyond testing.
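To put rough numbers on why 1,000 visitors falls short: a minimal sketch (my own illustration, not from the thread) of the standard normal-approximation sample-size formula for a two-proportion test. The 2% baseline conversion rate and 20% relative lift are assumed figures, chosen to be typical of a SaaS funnel:

```python
import math

def sample_size_per_variant(baseline, relative_lift):
    """Approximate visitors needed per variant for a two-sided
    two-proportion z-test at alpha = 0.05 with 80% power."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = 1.96  # two-sided critical value, alpha = 0.05
    z_beta = 0.84   # power = 0.80
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 20% relative lift on a 2% baseline conversion rate:
print(sample_size_per_variant(0.02, 0.20))  # roughly 21,000 per variant
```

At 1,000 visitors a day, that is weeks of traffic per variant before the test can call a winner on a change of that size.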

  • CH

    Chris Hexton

    over 5 years ago #

    This is a great article and really hits home with me right now. We're at a point where we have the time to focus on our conversion rate at the top of the funnel but, as we're still tiny in the world of A/B tests, it seems irrational to run a test and wait 5 weeks for a somewhat meaningful result (still questionable, depending on who you talk to).

    As such, we're focusing on qualitative feedback in the first instance to better understand the 'psychology' of potential and new customers.

    Really interested to hear how others have tackled this early on in their SaaS journey.

  • SC

    Shana Carp

    over 5 years ago #

    I honestly think some of these issues would be solved by switching to a Bayesian bandit style of A/B test; loosening the bounds would make running the test a lot cheaper.

    See the d3 graphic in this post: http://camdp.com/blogs/multi-armed-bandits

    At some point you stop testing the worst variant and move on to the better ones, which helps keep the cost of the test down.
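For readers unfamiliar with the approach Shana describes, here is a minimal Thompson-sampling sketch, one common flavor of "Bayesian bandit." The variant names and true conversion rates are invented for illustration; each variant keeps a Beta posterior over its conversion rate, and each visitor sees whichever variant wins a random draw from those posteriors, so poor variants gradually stop winning traffic without a hard cutoff:

```python
import random

class Variant:
    def __init__(self, name):
        self.name = name
        self.successes = 0  # observed conversions
        self.failures = 0   # observed non-conversions

    def sample(self):
        # Draw from the Beta(1 + successes, 1 + failures) posterior
        return random.betavariate(1 + self.successes, 1 + self.failures)

def choose(variants):
    # Show the variant whose posterior draw is highest this round
    return max(variants, key=lambda v: v.sample())

def record(variant, converted):
    if converted:
        variant.successes += 1
    else:
        variant.failures += 1

# Simulated run: variant B truly converts at 5% vs. A's 2%.
random.seed(42)
true_rates = {"A": 0.02, "B": 0.05}
variants = [Variant("A"), Variant("B")]
for _ in range(5000):
    v = choose(variants)
    record(v, random.random() < true_rates[v.name])
for v in variants:
    # Expect B to have accumulated most of the traffic by now
    print(v.name, v.successes + v.failures)
```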

    • MB

      Morgan Brown

      over 5 years ago #

      There was a pretty good debate about this last year, after this post was written: 20 Lines of Code that Will Beat A/B Testing Every Time

      http://stevehanov.ca/blog/index.php?id=132

      It is an interesting approach, and valid, especially when you get big variations in the performance of the variants.

      The counter from the A/B testing fans is that you still have to deal with statistical significance in a multi-armed bandit. If you're exploiting the better variation, you don't actually have good statistical assurance that the loser is actually worse. And because the bandit test intentionally drives more traffic to the winning page (duh), it takes a lot longer to rule out the loser.

      So this implementation works best when either 1) there is a wide discrepancy between the two variants, or 2) you're going to run many variations and want to sustain average conversion rate over all tests, but not so much when there is little variance between the variants or when you want to get to clear winners quickly.

      That was my takeaway from it, anyway. I'm not a math major though, so interested to learn where I should be thinking about it differently.
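The linked post's approach boils down to epsilon-greedy. A hedged sketch of the idea (not Hanov's actual code; the variant names are invented): 10% of visitors see a random variant, the rest see the current best performer.

```python
import random

counts = {"A": 0, "B": 0}   # times each variant was shown
rewards = {"A": 0, "B": 0}  # conversions observed per variant

def choose(epsilon=0.1):
    """Explore a random variant 10% of the time; otherwise exploit
    the variant with the best observed conversion rate so far."""
    if random.random() < epsilon:
        return random.choice(list(counts))
    return max(counts,
               key=lambda v: rewards[v] / counts[v] if counts[v] else 0.0)

def record(variant, converted):
    counts[variant] += 1
    rewards[variant] += int(converted)
```

Because the exploit branch sends roughly 90% of traffic to the current leader, the trailing variant collects data at a tenth of the rate, which is exactly why ruling the loser out with statistical confidence takes so much longer.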

  • PC

    Patrick Campbell

    over 5 years ago #

    @ShanaCarp: Ding. Ding. Ding.

  • SE

    Sean Ellis

    over 5 years ago #

    One trick I use for cheaper and faster A/B tests in an early-stage startup is to do my early messaging testing in text ads rather than on landing pages. If using Google ads, I recommend content targeting for this rather than search, because search intent is too strong to get a good read. I've also found Facebook text ads useful for this. When I've later tested the messaging on landing pages, the text ads have been very accurate in predicting which landing page would perform best at driving clicks and reducing bounces (though not necessarily bottom-of-funnel conversions).

  • PL

    Peep Laja

    over 5 years ago #

    Even if you don't have enough traffic to do proper a/b testing, that doesn't mean you can't optimize. Here's a guide on how to do it: http://conversionxl.com/how-to-do-conversion-optimization-with-very-little-traffic/

  • HV

    hristo vassilev

    over 5 years ago #

    A friend of mine is working on multi-armed bandit problems and shared a few PDFs with me; thought you guys might find them useful: http://bit.ly/1gO273S

    • SC

      Shana Carp

      over 5 years ago #

      Actually, I know of a company with an alpha for a multi-armed bandit tester. They're doing something even more interesting: releasing an API that allows entire site pieces to be reordered throughout the funnel via a multi-armed bandit plus targeting.

      If you want to test, let me know

  • SL

    SHARDAY L.

    about 3 years ago #

    We thought the same thing. Plus, we had low MAU (we were testing a third party app for a client in a focus group setting).

    We decided that perhaps A/B testing wasn't so much about a large number of MAU as about the proper funnels you put your testers through. Because I work for a digital marketing agency under the umbrella brand of a larger market research firm, we were able to apply both quantitative and qualitative methodology alongside A/B testing for a combined approach, similar to Lean.

    Market research ultimately determined that A/B testing is easily a complement, and if you shop for the right tool, it should be fairly inexpensive.

    Side note: I've noticed that more than one brand can be applied to several A/B testing platforms. If you're a start-up, perhaps partner with a market research firm that has the capacity to provide A/B testing, so you don't have to purchase the platform yourself; you purchase the research instead.
