Ask GH: what is the biggest factor preventing you from running more growth tests?
HubSpot's Sidekick team ran over 1,000 growth tests in an 8-month period. The rest of us fall way short of this number. What specifically is holding you back from running more tests?
In general, I run into the following challenges:
Resources/Prioritization - Oftentimes, setting up new tests can be cumbersome and time-consuming. If the organization isn't used to running a lot of tests, even simple Optimizely experiments can take days to set up and even longer to get prioritized.
If I have a backlog of hundreds of ideas, none of them are worthwhile if they don't see the light of day. Most marketers/growth people who can't or aren't allowed to make code changes to implement the tests are at the mercy of the product or engineering team, who decide which projects get greenlit.
That's why I think it's so important to have a growth team and that the growth team sits within the product organization.
You can have an experimental culture in marketing all you want, but you are going to be limited to ad tests, landing page tests, etc., which may be just window dressing compared to the tests that can really move the needle.
Ideas - Most people don't have a great idea of what they should test. Without a backlog you're really just grasping at things to test, not actually building a pipeline of ideas to experiment with. Most people get Optimizely or some other testing software and think about testing headlines, without really any notion of what types of experiments they want to, or should, run.
Instrumentation - Unfortunately, a lot of companies are under-invested in metrics and tracking, so it's tough to say with confidence which experiment worked and why. If you don't have end-to-end funnel tracking, or your team doesn't know how to access/use it, it can cripple any experimentation culture because no one knows what was learned. Which leads me to my next point.
Learning - This goes two ways: 1) what did we learn? That's a hard question to answer sometimes and can lead to inaction. 2) what do we already know? Without a track record or store of previous tests and results it's hard to know what ground has already been covered and where the organization should look next.
Traffic - People confuse optimization/experimentation with A/B testing, so they say they don't have enough traffic to run tests. I think this is more of a mindset issue than a traffic issue. There are hundreds of experiments you can run in low-traffic settings. Although having traffic does help, specifically in the A/B testing piece of the experimentation process.
There are more, but that's a good start :)
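Morgan's traffic point is worth quantifying: A/B testing is the one piece of experimentation that genuinely needs volume. As a rough sketch, a standard two-proportion power calculation shows why small lifts on modest conversion rates demand a lot of traffic (the baseline rate, lifts, and significance thresholds below are illustrative assumptions, not numbers from this thread):

```python
from math import ceil, sqrt
from statistics import NormalDist

def samples_per_variation(baseline, lift, alpha=0.05, power=0.8):
    """Rough per-variation sample size for a two-sided two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # power threshold
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * pooled * (1 - pooled))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a small (10%) relative lift on a 5% baseline needs tens of
# thousands of visitors per arm; a large (50%) lift needs only a couple
# thousand. Low-traffic teams can still run the non-A/B experiments.
print(samples_per_variation(0.05, 0.10))
print(samples_per_variation(0.05, 0.50))
```

This is why low-traffic teams should lean on the other experiment types in this thread (interviews, outreach tests) rather than conclude they can't experiment at all.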
@morgan Hit the nail on the head with this one.
The only thing I would argue with is ideas. That shouldn't be an issue in the majority of cases.
I'll also be my own devil's advocate since that statistic of 1000 in 8 months came from me.
The pure experiment number is only a piece that tells the story. Certain areas (like paid acquisition) you can run a lot more tests in a much shorter period of time. Biggest hurdle to throughput is typically money.
The deeper you go in the funnel, the fewer experiments you can typically run in the same time period, due to less traffic or more intensive resources per experiment. For example, retention experiments typically take more engineering resources, take longer to collect data, and take longer to analyze.
So don't just focus on the pure # of experiments you are running, but the impact that you are having (throughput is an input to that).
Thanks for providing more context on the 1000+ tests @bbalfour. I was super impressed with that number (even with part of it being paid acquisition tests). It surprises me that you didn't have trouble coming up with ideas to test. It feels like running 1000 good tests would require a few thousand ideas that get narrowed down to the portion that are test worthy. How did you guys generate your ideas?
@sean Well they weren't all good tests :) I also have a FT team of 7 doing nothing but running experiments. However during the time period I described we went from 1 to 6.
I'm regretting publishing that number now because there is so much context needed. Whoever is reading this, please remember to focus on the impact you are having over everything else.
How we come up with ideas is material for a very long blog post. Sounds like people would find it useful because ideas have never been our problem, so maybe we are doing something right there that everyone could benefit from.
At a high level we have set exercises that we've developed that the teams can always fall back on if they need to find ideas. Association, questioning, creative destruction exercises.
We always end an OKR period with more ideas than we started and the ideas get better and better. We do 1 - 2 brainstorm sessions using one of our exercises at the beginning. Depending on the focus area that could generate 50 - 100 starter ideas. We prioritize the top 10ish and start attacking.
Every experiment we run typically ends up generating 2 - 5 more ideas, and invalidating some ideas in our backlog based on what we learned. This comes from asking questions like:
"why did this happen?"
"If this worked, how could we make it even better?"
"Where else can we apply this?"
I REALLY push on those questions.
Theoretically the ideas get better because they were built off learnings.
I'll write about this in more detail at some point.
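Balfour doesn't spell out his prioritization mechanics here, but the "prioritize the top 10ish" step is commonly done with a simple scoring model such as ICE (impact, confidence, ease). A minimal sketch under that assumption; the framework choice, 1-10 scale, and idea names are illustrative, not taken from this thread:

```python
def ice_score(idea):
    """ICE: average of impact, confidence, and ease, each rated 1-10."""
    return (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

# A hypothetical slice of a brainstormed backlog.
backlog = [
    {"name": "rewrite onboarding email #2", "impact": 7, "confidence": 5, "ease": 8},
    {"name": "test annual-plan default",    "impact": 9, "confidence": 4, "ease": 3},
    {"name": "add exit-intent survey",      "impact": 4, "confidence": 6, "ease": 9},
]

# Attack the highest-scoring ideas first; re-score as finished experiments
# generate new ideas and invalidate old ones.
for idea in sorted(backlog, key=ice_score, reverse=True):
    print(f'{ice_score(idea):.1f}  {idea["name"]}')
```

The re-scoring loop is what makes the backlog compound the way Balfour describes: each experiment's learnings feed back into the confidence ratings of related ideas.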
Thanks @bbalfour -- looking forward to hearing about your ideation/prioritization process.
Great questions @bbalfour ! I'm definitely going to use these.
This is a jaw-dropping answer, Morgan! I especially liked the part about "People confuse optimization/experimentation with A/B testing, so they say they don't have enough traffic to run tests".
Also, what sort of experiments can be run in low traffic settings?
@ivankreimer Maybe this will help:
When I think experiments I think beyond A/B tests to things like customer development interviews with different value propositions. Also, you can experiment with lots of things that drive growth that aren't related to traffic. For example, A/B testing cold outreach partnership email subject lines, etc. There's lots to experiment with and learn from that doesn't have anything to do with site traffic.
Hope this helps!
+1 on this @morgan - my two biggest would have to be ideas worthy of testing and the resources to do so.
You summed up my life perfectly in this one reply. Amazing!
Great question @sean
Here are the biggest factors that used to prevent us and our customers from running more growth tests:
1) Culture – We notice that unless a culture of testing is in place in an organization, it's rarely successful in the long run with growth tests. The primary driver of culture is a management layer that does not balk at putting their own, and others', hypotheses to the test. A great testing culture usually has these tell-tale signs:
a) The testing team gets engineering and design resources
b) Tests that don’t produce lift don’t derail the testing program
c) People in the organization (other than marketing) know about the tests being run and their results
d) Positive results are reported and celebrated company-wide
e) All results are analyzed and insights communicated to everyone
2) Research – Unfortunately, there’s a bit of mea culpa here because the A/B testing tools are partly responsible for promoting the “test your button color” school of thought :) But we’re moving away from that and focusing on getting users to do a lot of research before they jump into testing. Fortunately, research often surfaces problems with landing pages and funnels that don’t require tests.
3) Constraining growth tests to A/B testing – If you don't have enough traffic, test your assumptions by running them by five of your users. That gives far better insights than many A/B tests can.
4) Not measuring results at the bottom of the funnel – All tests should have an impact on the bottom of the funnel. For eCommerce, that means closely watching the RPV (revenue per visitor) of your variations. For SaaS, you're generally looking at an increase in paid subscriptions. Focusing on the wrong metric and not understanding funnel effects is a sure-shot way of running tests that don't move business needles, and of getting reactions like http://imgur.com/gallery/seh6p when you report a winning result. What you really want is this - http://i.imgur.com/Uie55.gif
FUN FACT – we ran a homepage A/B test on VWO.com late last year and it was a bet between two people’s hypotheses. The loser came to office like this - http://imgur.com/H3HyVYg
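The RPV point in 4) is easy to operationalize: revenue per visitor folds conversion rate and order value into one bottom-of-funnel number per variation. A minimal sketch (the variation figures are made up for illustration; they show how a variant can "win" on conversions while losing on the metric that matters):

```python
def revenue_per_visitor(visitors, revenue):
    """RPV = total revenue attributed to a variation / visitors who saw it."""
    return revenue / visitors

# Hypothetical test: suppose B converts better but on cheaper orders,
# so its RPV, the bottom-of-funnel metric, is actually lower.
variations = {
    "A (control)": {"visitors": 10_000, "revenue": 25_000.0},
    "B (variant)": {"visitors": 10_000, "revenue": 23_500.0},
}
for name, v in variations.items():
    print(name, revenue_per_visitor(v["visitors"], v["revenue"]))
```

Judging the same test on signup or add-to-cart rate alone would declare the wrong winner, which is exactly the funnel-effect trap described above.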
I personally don't like to run too many concurrent tests. Depending on the metrics you are measuring, tests can have a high "rendering" time.
I can understand not wanting to run too many concurrent tests on the same page, but when you define the scope of testing from external channels to deep product features, I think it opens the door to a lot more concurrent tests. Personally I'm convinced that the biggest factor in increasing growth rates is increasing the number of tests you run in a given period of time. We've been doing a lot of experimenting around this hypothesis lately and it seems to be valid. Of course testing without a clear hypothesis and set of metrics to monitor results is a waste of time. But instrumenting an organization to run more tests the right way is the most predictable way to drive growth IMHO.
Wow, I definitely see myself in @morgan 's answer. I'd add that a lot of those struggles are experienced by small businesses and people who are trying to drive growth for the first time, especially at small, bootstrapped companies. I find myself wondering very often what I should do first; even with many moves at hand, I can't help wondering which one would have the most effect and ROI in the most cost-effective and least time-consuming way.
This will sound silly, but I am also experiencing what I'd call a bit of hesitation and uncertainty regarding what to implement, with this haunting fear of making the wrong move at the wrong time. I think this is highly correlated with doing things for the first time and a confidence/risk-taking issue: what if this or that test ends up being perceived badly, gets noticed by an influential person, and prompts a huge negative fallout?
Most importantly, as a bootstrapped business with limited resources, I find resources to be a major handicap in running more tests and figuring out what works best. This ties back to the problem of A/B testing with limited traffic, which I think is still a real problem: after all, how is it possible to come to conclusions without a statistically solid sample?
As to the learning part, I love how many amazing services out there can be put to good use without paying (or without paying much) and without much difficulty.
Hi @littleoto. Your comment reminded me of an article I read a while ago from Steve Blank https://growthhackers.com/speed-and-tempo-fearless-decision-making-for-startups/ . In the article he talks about the importance of speed and how most decisions are reversible if they turn out to be bad. Highly recommend giving it a read.
Nice read, @sean, thanks for taking the time to find it and share it. I like the dual perspective it gives on taking new actions. Also currently reading The Four Steps to the Epiphany by S. Blank :) !
Lack of autonomy. From a big company perspective, we often run into a problem where in order to test a particular point in the funnel we need buy-in from other teams (e.g., to test onboarding we need pro services buy-in so we don't step on their toes). This typically isn't a challenge with a smaller, agile, and autonomous startup team.
Lack of time. In addition to what @bbalfour said about the time it takes certain tests to get through a cycle, I often run up against simply not having enough focused time in a week. To really do an experiment justice, you need proper planning, tracking, implementation, etc.
I run on a manager's schedule these days, and I'm often pulled into random projects or things like sales calls, conferences, speaking, writing, and investor stuff. Founders: it might be tempting to pull your growth team onto other, non-growth projects: DON'T DO IT. :)
Hi @sean Not enough data.
In terms of tracking experiments (everything from setting hypotheses to tracking costs to determining roi / success, etc), what system have people found most helpful?
A simple google doc? If so, would anyone be willing to share their template?
And how have you kept your team accountable for actually following through on this?
Thanks all in advance!
P.s. This is easily my favorite Growth Hackers post and I've read a lot of them!
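No one in the thread shares a template, but the fields the question lists (hypothesis, costs, ROI/success) map naturally onto a flat log: a spreadsheet or even a CSV works. A minimal sketch of what one record might hold; the field names and sample entry are illustrative assumptions, not an endorsed template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Experiment:
    """One row in a lightweight experiment log."""
    name: str
    hypothesis: str
    metric: str               # the single metric that decides success
    started: date
    cost_usd: float = 0.0
    result: str = "running"   # e.g. "win", "loss", "inconclusive"
    learnings: list[str] = field(default_factory=list)

log = [
    Experiment(
        name="pricing-page social proof",
        hypothesis="Adding customer logos lifts trial signups",
        metric="trial signup rate",
        started=date(2015, 6, 1),
        cost_usd=400.0,
    ),
]

# Answering Morgan's "what do we already know?" is then just a filter.
wins = [e for e in log if e.result == "win"]
print(len(log), len(wins))
```

Accountability is mostly process, not tooling: the structure only helps if logging the hypothesis and result is a required step of every experiment's lifecycle.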
Time. We're over the brim with ideas, but the days tick away faster than we can put them out to the web.
These are my current challenges:
Bandwidth: I work on a small team and user experience/site optimization is only a percentage of my responsibilities. When something breaks or there are projects with more concrete deadlines, testing has to take a back seat.
Validating Ideas: I might have 1000 ideas for testing, but most optimization resources will tell you that tests based on a "hunch" are not worth your time--it's better to have hard quantitative or qualitative data to back up a hypothesis. Our customer support team is very communicative about specific issues users are having on the site, so I like to base tests off this input.
Time vs Effort: a lot of blogs and case studies tout stories of "changing my button color 10x-ed my conversion rate!!1" but 10x zero is still zero. Or to put it less sarcastically, the dollar value of the incremental revenue is never revealed. I'd rather focus on growing my annual revenue or customer lifetime value, which requires a more thought-out strategy.
If we had a person or team dedicated to testing and analysis I think my outlook would be different.
For me it comes down to resources and buy-in. As @morgan mentioned, asking to borrow resources from the product team creates a conflict where product is more often than not viewed as more important. I think this is where having a growth team becomes important. The buy-in side comes in where there may be plenty of ideas but not quite the willingness to test them. A new growth idea may not work, but if 1 out of 10 provides a significant boost, then all 10 were worth it. Same if 3 out of the 10 provide small lifts. But full buy-in is needed in order to dedicate the resources for these types of experiments.