Ask GH: CRO Stack Webinar Questions
Have a question from the CRO Stack Webinar? Ask it here! Sean, Peter, and Morgan will be happy to answer.
Here were a few questions from the webinar:
What about drip campaign tools like Klaviyo and Customer.io?
What were the two in-app analytics tools suggested? Localytics and what was the other one again?
Segment.io looks great. What does the install look like? The tools you can flip the switch on otherwise have very different installations (e.g., KISSmetrics vs. Google Analytics).
We use Klaviyo and we love it (installed through Segment.io).
We tested many email drip marketing solutions - Customer.io, Intercom, etc. - and found that Klaviyo's robust features and advanced reporting far outweighed its competitors'. With that said, there are much better drip marketing solutions - like HubSpot or Pardot - but those prices are significantly higher.
Oh, and since Klaviyo is early stage, they respond really well and quickly to customer feedback.
I have a question that has been frustrating me for some time: low sample sizes.
I love to test landing pages, but I either have to wait forever to get something significant or simply do before/after tests (instead of A/B tests).
I guess there is no way around this, but I am wondering what advice you might give to those who want to optimize pages that get low traffic.
I have a couple of tricks for that, Ben. One is to test a brief version of my landing page messaging in a Google content text ad: the first line matches the landing page headline, the second line matches a promise statement, and the third line is a very brief product description. I tend to find that the messaging that performs best on a relatively untargeted ad also performs best on the landing page.
The other way I deal with low/no traffic pages is via usertesting.com. I can generally fix key issues on a page before sending any traffic to it.
Of course, I recommend that you continue to run the CRO process on the page (quantitative analysis, qualitative analysis, testing) as traffic increases.
Shana also mentions Bayesian testing for low volume pages, but I'm still not sure what the heck that is :)
Awesome advice, never thought about that. I assume you then send the traffic to your landing page and examine the conversions?
I've also been meaning to test usertesting.com - I think I have a few credits available.
The larger the difference in conversion rate, the less data you need to reach “statistical significance”. You can also decrease the amount of data required by lowering your significance level (i.e., 80% instead of 95%). I would say that if sample size is an issue for you, the “certainty” that a high statistical-significance level can provide might not be the best approach for growth. Instead, it is probably better for you to run tests more quickly than it is to run tests with more accuracy.
The trade-off here is that by lowering your confidence level, you will get things wrong more often. By only collecting enough data to detect big changes in conversion (i.e., a 50% lift instead of 1%), you will likely be discarding changes that would have driven a small lift. Obviously it’s not ideal to be wrong, nor is it ideal to discard a change that drives even the smallest lift, because a win’s a win after all!
So, what is the cost of making a wrong decision, or discarding changes driving smaller lifts? Since your traffic is low, you are probably okay with making more mistakes if it ultimately leads you to making more wins as well. Likewise, while a 1% lift would be huge for a larger company, such a small lift probably wouldn’t do much for smaller companies. What % lift would have a big enough impact for you? 5%, 20%, 50%+?
Tweaking these two parameters will have a big impact on the amount of data you need to collect to reach “significance”. Hope that helps.
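To make the trade-off above concrete, here is a minimal sketch of the standard two-proportion z-test sample-size formula (normal approximation). The function name and the example baseline/lift numbers are my own illustration, not from the webinar:

```python
from statistics import NormalDist
import math

def sample_size_per_variant(baseline, lift, significance=0.95, power=0.80):
    """Approximate visitors needed per variant to detect a relative
    `lift` over `baseline` conversion rate with a two-sided
    two-proportion z-test (normal approximation)."""
    p1, p2 = baseline, baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# On a page converting at 5%: hunting a 1% lift at 95% significance
# needs millions of visitors, while a 50% lift at 80% significance
# needs only a few hundred per variant.
print(sample_size_per_variant(0.05, 0.01))                     # tiny lift, strict
print(sample_size_per_variant(0.05, 0.50, significance=0.80))  # big lift, relaxed
```

Playing with those two parameters (minimum detectable lift and significance level) shows just how steeply the required traffic drops when you only chase big wins.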
Looks like we answered everything in the Webinar. Even if you weren't on the webinar, feel free to post any questions you may have about conversion rate optimization and tools for improving your chances for success.
In the context of the company stages chart, how would you delineate early stage from mid stage and mid stage from late stage?
So the idea is essentially that each stage has different priorities around the testing framework, because companies at those stages have different needs.
Pre-PMF: You don't have enough traffic or even customers to do real optimization, and you don't know what people actually want. In this case you want to focus on qualitative feedback and the must-have score to get to PMF as a first priority. Instrumenting your site around KPIs that may or may not actually drive the business doesn't make sense.
Early Stage: Post-PMF or with emerging PMF, you want to make sure you have everything instrumented properly so that you can understand what is happening and what matters within your product. Qualitative feedback is also still important, because you are probably still trying to dial in exactly what matters most; PMF isn't typically like flipping a switch. I could also argue that testing is the number-two priority here, as it depends on your traffic, and optimizing your growth engine could be more important based on your trajectory.
Mid-Stage: Company is in a solid growth mode, with your top acquisition channel(s) identified. Here it's all about optimizing your growth engine. You likely are well instrumented and have PMF so the most important thing is making the most of your best performing channel. Peter Thiel talks about the importance of finding one channel and making it work really well here: blakemasters.tumblr.com/post/22405055017/peter-thiels-cs183-startup-class-9-notes-essay
Late Stage: Mature company and growth channels, with scalable acquisition occurring and larger marketing budgets and paid acquisition being managed. At this stage you are probably testing and have good lifecycle measurement in place, but it's often the case that people in the organization assume the customer and their needs are well known. The truth is that that knowledge usually isn't refreshed as often as consumer needs change, so there is growth opportunity, and more CRO wins to be had, in re-examining customer needs to identify new, more greenfield test ideas that can drive wins.
I hope that helps.
Thank you Morgan, this answers my question perfectly. I was seeking clarification because I'm considering including that chart in a blog entry that I'm writing that's very similar to your reply and I wanted to make sure that we're on the same page. I've been consulting for a lot of start-ups that want to go after quick wins / (what they think are growth hacks) because they don't know what stages they're in and don't realize that they aren't ready to focus on growth yet. It seems to be a pretty common issue and I'm addressing it in this blog.
In my experience, working with startups that want growth but aren't ready for it is exceptionally challenging and can be dangerous for your reputation as someone who can drive growth.
I made the mistake of taking on a client who wanted to grow but wasn't ready for it, couldn't or wouldn't make the changes they needed to, and simply didn't get it. In the end, I waived $10,000 in fees because I made the mistake of trying to help when they weren't ready. I'd rather have my reputation than the $10k any day.
Thanks for sharing your advice, Morgan. Fortunately I've followed Sean's advice (from the "rare interview") and have avoided retainer situations with start-ups in these stages as well. So far I've only provided one-off consultations to help them determine what stages they're in and what goals they should be focused on so that they can work on moving forward. They really shouldn't be spending funding on a marketer/growth hacker at that point anyway.
Cool, Sean has a ton of great advice to share, but that might be the most important of it all.
Definitely. That interview in general was a real career changer for me.
@Nichole - where can I find the 'rare interview' you speak so highly of?
BTW, interviews with me aren't really rare anymore - but they used to be when we did that interview.
I have a few questions regarding the utm_nooverride=1 parameter. It might be beyond the scope of this thread and I won't be too bummed if it goes unanswered. I'm mostly baffled by it and trying my best to learn more.
As you know, by adding this parameter to your links, you essentially retain the source/medium/etc. data from the first click. Normally, GA attributes conversions to the last click.
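For reference, a tagged link looks something like this (the domain and campaign values here are placeholders of my own, not from the thread):

```
https://example.com/landing?utm_source=newsletter&utm_medium=email&utm_campaign=spring_promo&utm_nooverride=1
```

With utm_nooverride=1 appended, Google Analytics keeps the visitor's original campaign attribution instead of overwriting it with this click's campaign data.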
My questions are:
1. For proper use, are we supposed to add this parameter to the first URL, the last URL, or both? In other words, to get the nooverride feature to work, which URL(s) do we tag?
2. What are best practices for this feature? That is, are there certain scenarios where it should be used and others where it should not?
3. Finally, and most importantly, I am baffled by how this feature will affect multi-channel attribution reports in GA. Those reports provide detailed insights into the funnel your customers take. If your links have nooverride tagging, what effect will this have on the MCA reports? (I'm considering running a test, but hoping someone already has.)
Great questions Ben. I'd like to know the answers too. We might have a better chance of getting a discussion around it if you post it as a separate AskGH.