I've seen this work like magic in marketing (analyzing how paid search would increase revenue by tracking the full funnel from lead generation to qualified lead). I basically saw that Twitter and Facebook never generated any final sales, so I cut that advertising budget and focused only on the specific LinkedIn and AdWords campaigns that drove leads that went on to become qualified (I was using Pardot + Salesforce + Unbounce + spreadsheets).

I'm now working more on the product side as a UX designer, and I'm trying to understand how design changes and features eventually drive growth, so I can focus on the 20% of things that bring 80% of the results.

The product I'm designing is similar to AutoCAD or 3ds Max, so it's not as simple to A/B test as a mobile application.

  • Christie Lau (over 3 years ago)

    Hello Daniel, thanks for the question. I think the 'model' would depend on which metric you are trying to grow. What I would do is annotate the time of the design change in your analytics, then compare the metric before and after the change.

    It is not an A/B test in the traditional sense, i.e. you are not comparing two groups of users over the same time frame. However, if you keep the rest of your product constant, you can compare the behaviour of the same user group at different points in time, i.e. before and after the design change.
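
    A minimal sketch of that before/after comparison, assuming you can pull a per-user event log out of your analytics tool; the toy table, column names and event names below are made up for illustration:

    ```python
    import pandas as pd

    # Toy event log standing in for an analytics export; in practice you would
    # pull this from your product analytics tool. Column names are assumptions.
    events = pd.DataFrame({
        "user_id":   [1, 1, 2, 3, 1, 2, 3, 4],
        "event":     ["open_file", "export", "open_file", "open_file",
                      "export", "export", "open_file", "export"],
        "timestamp": pd.to_datetime(["2018-02-10", "2018-02-15", "2018-02-20", "2018-02-25",
                                     "2018-03-05", "2018-03-10", "2018-03-12", "2018-03-20"]),
    })

    CHANGE_DATE = pd.Timestamp("2018-03-01")   # the annotated design-change date
    TARGET_EVENT = "export"                    # the behaviour you are trying to grow
    window = pd.Timedelta(days=28)             # symmetric window keeps seasonality comparable

    def share_doing(frame):
        """Share of active users in the window who performed the target action."""
        active = frame["user_id"].nunique()
        doers = frame.loc[frame["event"] == TARGET_EVENT, "user_id"].nunique()
        return doers / active if active else float("nan")

    before = events[(events["timestamp"] >= CHANGE_DATE - window) & (events["timestamp"] < CHANGE_DATE)]
    after  = events[(events["timestamp"] >= CHANGE_DATE) & (events["timestamp"] < CHANGE_DATE + window)]

    print(f"before change: {share_doing(before):.0%}   after change: {share_doing(after):.0%}")
    ```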

  • Ed Fry (about 2 years ago)

    Yes - at inbound.org I built a model with simplified correlations (you can read about it on my personal blog: http://edfryed.com/blog/quantified-content-strategy-building-a-predictive-growth-model-for-inboundorg)

    The key thing here is to identify your "aha!" moments. For instance, are your users far more likely to take action X (or convert) if they've also done Y? Our complete guide to product qualified leads shares how to do this at any size and stage: https://get.hull.io/complete-guide-pqls/chapter5/
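
    As a rough illustration of that "far more likely" check (not the method from the guide, just a sketch over an assumed per-user table):

    ```python
    import pandas as pd

    # Tiny illustrative per-user table; in practice this would come from your
    # analytics export. Column names and meanings are assumptions.
    users = pd.DataFrame({
        "user_id":   [1, 2, 3, 4, 5, 6],
        "did_y":     [True, True, False, False, True, False],   # e.g. "used templates in week 1"
        "converted": [True, True, False, False, False, False],  # e.g. "became a paying account"
    })

    rate_with_y    = users.loc[users["did_y"], "converted"].mean()
    rate_without_y = users.loc[~users["did_y"], "converted"].mean()

    # A big gap between the two rates marks Y as a candidate "aha!" moment.
    print(f"converted | did Y: {rate_with_y:.1%}   did not do Y: {rate_without_y:.1%}")
    ```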

    With that, you need to be able to tie your product usage into effective segmentation in your CRM and messaging tools, e.g. "New signups who have done X, but not Y", and then keep these segments up to date. This also solves attribution across your complex funnel by tying the data together.
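
    For the segmentation piece, here is a minimal sketch of the "done X, but not Y" filter over assumed user records; a CRM or a tool like Hull would maintain this for you, and the field names are illustrative only:

    ```python
    from datetime import datetime, timedelta

    # Assumed user records; in practice these come from your product database
    # and get synced to your CRM / messaging tool. Field names are made up.
    users = [
        {"email": "a@example.com", "signed_up": datetime(2018, 3, 20), "did_x": True,  "did_y": False},
        {"email": "b@example.com", "signed_up": datetime(2018, 1, 5),  "did_x": True,  "did_y": True},
        {"email": "c@example.com", "signed_up": datetime(2018, 3, 25), "did_x": False, "did_y": False},
    ]

    cutoff = datetime(2018, 3, 31) - timedelta(days=30)

    # "New signups who have done X, but not Y" -- the segment to keep up to date.
    segment = [u["email"] for u in users
               if u["signed_up"] >= cutoff and u["did_x"] and not u["did_y"]]

    print(segment)  # ['a@example.com']
    ```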

    You can see a little more about how this works with Hull here: https://www.hull.io/features/

  • Radhika Dutt (about 2 years ago)

    Hi Daniel,
    Great question. We are working on a toolkit for vision-driven product development. Rather than answering every feature and design question through A/B testing (which of course also has its proper time and place in the process), the toolkit starts with defining a clear vision, then a step-by-step guide to translating the vision into product strategy, and finally a filter for evaluating product strategies (which helps answer your question above).

    This filter is a 2x2 rubric that helps you evaluate features as good vs. bad vision fit and high vs. low sustainability, i.e. how expensive they are to build and how much revenue you expect them to bring in. Something in the high vision fit and high sustainability quadrant is an obvious "Do it!". What's trickier is something with high vision fit and low sustainability, which we call the "Investing in the vision" quadrant. You'd take on these product strategy elements if you can afford it. The counterpart to this quadrant is low vision fit and high sustainability, i.e. the Vision Debt quadrant. These are elements of your product that you add very carefully because, similar to the concept of Technical Debt, if you're taking on Vision Debt you'll need to plan on servicing that debt over time.

    Here's a link to a blog post that talks about this filter: https://medium.com/radical-product/three-diseases-your-product-can-catch-and-how-you-can-prevent-them-77a9500d5f07
    And here's a link to the Radical Product talk and free toolkit: https://www.radicalproduct.com/
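
    To make the quadrants concrete, here is a toy sketch of the filter as code; the feature names and yes/no judgements are invented for illustration and are not part of the toolkit itself:

    ```python
    def quadrant(vision_fit_high: bool, sustainability_high: bool) -> str:
        """Map a feature's two yes/no judgements onto the 2x2 rubric."""
        if vision_fit_high and sustainability_high:
            return "Do it!"
        if vision_fit_high and not sustainability_high:
            return "Investing in the vision (take on if you can afford it)"
        if not vision_fit_high and sustainability_high:
            return "Vision Debt (plan to service it over time)"
        return "Don't do it"

    # Hypothetical features for a CAD-like product, each judged on the two axes.
    features = {
        "collaborative editing": (True, True),
        "offline rendering farm": (True, False),
        "one-off enterprise customisation": (False, True),
        "vanity dashboard": (False, False),
    }

    for name, (fit, sustain) in features.items():
        print(f"{name}: {quadrant(fit, sustain)}")
    ```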
