
John Cutler is keenly focused on user experience and evidence-driven product development.  He mixes and matches various methodologies to help teams deliver lasting outcomes for their customers.

John currently works as the Head of Product Research & Education at Amplitude. As a former UX researcher at AppFolio, a product manager at Zendesk, Pendo.io, AdKeeper and RichFX, a startup founder, and a product team coach, John has a perspective that spans individual roles, domains, and products. 

His infectious enthusiasm has been heard through speaking/teaching engagements at QCon NYC, Agile-Lean Ireland, UXThailand, UXLondon, Front, Oredev, Mind The Product, Agile 2015, and various ProductCamps (Vancouver, Los Angeles, Raleigh NC) and MeetUps (Santa Barbara, Los Angeles, New York). John’s talk on Feature Factories was voted one of the Top 10 Product Talks of 2017.

Mixing in some less-than-typical experiences — driving rickshaws in NYC, and touring the US with “five other weird creative people in a van playing music” — John blogs prolifically about collaboration, product development, diversity, UX research, lean startup, and user experience. Some notable posts include The Evolving Product Manager Role, Persona(s) Non Grata, 12 Signs You’re Working in a Feature Factory, and Stop Setting Up Product Roadmaps To Fail. 

Connect with John on LinkedIn, Twitter, and Medium.

  • AS

    Alex Sarto

    about 1 month ago #

    Hello John,

    1. what would be your favourite books about product development/product research?

    2. how do you "educate" (or prepare) the audience who is NOT problem aware? Do you have any favourite methods for collecting data/testing them?

    Many thanks,
    Alex

    • JC

      John Cutler

      about 1 month ago #

      Hi Alex

      So many books to mention. Three I always find myself recommending (lately):

      1. How to Measure Anything Workbook: Finding the Value of Intangibles in Business

      2. Creative Selection: Inside Apple's Design Process During the Golden Age of Steve Jobs

      3. Principles of Product Development Flow

      (Google for the authors)

      For your second question, who does "the audience" refer to?

  • JD

    Julia De Abreu

    about 1 month ago #

    Hey John! Thank you for taking the time to answer some of our questions. I have three and hope you can answer them:

    1- You mentioned in your bio that you focus on UX and evidence-driven product development. As Head of Customer Education, what KPIs do you usually track? Are these the same KPIs you tracked when you started as Product Evangelist?

    2- How do you see your role as customer education fueling Amplitude’s growth engine?

    3- Does the growth team focus on different growth levers (AARRR) or solely on Acquisition?

    • JC

      John Cutler

      about 1 month ago #

      Hi Julia

      1. They are slightly different. We generally are trying to link learning outcomes to business/customer outcomes. Our North Star is what we refer to as “Learning Organizations” who have had interactions with our various education efforts AND have gone on to expand areas of their usage in-product.

      2. We’ve determined that there are a handful of key influencers at each of our accounts. Education helps those individuals meet their professional goals, which link to their company goals. It is really about taking a personal approach. When those influencers are up-skilled, that acts as a seed for long-term customer success.

      3. Different growth levers at the moment.

      Thanks for your questions!

  • PC

    Pedro Clivati

    about 1 month ago #

    Hey, John - thanks for doing this.

    I see the lack of metric-tracking and/or trust in the data as one of the biggest barriers when teams are adopting the growth mindset and methodology. Going deep in that direction:

    1) When do you think it's the best time to start with a Behavioral Data Tool such as Amplitude?

    2) Do companies need to hire someone specifically to manage events, data, and product KPIs, or should this be incorporated into existing areas of the company? If the latter, which area?

    3) You always talk a lot about the North Star Metric (and I'm personally a big believer in it too) - what's the importance of having one NSM defined and, once it's there, how do you maintain company-wide focus centered on that metric?

    Big fan of your educational work, John - congrats and thanks again.

    • JC

      John Cutler

      about 1 month ago #

      1. Right from the start. There are always things happening in your product that you care about. Even basic counts help with situational awareness. The big mistake teams make is seeing very advanced teams and thinking “we are not ready”. It is a process. Basic event counts lay the groundwork.

      2. Someone specific helps, but at the end of the day the goal is scaling data literacy across the org. Who does this varies by org type … sometimes an analyst, sometimes a data scientist, sometimes a PM, and sometimes a passionate engineer.

      3. I really enjoy the North Star Framework due to the focus it provides. To maintain it … repeat, repeat, repeat. Everywhere. All the time. And the second it stops making sense … change it.

      Thanks Pedro!
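      To make “basic event counts” concrete, here is a minimal sketch of the kind of counting that gives a team its first situational awareness. The event names and records are invented for illustration; nothing here reflects Amplitude’s actual schema or API.

```python
from collections import Counter

# Hypothetical raw events -- names and shape are illustrative only.
events = [
    {"user": "u1", "event": "signup"},
    {"user": "u1", "event": "create_chart"},
    {"user": "u2", "event": "signup"},
    {"user": "u2", "event": "share_insight"},
    {"user": "u1", "event": "create_chart"},
]

def basic_counts(events):
    """Count how often each event fires -- the simplest situational awareness."""
    return Counter(e["event"] for e in events)

print(basic_counts(events))
# e.g. Counter({'signup': 2, 'create_chart': 2, 'share_insight': 1})
```

      Even a table this small answers “is anyone actually doing X?”, which is the groundwork the answer above describes.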

  • GN

    Gustavo Nunes

    about 1 month ago #

    Hey John! Thanks for joining us for this session. I've watched your talk at Product Camp (that was awesome btw) and have two questions regarding the NS guide that you've mentioned:

    1) Amplitude has done an amazing job educating the market about the North Star Metric, its role, and its importance. Can you share your NSM and the process of figuring it out (who was involved, how frequently you update it, how it speaks to the company’s purpose and values, any other insights)?

    2) At a company with 400+ employees distributed all over the globe, how does a North Star metric help align cross-functional teams around a shared mission? Do you work with specific frameworks, like OKRs?

    Thanks!

    • JC

      John Cutler

      about 1 month ago #

      1. Our current NSM is “weekly learning users”. It is a function of activating accounts, creating and sharing insights, and the long-tail consumption of those insights. Tanner McGrath was pivotal in shaping this North Star and in doing some of the research. Of course, if our strategy changes we will change it. I think it very much speaks to our focus, because analytics is not about querying or activity. It is about LEARNING. That is where we focus.

      2. We don’t use OKRs for now, but we could. With an NSM, OKRs are basically your forecast for moving a particular input. The NSF greatly simplifies OKR creation.
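      A “weekly learning users” style metric can be sketched as a simple set computation over events. This is a hypothetical illustration only: the event names and the definition of a “learning” action are invented, not Amplitude’s actual definition.

```python
# Invented "learning" actions -- a user counts toward the metric if they
# performed at least one of these during the week. Activity alone does not.
LEARNING_EVENTS = {"create_insight", "share_insight", "consume_insight"}

week_events = [
    ("u1", "create_insight"),
    ("u2", "consume_insight"),
    ("u3", "login"),            # activity, but not learning
    ("u1", "share_insight"),
]

def weekly_learning_users(events):
    """Distinct users who performed at least one learning action this week."""
    return {user for user, event in events if event in LEARNING_EVENTS}

print(len(weekly_learning_users(week_events)))  # -> 2 (u1 and u2; u3 only logged in)
```

      The point of the sketch is the distinction in the answer above: the metric counts learning, not raw queries or logins.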

  • AR

    Aymée Reis

    about 1 month ago #

    Hey John! Big fan of Amplitude so here are some of the questions that I want to ask you.

    When it comes to growth hacking we know that there's no silver bullet, but could you please share with us one Mission (Objective) that your team was tasked with, and the experiments and tests they ran to achieve that goal (both successful ones and failed ones)?

    Would you say that this was one of your milestones since joining Amplitude? If not, what were they?

    At Amplitude, are teams allowed to fail? Haha. I mean, it would be awesome to run only successful experiments, but we know that life is not that easy and mistakes are important for professional and self-development. Do you have a guess at the ratio of successful to failed experiments at Amplitude? And what's the test/learn growth process?

    • JC

      John Cutler

      about 1 month ago #

      I wish Andrea from our Growth team could have joined. She would be perfect for this. For my team we have been trying many different experiments to see if we can improve data quality through education alone. Many efforts have failed. One workshop did seem to move the needle, but now we have to figure out how to scale it.

      For our growth team, I can imagine it is something like the classic 10% big lift, 20% “something seemed to have worked”, and 70% … fail. This is hard!! The process is fairly similar to other approaches. Frame context. State a hypothesis. Devise the minimally viable experiment. Experiment. Review. Communication is essential. This is part art, part applied science.

  • RC

    Rodrigo Cavichioli

    about 1 month ago #

    Hello John! Big fan (and user) of Amplitude.

    - We know Amplitude has a strong data-centered and test-oriented culture - what are the main benefits such a culture brings to the company's routine, and what advice would you share with the rest of us?

    - Can you open up a little more about the framework used during your growth meetings? Does it happen on a weekly or biweekly basis? And who participates?

    Tks!

    • JC

      John Cutler

      about 1 month ago #

      We have a saying. “Are you shipping faster than you learn? Or learning faster than you ship? Or is it about equal?” We strive for a balance of the two. The advice? Get the loop going. Even if minimal. People read so many blog posts about companies running a crazy number of experiments. That is for blog posts. In the real world, some companies are lucky to just have one stream of solid experiments and an outcome in a quarter.

      We have a layered set of meetings / rituals / artifacts. One big mistake I see is that companies try to cram everything into one meeting or one canvas. For example, we might do bi-weekly learning reviews, quarterly deep-dives, and dailies. We are a big fan of one-pagers and using the Notebooks feature in Amplitude to keep folks focused on the key message.

  • RF

    Ricardo Françoso

    about 1 month ago #

    Hey John! Thanks for being here today.

    1- On your team, who's responsible for coming up with growth ideas? Is it something you encourage everyone in the company to participate in? Do you believe that there's any downside to adopting a growth culture?

    2- Do you guys use any prioritization framework to select the best ideas (such as PIE, ICE, CXL, etc)? Once a hypothesis has been validated, what's the next step in the flow? Do you share the findings company-wide?

    3- What are the frequency and volume of experiments Amplitude runs on a weekly basis, and do you believe there's a correlation between that number and their success?

    • JC

      John Cutler

      about 1 month ago #

      1. We solicit ideas from everywhere, and our growth team (headed up by Andrea Wang) makes the final choices. The downside I can see is when a team iterates to nowhere and doesn’t ask the core questions about the levers of growth. Model the problem. That is a key step. Without that, you drive in data circles.

      2. A variety of frameworks. At the end of the day we’re looking for the highest probability of generating learning, weighed against level of effort. I would add, however, that LOE can miss the point. I love using max batch sizes like 7d. Everything is under 7d. With that, you don’t get caught micromanaging the team and schedules.

      3. I don’t have that exact number. I would add that experiments are layered. Teams that brag about the # of experiments often miss the forest for the trees. Step back to 10,000 ft. What is going on?

  • GH

    Gavin Hope

    about 1 month ago #

    [Continued - I think the last part of my entry was cut off?]

    So, my question. In your experience, when it comes to establishing a North Star, is there a difference between pre-launch and post-launch?

    • JC

      John Cutler

      about 1 month ago #

      Here’s my tip, and it seems crazy simple but it works. You know what you know. A North Star and Inputs are a series of hypotheses. Pre-launch, you have guesses. Make those explicit. Make those clear! That is all you have. The inputs especially should be actionable. Teams can move them. So this naturally limits what you can do.

      When I do the NSF stuff with teams, I always start with “what do we believe?” and “what evidence do we have?” Put it this way … if teams are 100% confident then they are either fooling themselves or they are in a commodity business. So embrace the uncertainty at first.

      • GH

        Gavin Hope

        about 1 month ago #

        Thanks for that reply, and it's nice to hear that you start with "what do we believe?" - as we did that, too. _That_ discussion was super useful. Cheers, Gavin

  • JR

    Jonathan Robertson

    about 1 month ago #

    Hey John, thanks for doing this AMA.

    What advice do you have for product teams looking to improve the trial experience for new users?

    • JC

      John Cutler

      about 1 month ago #

      First, before doing anything, make sure the key steps -- stated from the perspective of customers -- are instrumented and measured. Situational awareness is key.

      Next, if possible, do qualitative research. It really helps.

      Third, don’t look at it as a math problem. Some actions are far more valuable than others. Focus where it matters. Then, when it is time, worry about small optimizations.

      Finally, cohort, cohort, cohort. You want to create an amazing experience for one cohort of users FIRST, before trying to solve the world’s problems.

  • RJ

    Renan Jark

    about 1 month ago #

    Hi, John! Thank you so much for the opportunity.

    Working with some growth teams, we usually classify them as dedicated, cross-functional, and/or mixed-structure.

    Does Amplitude fit one of these structures? Are there teams solely responsible for running experiments, or is that something everyone is responsible for? What would you say are the main advantages of adopting your structure?

    Still on that line, who does your growth team report to?

    • JC

      John Cutler

      about 1 month ago #

      At the moment, everyone is responsible. And my advice? Keep it that way for as long as possible. When you do start to specialize, make sure that person’s role is to SCALE literacy vs. hoard literacy. Finally, at a certain size, start thinking about platforms that help teams do this autonomously.

  • GH

    Gavin Hope

    about 1 month ago #

    Hi John, thanks for the opportunity to ask some questions.

    I have a small amount of experience with the North Star Framework - it's limited to running two, small workshops to get a feel for the process, do some group learning, and talk about product strategy.

    My question is about working on a product that's pre-launch. In this case, the team in the workshop have a good grasp of the value they think the product will offer. They've talked with customers, done a good number of prototype demos, but they're still a way off a v1.0 that they could release into the wild.

    Their focus is on getting to an MVP because they really want to validate what they think is valuable to the customer. Some members wanted to focus purely on getting to that MVP - "once we get there, the customer feedback will inform what we establish as the North Star".

    It seems to me that they know enough to set a North Star, along with some Inputs that will show they're learning about the value the product can offer, once it's released. Getting to that MVP is one step on the way to the North Star. But I'm new to this and I perhaps wasn't convincing enough :)

    So, my question. In your experience, when it comes to establishing a North Star, is there a difference between pre-launch and post-launch?

  • JS

    Jirapol Songvuti

    about 1 month ago #

    Hi John,

    Could you talk more about your latest Substack post, Coherence > Shallow Autonomy? What advice do you have for leading an organization with a coherent strategy?

    Many Thanks!!

    • JC

      John Cutler

      about 1 month ago #

      Absolutely. The key to a coherent strategy is conversation, review, tweaking, and more review. You need to try to poke holes in it. Teams need to ask “what information do I need to make better decisions?” and then either add that information or admit you don’t know.

      The decision test is a great way to test a strategy. If people don’t have that information -- and/or don’t know it is unknown and needs to be learned -- then you have a problem.

      Finally, be wary of speaking about uncertainty with certainty. Do not do certainty theater.
