Oh hey there! I'm Natasha (@natwahid), Marketing Lead at WiderFunnel – a growth agency focused on conversion optimization, customer experience, and personalization. I've been with WiderFunnel for almost three incredible years, and continue to be inspired by the very cool work we get to do with our clients. As a leader on our Growth (Marketing + Bus Dev) team, I spearhead our marketing and lead generation strategies. I'm an experienced content editor, working with our team to craft in-depth and analytical thought leadership pieces, including case studies, blog articles, white papers, video, and original research.

This year, I launched a research initiative which resulted in an original report on "The State of Experimentation Maturity 2018". My team surveyed marketers, product managers, and growth strategists at some of North America's leading brands (Nike, MailChimp, Hotwire.com, and more) to attach data to assumptions about "experimentation maturity."

I was also able to conduct in-depth interviews with many of the respondents, which resulted in a TON of 'a-ha!' moments around experimentation struggles and successes across organizations. I'm thrilled to be here and am happy to answer your questions about content marketing, lead generation strategies, leadership, experimentation and optimization strategy, and creating a culture of experimentation.

I'll be live on June 21 at 9:30 AM PT for one and a half hours to answer your questions!

  • DH

    Dani Hart

    about 2 years ago #

    Hi Natasha,

    Woah, super cool initiative that you launched! I wonder... what's the biggest surprise you encountered in your research?

    Also, if you had to tell someone that's just getting started in experimenting... what are the three things you'd want them to know? Where should they start?

    Lastly, how difficult was it to get participation for your surveys? How did you go about recruiting the marketers, product managers and growth strategists at such big brands?

    Looking forward to learning from your experience!


    • NW

      Natasha Wahid

      about 2 years ago #

      Hi Dani! Thanks so much for your kind words and your questions. I’ll tackle ‘em one by one.

      First – the thing that surprised me the most was actually how many survey respondents were willing to speak to me in-depth about their experimentation programs after all of the data was collected. When we published the report, I reached out to all of the respondents giving them a free copy of the report and inviting them to participate in a follow-up interview. I explained that these interviews would potentially be featured in future content published on the WiderFunnel blog and potentially in external media covering the report.

      I honestly didn’t have huge expectations, but knew it would be hugely beneficial to be able to back up our findings with more qualitative evidence from interviews. To my surprise and delight, almost 10% of the respondents agreed to a follow-up interview. These were in-depth, 60-minute conversations where optimization leaders at some pretty massive companies (and some smaller companies too!) walked me through their pain points, successes, struggles and highlights. Those conversations have influenced follow-up content and resources, and resulted in invaluable insights for me and my team.

      Second – and I love this question – if I were talking to someone just getting started with experimentation, I would advise them to focus on evangelizing the strategy. That means getting buy-in from other departmental leaders and Executives and getting visibility for the program. Making sure those initial tests are 1) conclusive winners or losers that drive learnings (more on that below), 2) visible, and 3) tied to goals that the organization and its stakeholders care about – let’s be honest, “conversion rate” means nothing if your “conversion” isn’t tied to actual revenue impact.

      To my point about ensuring your experiments are conclusive and driving learnings – implementing a solid foundational framework is key. At WiderFunnel, we’ve been refining the process of experimentation for the past 11 years, running thousands of tests with companies across industries. And the process works because it provides a framework that allows optimizers to combine the qualitative, exploratory side of innovation with the quantitative, logical, validating side of innovation.

      The process has steps for gathering evidence from many sources, including customer research, stakeholder interviews, persuasion principles and consumer psychology, digital analytics, a test archive, and more; and it has clear steps for analyzing digital experiences, prioritizing where and what to test first, developing hypotheses that can actually be proven or disproven, and designing your experiments to ensure they provide a conclusive result. We have a ton of resources on the process and the steps, and I’ll leave a couple below:

      - More on the WiderFunnel process for experimentation:

      - More on prioritizing where and what to test:

      - More on creating a provable or disprovable hypothesis:

      - More on proper design of experiments:

      Whether you use the WiderFunnel process, or one of your own making, the key is to put something in place that will act as a guide for your team, that is learnable, repeatable, and reliable.

I would also encourage anyone just getting started to stick with it. When you first start testing, even if you don’t have a framework in place, you will most likely see some quick wins as you take care of low-hanging fruit. But you will eventually hit plateaus, where your ideas aren’t driving any lift. Keep at it. Experimentation is a way of doing business – it isn’t a 6-month project where you can say you’ve optimized your site. It is continuous because your customers and prospects are evolving, technology is evolving, trends and norms are evolving, and there are always, always improvements to be made. Experimentation is what drives the most powerful companies in the world – Amazon, Google, Microsoft, Facebook, to name a few. These companies never stop testing, and neither should you!

Third – I’ll be honest, getting the right people to participate in our survey was one of the biggest challenges we faced. Because the topic is already so specific, and we wanted to target Senior Optimizers at large organizations, we went in knowing we were going to have to be gritty. We decided not to incentivize participation with any monetary gifts – which is something we’re considering doing differently next year – but we did offer advance copies of the complete research report to all respondents, which I found to be an incentive in itself.

      Our team did a lot of personal outreach within the WiderFunnel network and our own personal networks. Personal email outreach and LinkedIn outreach yielded the best results for participation. And because we’re a test-and-learn organization, we were all about continuing to refine our messaging to increase conversion rates (in this case, survey participation).

Our partners at Optimizely also signed on to help us drive participation in the survey, which allowed us to seriously broaden our pool of potential respondents. That’s something I would definitely recommend to anyone planning a research endeavor: You don’t have to go it alone! If you have a complementary partner, get them involved and figure out how to drive participation from both of your audiences :)

      Hope these answers are helpful! Thanks again, Dani!

      • DH

        Dani Hart

        about 2 years ago #

        Wow, what awesome resources! Thanks for sharing. Looking forward to seeing the report!

  • VG

    Val Geisler

    about 2 years ago #

    Hi Natasha!

    I love talking to people who take research seriously. So I wonder: how do you identify the point where you have "enough" data? Is there ever too much? When do you call a reporting group sufficient?

    Interested to hear your thoughts on data groups.


    • NW

      Natasha Wahid

      about 2 years ago #

      Hey Val! Thanks for your question – it's a really good one.

I'll start with an analogy. From an optimization perspective, you can A/B test an experience and/or you can conduct in-depth user or customer interviews. Both methods try to understand behavior to maximize conversions, but the former casts a wide and shallow net, while the latter is a narrower, deeper probe.

There are competing approaches to conducting research like this – none is patently better than the others – and all have pros and cons. For our purposes, we knew our survey was fairly in-depth. And there will always be a trade-off between response rates and depth: the deeper you go in a survey – asking more and longer questions – the lower your response rate will be.

      Fortunately for me, I have access to some incredible data analysts at WiderFunnel. When I decided to move forward with this initiative, I went to our Senior Data Analyst, Wilfredo, to get his input. The result of that conversation was this:

      Unless you're doing significance testing (or applying some other formal procedure), there are no hard and fast rules for when you've collected enough data. For me, the most important thing was to be very clear within the research report about our methodology, the number of respondents, and the limitations of our research.
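To give a rough sense of what "enough" can look like when you do want a formal check, here is a minimal sketch (an editorial illustration, not part of the report's actual methodology) of the normal-approximation margin of error for a survey proportion:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a ~95% confidence interval for a proportion,
    using the normal approximation (reasonable for moderately large n)."""
    return z * math.sqrt(p * (1 - p) / n)

# The worst case is p = 0.5; with 200 respondents the interval is
# roughly +/- 7 percentage points:
print(round(margin_of_error(0.5, 200), 3))  # → 0.069
```

Because the error shrinks with the square root of n, doubling your precision requires quadrupling your sample, which is exactly why survey depth and response rate always trade off against tight error bars.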

      To your second question – no! There's really no such thing as too much data, as long as the quality of the data itself and the data analysis isn't compromised.

      To your third question – when do you call a reporting group sufficient – again, there are no real hard and fast rules.

      In this case, the greatest insights come from the combined “big picture” overview plus the in-depth interviews. Since responses and interviews came from people at companies that are actively using experimentation to guide their operations, we know that they work and represent actual successes. It’s up to the reader (given their context) to determine the extent to which these insights are sufficiently representative for them.

      Hope this response is helpful, and thanks again for the question :)

      • DH

        Dani Hart

        about 2 years ago #

I see a lot of teams struggle with this, and I think you did a good job of breaking down what teams should be focusing on. The first step is knowing where you are on the spectrum, then starting there instead of trying to become "mature" overnight.

      • VG

        Val Geisler

        about 2 years ago #

        I think something you pointed out at the end there is really significant. "It's up to the reader (given their context) to determine the extent to which these insights are sufficiently representative for them."

Maybe, at least for me, it's less about worrying whether there's "enough" data and more about leading by communicating the point you made: guiding the reader in how they interpret what they read, and making sure they know it's a piece of the picture and can never really be "comprehensive".

        I appreciate your answer and time, Natasha!

  • AG

    Ashley Greene

    about 2 years ago #

    Hi Natasha,

    Love the focus on research, especially original research! I'm kinda a research geek. :)

    I can't wait to read the report, but what is the #1 biggest takeaway from the research on the maturity of experiments in 2018? Are we talking strictly A/B testing or big product changes too?

    What was your biggest challenge in collecting the research?

    And lastly (sorry for all the questions), what form of B2B content do you find is converting best right now, both for yourselves and clients?

    • NW

      Natasha Wahid

      about 2 years ago #

Hi Ashley! Thanks for your question – I’m also a research geek (it’s that pesky journalism background).

      For me, the most interesting insight from the report came when we found a correlation around resourcing for experimentation. I’ll explain – once we had collected our survey responses, we used that data to develop 5 levels of experimentation maturity: Initiating, Building, Collaborating, Scaling and Driving. An organization falls into a certain level based on their maturity within 5 factors: process, accountability, culture, expertise, and technology. You can read more about all of this in the report itself >> https://www.widerfunnel.com/experimentation-maturity-research-2018/

      Within the expertise category, we found that organizations in the earliest stage of experimentation maturity were highly focused on resourcing for web development and QA. This makes total sense – these skill sets are important to ensure experiment variations can be coded and tested for quality pre-launch, and that winning variations can be hard-coded quickly.

      However, the more mature organizations in our report – at the Scaling level – were equally focused on hiring experts for experimentation strategy. Respondents in this category reported double the average number of team members involved in “Experimentation Strategy” relative to any other maturity level.

      This focus on strategy indicates that mature organizations are hiring experts who have a combination of strong product management basics – the ability to define requirements, collaborate with teams and stakeholders – as well as experimentation and analytical rigor – the ability to interpret test results properly, estimate test durations, and own the science of experimentation.

      This speaks to the fact that experimentation is not easy. Scaling a program requires skill and a strategic mindset!

      To your second question – I believe you’re asking about the meaning of “experimentation” in this context (but correct me if I’m wrong!) IMO an organization that is mature in experimentation views experimentation as its core growth strategy. This means that every team is testing. The marketing team may be running A/B/n tests (and using factorial design, and MVT, depending on the context and circumstances of the experiment) on digital, customer-facing experiences and product owners are testing every aspect of their products. Experimentation is guiding business decisions.

      We spoke to a lot of product managers for this report – many view experimentation as the product strategy. One of the respondents I spoke to who works for a large financial enterprise, said “we really view experimentation as how product management should happen.” And I happen to agree!

      The biggest challenge in collecting our research was getting the right people to participate. I think anyone who creates original research would probably tell you the same thing! Because we were going after a targeted audience, and dealing with a targeted subject, our pool of respondents was limited. As I mentioned above, getting Optimizely on board to support the outreach was hugely helpful. Honestly, it was a determined team effort and, in the end, personal and determined outreach got us the number of respondents we were aiming for!

To your last question – this research report has far and away been our highest converting piece of content. We are all drawn to research. To benchmarks and trends and data that allow us to compare our organization to other organizations. And, shout out to my design and content team, because this piece of content is visually beautiful AND super fleshed out. We didn’t just want to give readers charts. Including context and actionable takeaways was important to us. This report became a very useful piece of content, and people continue to download it. Plus! Original research is something you can usually get the media on board with covering because it’s a story in itself. As long as your findings are story-worthy, original research can create buzz around itself :)

      • DH

        Dani Hart

        about 2 years ago #

        A JD and a journalist love research? Hard to believe ;)

Also, love that personal outreach was part of this. I think people are much more responsive to a human with a passion for the subject than to a mass email. Awesome work!

  • SC

    Simon Cho

    about 2 years ago #

    Hi Natasha,

    What are some solutions to some of the barriers that you addressed in your research? More specifically, if there is a case where "no one is really driving this ship", what are some ways to either:
    - drive the ship myself?
    - initiate/expand an experimentation culture in such a way that the ship is being driven?

    In addition, what can WiderFunnel do to help drive this ship?

  • CC

    Collin Crowell

    about 2 years ago #

    Hi Natasha, Thanks for all your time and input. When it comes to building a culture of experimentation, what sort of management setup should an enterprise company take for its international offices? How best to get HQ and local teams to play on the same team?

  • PH

    Pradyut Hande

    about 2 years ago #

    Hey, Natasha!

    Great to have you here!

    Experimentation constitutes the foundation for Growth Marketing. It is easier to kickstart such projects in smaller and more nimble organizations. How do you go about creating an environment/culture of experimentation in larger companies where internal stakeholder buy-in can be a challenge?

    Look forward to hearing your thoughts on the same!

    • NW

      Natasha Wahid

      about 2 years ago #

      Hey Pradyut!

      It’s really great to be here – thanks for your kind words and your question! This is a big question for A LOT of optimizers. Big enough that our awesome Content Creator, Lindsay Kwan, just wrote a massive blog post all about creating a culture of experimentation, leveraging what we learned in our post-research interviews. If you have time, you should dig into the post (https://www.widerfunnel.com/scaling-a-test-and-learn-culture/) cause it’s super in-depth, but here are my high-level thoughts!

1) Short answer to your first point is “yes” – it is generally easier to kickstart experimentation in smaller, more nimble organizations. The biggest issue with website testing (which is *not* the only method of experimentation, but is usually a primary starting point) when you’re a smaller organization is ensuring you have enough website traffic to complete your tests with statistical confidence. But! There are plenty of orgs that have tons of traffic and a small team. And it is often much easier to implement a culture of experimentation in these types of companies because they are often already living and breathing digital and agile practices. Experimentation often folds right in – it makes sense.
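On the "enough traffic" point, a back-of-the-envelope estimate can make the constraint concrete. This is a sketch using the standard two-proportion normal approximation (an editorial illustration – the exact numbers depend on your testing tool's statistics):

```python
import math

def samples_per_variation(p_base: float, mde: float,
                          z_alpha: float = 1.96,  # two-sided, alpha = 0.05
                          z_beta: float = 0.84    # ~80% power
                          ) -> int:
    """Approximate visitors needed per variation to detect an absolute
    lift of `mde` over a baseline conversion rate `p_base`."""
    p_var = p_base + mde
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Detecting a lift from a 3% to a 4% conversion rate takes roughly
# 5,300 visitors per variation before the test can conclude:
n = samples_per_variation(0.03, 0.01)
```

Halving the lift you want to detect roughly quadruples the required sample, which is exactly why low-traffic sites hit this wall so quickly.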

      We spoke to Ralph Chocklac, Director of Product at Student Brands – an education technology company – and he explained how experimentation fits right in and makes decision-making much easier:

      “Decision-making becomes extremely easy once your experiments start revealing real user data. You’re no longer sitting in the boardroom making decisions based on gut feeling. The conversation shifts to data and hypotheses, and any idea that comes up is suddenly a candidate for an experiment. There is no HiPPO syndrome, because everyone realizes that it’s actually easier to do your job if you are running experiments and letting the data guide the way.”

      But this isn’t to say that you can’t champion experimentation at an enterprise organization and build a culture that celebrates failure. Plenty of optimizers at larger, more conventional enterprises are doing just that.

      To the specifics of your second question – How do you go about creating an environment/culture of experimentation in larger companies where internal stakeholder buy-in can be a challenge – it helps to ask yourself: What is success for my senior decision-makers?

      Start by finding out how they’re incentivized so you can show how experimentation will help them reach their goals. If you can help them look (and get paid) like rock stars, they’ll support your projects and reward you in return.

      You can also appeal to the rational support they need by building a business case for experimentation. Show the lift that other organizations are getting – there are tons of case studies out there – and estimate the return on investment (ROI) for an experimentation strategy.

Sometimes, you’ll have stakeholders or Executives who are really difficult to convince, even when you show them the numbers. In these cases, there’s something to be said for creating internal buzz around experimentation and building some grassroots excitement. Lindsay described a framework for this in the post I mentioned above:

      Inspire: Of course, you have to make a business case for experimentation, but it’s also important to make an emotional appeal. You want to develop key messages about your program that incite enthusiasm. Storytelling, learnings, and insights can be a powerful motivator for your team to adopt the experimentation mindset. Even if you’re not able to inspire your Executive team directly and immediately, you can work to get your team, your department, and other departments excited about experimentation. Once this starts happening, Execs will take notice.

      Inform: To the point above, make sure your experimentation program is visible within your organization. Spread the word using the channels that make sense within your company culture. Posting experiment stories, learnings, and insights from your tests – on an internal intranet, on a physical corkboard, in a newsletter email, etc. – can lead to deeper engagement with your program.

      Involve: Get people to participate in experimentation in some way! This doesn’t always mean sourcing ideas from other departments (often, teams have too many ideas already). But you can create an environment where everyone has some stake in the results. For example – Lauren from MailChimp gamifies experimentation within a designated Slack channel where people throughout the org can vote on variations, and win prizes if they select the winner.

      Iterate: If you’re testing, you understand the concept of iteration. But we sometimes forget that we can iterate our processes and communications plans themselves. You’ll want to revisit each part of the communications framework (inspiring, informing, involving, and iterating) as your experimentation program matures. It’s always a good idea to evaluate your strategy for evangelizing experimentation to see if your messages and channels are sticky.

      I hope this was helpful, Pradyut!

      • DH

        Dani Hart

        about 2 years ago #

        Super helpful and I love the point about incentives for senior level buy-in... I've personally seen this work pretty well.

      • PH

        Pradyut Hande

        about 2 years ago #

        This is really helpful! It has certainly given me a new perspective on securing buy-in from senior level stakeholders in my endeavour to drive home a culture of experimentation.

        Thanks so much!

  • SE

    Sean Ellis

    about 2 years ago #

    Hey Natasha, thanks for doing this AMA with us. Did your research reveal any insights into what prevents companies from running more experiments?

    • NW

      Natasha Wahid

      about 2 years ago #

      Hey Sean!

      Thanks so much for having me – I’m psyched to be here.

      No major surprise, here, but many organizations we spoke to feel that they can’t experiment as quickly or as frequently as they want to because of resource constraints. Design, development, and data analysis resources are still often shared, split between other priorities, creating bottlenecks. We did find that 76% of large enterprise organizations surveyed have a dedicated optimization team or teams, but these are orgs with 1,000+ employees. For small to medium enterprises that number was only 48% – keeping in mind that those surveyed are orgs that are actively running experiments.

      A big roadblock to both velocity and quality, especially in larger organizations, is a lack of an experimentation protocol and organizational structure to support experimentation.

For larger enterprises, there was sometimes a sense of “no one’s really driving this ship”. What I mean is that, even for orgs that have the resources, strategy and leadership of that strategy are sometimes lacking. Often, organizations don’t have a unified protocol or central body that owns experimentation and its overall KPIs – things like experiment velocity or test neutrality rate – metrics that indicate the overall health of the program itself.

      These organizations might be running a lot of experiments, but that doesn’t mean these experiments are working together and pushing the organization forward. Some teams might be testing frequently, but without proper experiment design, or out of line with the rest of the organization. Some teams may be testing infrequently or not at all. Experiments are running, but they aren’t connected and insights aren’t being shared. In this scenario, the same ideas may be getting re-tested, learnings may be difficult to access, silos may form, and the whole program struggles.

      We found that more mature organizations have developed combination organizational models for experimentation. While a decentralized program might enable speed of testing, without proper standards and central oversight, there is always a risk that teams will clash, cannibalize each other’s goals, and pull in different directions rather than pushing the organization in one general direction.

      A central body will most likely have the holistic customer journey in mind, and can prioritize and deliver experimentation campaigns that focus on the most important parts of the business. But it’s unwise to assume that a central body will have the depth of expertise to experiment on every part of the business.

      A combination model attempts to combine the best of both worlds – a central body that owns the protocol, standards, and an insights database and enables individual teams and product owners to run experiments based on their own expertise (and feed those insights back into a central, accessible database).

      Hope this answer is helpful! Would love to hear your thoughts as well, Sean!

  • NW

    Natasha Wahid

    about 2 years ago #

    Hi everybody! My "live" time is up (which is totally insane – that was SO fast). Thank you, thank you for your questions. I will do my very best to answer all of them in the next few days. Hope this has been helpful – would love to keep these conversations going and am more than happy to share resources where needed. I'll see all of you around the community :)

  • JP

    John Phamvan

    about 2 years ago #

    Hi Natasha,

    Thanks for joining us today!

What does your marketing stack look like at WiderFunnel? Are there any areas of your marketing "process" that you feel are difficult to navigate?

    Along those lines... what systems are you using to track customer data?

    Looking forward to learning from you!


  • AA

    Anuj Adhiya

    about 2 years ago #

    Hey Natasha - very cool to have you on.

    1. From your research, were you able to see any trends on "easy" things that (most) growth teams are not doing (but really should be)?

    2. What would you say is the most unique thing WF does from a marketing/growth perspective?

  • JB

    Josh Brown

    about 2 years ago #

    Hey Natasha,

    Thanks for taking the time out to do this Q&A.

For smaller-stage companies without a lot of resources, what's the best way to approach experimentation to see meaningful success?

    Also, would you be able to share a few of the 'a-ha' moments from the interviews?



  • ES

    Emil Shour

    about 2 years ago #

    Hi Natasha,

    Thanks for taking the time to answer questions here!

    Couple of questions for you:

    1) What lead gen strategies are you seeing becoming a lot less effective in B2B marketing?
    2) On the flip side, what strategy are you seeing becoming more effective in B2B marketing?
    3) Can you tell us a little bit more about the structure of your Growth team?
    4) What has been your most important failure?

  • SK

    Suchindra Kala

    about 2 years ago #

    Hi Natasha,

    Thank you very much for doing this AMA.

    I have one question for you -

    What are the top questions you would ask a company/business that is starting an experimentation program to understand their needs and goals?


  • MH

    Monica Hinch

    about 2 years ago #

    Your company has a lot of high quality blog posts. How do you leverage those to generate business effectively?