AMAs

I'm Kyle Rush (@kylerush), Head of Optimization at Optimizely. Ask me anything! As the Head of Optimization at Optimizely I create experiments for all parts of the business that aim to increase revenue, customers, product usage, and more. Previously I worked on the Obama 2012 campaign doing much of the same thing as Deputy Director of Frontend Web Development. I've run hundreds of experiments and learned some very interesting things, so please, ask me anything :) My personal blog is http://kylerush.net and I also tweet about optimization at https://twitter.com/kylerush

  • MB

    Morgan Brown

    over 2 years ago #

    Hey Kyle, thanks for doing the AMA. Can you talk a little about how companies should think about prioritizing their tests, and then making tests an ongoing part of their strategy?

    I talk to so many people who test a couple of buttons or headlines and then don't know what to do/test next. What's your advice for them?

    • KR

      Kyle Rush

      over 2 years ago #

      Hey Morgan - Great question. This is something I think we all struggle with.

      In terms of prioritization, it's important to be very analytical. At Optimizely, it usually takes us about two weeks before we make a decision on experiment results. Compared to the Obama campaign (which had more traffic) we have a pretty high opportunity cost for running an experiment, so we do our best to make sure it's a good one. One of the best tools for determining the potential ROI of experiment ideas is to look at past experiments. If you got an increase the first time you changed your headline, but the next three attempts failed, move on from headline tests for a bit. I have found it a lot harder to find the magical small change that produces a big effect. So I would say test your big ideas first if possible. Obviously there is no statistical truth to this, but in my experience, bigger changes have a higher likelihood of reaching significance.

      There are a lot of ways to make testing an ongoing part of a strategy. I think the single best thing to do is to allocate headcount towards it. If you can swing it, hire a dedicated optimization person. If you can't, evangelize to everyone on the team (designers, engineers, product managers). Let everyone run an experiment and then hopefully everyone will see how mission critical it is. Lastly, when you have a winner, spend a lot of time analyzing what the effect was on the bottom line of the company. If it's a beneficial effect, trumpet the success to the entire company. Bring it up in meetings, etc. Chances are nobody will be able to point to another change that can be tied to the bottom line of the company as easily as a winning experiment.

  • PL

    Peep Laja

    over 2 years ago #

    What's your take on running multiple simultaneous tests on a single page? Multiple tests on the same site, but with the same goals, and overlapping audiences (e.g. test on a home page, product, category, cart, checkout all at the same time)?

    When talking to different statistics people, I get different takes.

    Like these two opinions differ:

    http://www.maxymiser.com/resources/blog/running-multiple-tests-simultaneously

    vs

    https://help.optimizely.com/hc/en-us/articles/200064329-Simultaneous-Testing-Running-two-different-tests-at-the-same-time

    • KR

      Kyle Rush

      over 2 years ago #

      Fantastic question. This is something that I would like to see discussed more in the community.

      I think the bottom line is that nobody can answer this question for you. If you want to know whether a change in experiment A affects a change in experiment B while both are running at the same time, the answer seems to be "it depends". It depends on what the changes are. The cleanest approach will always be to run only one test at a time. That being said, if you're experienced in optimization, fairly confident that the effects won't interact, and willing to take the risk, go for it.

      Alternatively, you could run two tests at once, then run each of them individually and see if the results match. Sure, it will take a lot longer than just running the tests individually, but you might uncover something very valuable in the long term. If you decide to do this, please write about it and let me know!

  • SE

    Sean Ellis

    over 2 years ago #

    Thanks for doing this AMA @kylerush ! What is one of the most successful tests that you’ve run and what were the inputs that led you to want to run that test (or was it just a lucky guess)?

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Sean -

      Thanks for having me!

      There's many ways to define successful, so I'll provide a few examples.

      The most successful test was actually a false positive in disguise. On the Obama campaign's donation form we tested removing dollar signs from the suggested amounts. Oddly enough, the idea was inspired by high-end restaurant menu psychology. High-end restaurants often remove dollar signs from the prices on menus, which usually results in you spending more money because you're thinking about the menu in a context less about money. The results from this experiment showed a 40% increase in revenue. As elated as we were, that's an extremely hard number to believe. So we tested it two more times and only one of the three tests was significant. Turns out the visitors we sampled in the first test were somehow heavily biased towards the variation. We were a lot more careful with our sampling after that. It was a hard pill to swallow realizing that we didn't improve revenue by 40%, but it was an incredibly important lesson that really helped us in the long run. It's the most valuable learning I've ever come across.
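
      The retest-and-verify discipline described above can be sketched with a simple two-proportion z-test. The numbers below are hypothetical, not the campaign's actual data; they just illustrate a large first "win" that fails to replicate:

```python
import math

def two_sided_p(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test; returns the two-sided p-value."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

# Hypothetical runs: control vs. variation conversions out of 10,000 visitors each.
runs = [
    (520, 10000, 640, 10000),  # first run: looks hugely significant
    (520, 10000, 535, 10000),  # replication 1: not significant
    (520, 10000, 528, 10000),  # replication 2: not significant
]
significant = [two_sided_p(*r) < 0.05 for r in runs]
print(significant)  # only the first run clears the 0.05 bar
```

      When only one run out of three clears the significance bar, treating the first result as a durable 40% lift would be exactly the mistake the answer warns about.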

      The most successful in relation to the size of the change was an experiment, also on the Obama campaign, where we changed a few words in a headline. The original read "Save your payment information for next time". The variation read "Now, save your payment information." This gave us a 21.4% increase in visitors who saved their payment information. That's a huge win considering visitors with saved payment information donate four times as often and three times as much money! We came up with the idea to prepend the headline with "Now, " because it made the ask feel more connected to the previous call-to-action (donate). This was simply the result of a brainstorm.

      The single biggest lift I believe was when we tested giving away a free magnet (the control) or a set of three bumper stickers (the variation) for a donation on the Obama campaign. The set of bumper stickers increased the donation conversion rate by 137%. This idea was also the product of a brainstorm. We had a hypothesis that supporters liked bumper stickers more than magnets because more people can see what's on your car than what's on your fridge.

      • SE

        Sean Ellis

        over 2 years ago #

        Awesome, thanks so much for sharing your lessons learned. Great heads up on the false positives too. It takes a lot of discipline to uncover that truth rather than just accepting the first results.

  • AL

    Angelo Lirazan

    over 2 years ago #

    Hey @kylerush thanks for doing this AMA!

    More of a personal question, how did you get started in optimization and what keeps you in the business?

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Angelo -

      I started my career as a frontend web developer and for several years held various engineering roles. I've always loved technical work like this because it challenges me. At some point I ran my first experiment with Google Website Optimizer (back in the day!). I was hooked after that first experiment. I found it immensely more challenging than what I was doing as an engineer. Optimization is very broad and it requires you to learn a lot about the business, not just one aspect of it, to be successful. In my role at Optimizely I have to understand the enterprise B2B sales flow, user psychology, marketing, engineering, and product design. It's very challenging and that's what I look for in a job.

  • RS

    Ross Simmonds

    over 2 years ago #

    Hey Kyle - What would you recommend to someone just starting in the world of optimization? What should they learn, read or do ASAP?

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Ross -

      I actually would recommend that someone starting out just go for it. Start simple of course, but the point is not to feel like you have to be an expert to start. I have learned a tremendous amount from my own mistakes. There was next to nothing published about optimization when I started (save for one blog post from 37 Signals about an a/b test they ran). Be fearless.

  • DM

    demetrius michael

    over 2 years ago #

    @kylerush - What are the gotchas with Optimizely that you feel everyone should know about? I love the software, but sometimes Optimizely feels like Optimisticly.

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Demetrius,

      I'm sure the product folks would love to hear about what you find to be shortcomings in the product. Contact your rep or reach out to me separately and I'll put you in touch with someone (kyle at optimizely dot com).

      For me, the biggest gotcha has always been statistics. It has never been my specialty and quite frankly, it never will be. I keep myself up to date on best practices and make friends with statisticians when I can. The good news is that today we just released the Optimizely Stats Engine. With our new Stats Engine, you can trust that your results are always valid. No need to calculate sample size in advance. This puts the stats on auto pilot for you. This is a huge improvement. If you want to learn more (and even see the math behind it - we're very transparent about that) go to https://www.optimizely.com/statistics

      I don't really have other gotchas. Obviously you'll learn the tool better the more you use it. This is the case with any tool. If you'd like to be more proactive about things, check out community.optimizely.com. There are hundreds if not thousands of pages of community discussions and knowledge base documentation. There is a section in the community called "Using Optimizely" that I feel you might find particularly helpful :)

  • DM

    demetrius michael

    over 2 years ago #

    @kylerush - What are your favorite [re]sources to get CRO ideas? (And thanks for doing this AMA, it's truly awesome to have guys like you here)

    • KR

      Kyle Rush

      over 2 years ago #

      I get asked this question a lot. Here are some resources that I would suggest (though you may already be aware of them - I don't really have secrets here):

      1. Optiverse (Optimizely's optimization community): https://twitter.com/morganb/status/557619322156511233

      2. Growth Hackers (obviously)

      3. Twitter search for #cro

      4. Follow companies like Optimizely and HubSpot on Twitter (and read their blogs)

      5. Follow me on Twitter :) @kylerush

  • AH

    Alex Harris

    over 2 years ago #

    Hey Kyle, this may be a loaded (support) question, but I'm new to advanced testing on your software and have yet to find an answer. Here it is...
    I do CRO for ecommerce and want to roll up many product pages into one test.
    We tried to make URL targeting work to show the variation for multiple product pages.
    Everything we tried loads only one product page on all the pages though.

    I tried support, but maybe you can point me in the right direction to understand more about my scenario.

    • PL

      Peep Laja

      over 2 years ago #

      Alex I can help you. Email me

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Alex -

      No worries! Hopefully I can help you. It seems like you may be trying to do one of two things:

      1. Create an experiment for multiple pages on your site. If so, Optimizely allows you to do this. Check out the knowledge base article on multi-page experiments: https://help.optimizely.com/hc/en-us/articles/200040155-Multi-page-Funnel-experiments

      2. You might be trying to roll-up several similar pages so that the change affects them all. You can also do this, but it often takes an engineer to write a regular expression for you to use in the targeting conditions. If this is the case, I would recommend finding an engineer at your company and asking for help with your experiment.
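
      As a hedged illustration of (2), the kind of regular expression an engineer might write could look like the following. The domain, scheme, and URL structure here are placeholders, not Alex's actual site:

```python
import re

# Hypothetical: match every product detail page under /products/,
# e.g. /products/blue-widget or /products/42, but not the /products/
# listing page itself or unrelated pages.
product_page = re.compile(r"^https?://(?:www\.)?example\.com/products/[^/?#]+/?$")

urls = [
    "https://www.example.com/products/blue-widget",
    "http://example.com/products/42/",
    "https://www.example.com/products/",   # listing page: no match
    "https://www.example.com/cart",        # unrelated page: no match
]
print([bool(product_page.match(u)) for u in urls])
```

      A pattern like this, dropped into a regex targeting condition, rolls all product detail pages into one experiment while excluding everything else.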

      I hope that helps!

  • SL

    Samuel Lee

    over 2 years ago #

    Hey Kyle! I'm a completely non-technical growth hacker. Sometimes it gets in the way for growth hacking. How necessary do you think it should be for people like us to be skilled technically? How do you recommend starting?

    Any recommendations on tools to use for non-technical people?

    Thanks!

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Sam -

      I can totally see that getting in the way. I think a lot of the successes that I've had are because I have a technical background.

      How necessary this is will depend on the environment in which you work. If you have dedicated engineering resources then you can be less technical. I would always encourage people to become more technical. Sure, it takes time, but it's time very well spent.

      The best thing, if you have the time, is to enroll in an accelerator course like General Assembly. If you can't do that, then you can do it like I did and teach yourself. I didn't take any courses, I just used Google and engineering books. The nice thing about web dev/engineering is that there are a ton of resources online to learn. And also great Q/A forums like Stack Overflow.

  • PL

    Peep Laja

    over 2 years ago #

    Hey
    Does Optimizely have any plans to add bandit testing? Are there any circumstances where you'd prefer to run a bandit test?

    • KR

      Kyle Rush

      over 2 years ago #

      Hi there -

      We have the feature built, but we're still seeking user feedback (I believe). I've used the Optimizely feature on one of my own experiments. If you'd like to use it, reach out to your Optimizely account rep, or send me an email (kylerrush at gmail dot com - TWO Rs) and we'll see what we can do.

      I tend not to use bandits for my experiments. Out of all the experiments that I've run, I can only think of one that was a good fit for bandits. Generally speaking, bandits are good if you are very risk averse. I've never been risk averse, and neither have any of the companies I've worked at. That being said, if you're releasing a fundamental change, perhaps it makes sense to do it with auto allocation (aka bandits). For example, on the Obama campaign we deployed a new version of our donation API. We had the sense to a/b test it against the old version, and eventually that showed us that the new version performed nearly 50% worse. Since the effect was so large, bandits could have discovered that earlier and allocated traffic to the original donation API without our intervention. If I could do that experiment over again and the feature had existed then, I probably would have used bandits. In other cases, I've found that bandits just slow down the time it takes to reach significance, which is why I avoid them. Simply put, I'd rather identify a loser faster than take longer to identify a winner.

      • JC

        Jon Correll

        over 2 years ago #

        Kyle is hitting some really great questions today. Also, your specs on your new Stats Engine are awesome. Congrats!

  • AA

    Anuj Adhiya

    over 2 years ago #

    Hey @kylerush - thanks for doing this
    In your experience, have there been times when irrespective of the result of the test you've implemented something else?

    If yes, is there any pattern you've noticed to when you've found yourself doing this?

    • KR

      Kyle Rush

      over 2 years ago #

      Really good question, Anuj. Thanks for posting it.

      The answer is yes. This typically happens when a company is making a long term bet that doesn't necessarily agree with the short term. Let's say the company decides they need to change their positioning to satisfy a long term product direction. You might a/b test this new positioning on the homepage and it might lose. This doesn't mean that the new long term product direction is bad. You might have arrived at this result for a number of reasons. Maybe the new positioning is not well defined, maybe it's not presented well. But in this case, it would be unwise to change the entire long term direction of the product based on one a/b test. Long term product direction is usually based on a body of evidence, and the a/b test on the homepage is only one data point.

      Another example is actually one that happened to us at Optimizely. In the past we treated the homepage like an ad landing page. That is, we optimized it for conversions and that's all. Well, after investigating some qualitative feedback we realized that it takes a lot more discovery of the product than just "Test it out" for people who don't understand the product, and particularly for high dollar enterprise deals. In essence, the homepage is not just a landing page, it is a door into product discovery. For a long time we removed all marketing content from the homepage and had only the "Test it out" form. This had the best conversion rate, but led to very poor product discovery, which showed in the qualitative feedback. We were often told "Why would I try it if I don't know what it is?" This might work for Facebook since everyone knows what that is, but not everyone knows what Optimizely is. Not yet at least :) We have since added some product marketing to the homepage and will continue to iterate. It's a hard thing to decide on and there will always be room for improvement. It's a balancing act, but it's worth noting that "blindly follow experiment results" is not always the best answer. You always want to step back and think critically.

      • AA

        Anuj Adhiya

        over 2 years ago #

        Thanks @kylerush - makes a lot of sense - especially your last statement about stepping back and thinking critically.

        Appreciate you taking the time to respond.
        Cheers!

  • ET

    Everette Taylor

    over 2 years ago #

    Hey Kyle, great to have you.

    What were the key CRO lessons learned while working for Obama for America?

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Everette,

      The pleasure is mine! Great question. Looking back, I'd say the key lessons were:

      1. Start simple to get acclimated

      2. Eventually your Optimization roadmap should look like a conversion funnel. Start off by fundamentally changing your conversion funnel (i.e. change the call to action, change the steps in the funnel, etc.) to see what works best. Once you have an answer to that, run experiments to fine tune the winning conversion funnel. Change the big stuff, then change the little stuff.

      3. This is obvious, but the more experiments you run, the more you learn. That is to say, do everything you can to test and keep testing. It's easy for things to get in the way of running experiments. Try your best to clear the roadblocks. I try to always have one experiment running on every key conversion funnel.

      4. Creativity is by far the hardest problem in optimization. It is impossible for one person to come up with all the good ideas. Be humble and seek dissent and ideas from anywhere you can get them. I'm not above asking my mom, who spends 0% of her day on a computer, to look at a landing page and give me feedback. I ask coworkers who don't work on optimization at all for experiment ideas. There are great ideas everywhere, you just have to listen.

      5. It's important to get buy-in from your coworkers to legitimize a testing program internally. For example, designers are often thought to be against experimentation because it supposedly removes creativity from the process. This isn't true! Work with designers to enable them to create their own experiments that answer their creative questions.

  • DL

    Dylan La Com

    over 2 years ago #

    Congrats on Optimizely's new stats engine @kylerush! I'm excited to check it out when it's released.

    My question is: In your opinion, what is the most important factor that leads to success in an optimization process?

    • KR

      Kyle Rush

      over 2 years ago #

      Hey Dylan -

      I suppose the most important factor is determination. Most of the companies I've talked to view optimization as a side project with no clear owner. If that is your view it should come as no surprise when the optimization program fails to produce improvements. Since most companies don't dedicate an owner and allocate resources it can take a lot of determination from a single individual to prove the value.

      If anyone reading this is currently in the situation described above, I would say take the bull by the horns. Start off by spending 20% of your week doing optimization. Don't let up. You'll get a lot of losing tests, but hopefully you'll get some winners. Then talk about the winners in your organization and prove that more time needs to be spent on optimization.

      In the end it's up to you :)

  • MB

    Morgan Brown

    over 2 years ago #

    Kyle, I'm asking another couple of questions here, b/c your first answer was so insightful.

    1) What do you do with inconclusive tests? Keep the new variant? Keep the old variant? Come up with another variant?

    2) How important are A/A tests in your opinion? Should they be run regularly?

    • KR

      Kyle Rush

      over 2 years ago #

      Glad you're getting some value out of this!

      1) For inconclusive tests I almost always keep the control in production. The reason for this is pretty simple. Let's say you calculate your sample size so that you can detect an 11% effect in the conversion rate. Once you hit that sample size, the result is not significant. The harm in deploying the variation is that the test was only powered to detect effects of 11% or larger, so a smaller negative effect could slip through undetected. I would rather reduce my risk of unintentionally lowering the conversion rate by, say, 8% and keep the control in production. Granted, it could be the other way around (the control performs 8% worse than the variation), but at least the control is what you started with, whereas going with the variation means you made a change that could reduce the conversion rate.
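
      The fixed-horizon sample-size arithmetic behind this reasoning can be sketched as follows. The 5% baseline conversion rate, alpha of 0.05, and 80% power are assumptions for illustration, not numbers from the thread:

```python
import math

def sample_size_per_arm(baseline, relative_mde, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect a relative
    lift of `relative_mde` over `baseline` with a two-sided z-test.
    Defaults: two-sided alpha = 0.05 (z=1.96), power = 0.8 (z=0.84)."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Visitors per arm to detect the 11% relative lift mentioned above,
# assuming a 5% baseline conversion rate.
print(sample_size_per_arm(0.05, 0.11))
```

      Note how quickly the required sample shrinks as the detectable effect grows, which is one reason big, bold changes are cheaper to test than subtle ones.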

      2) I've really only used A/A tests to illustrate the point of sample size. Most a/b testing platforms out there use a traditional frequentist approach whose results can vary wildly based on sample size (amongst other things). It can be relatively easy to run an A/A test and show that, very early on, the experiment indicates one variation is better than the other even though there is no difference. Depending on how knowledgeable about statistics your organization is, this can be a critical learning. To that end I'd say run as many A/A tests as you need to illustrate this point. The other thing I use A/A tests for is just to monitor the conversion rate. It turns out that there really aren't great tools for monitoring conversion rates, so I often leave an Optimizely experiment on, as an A/A test, just to monitor the conversion rate. I have run into many people who want to use A/A tests to test the efficacy of their testing platform. I would recommend against that, as I've never seen a single example of it highlighting a weakness in the product (Optimizely or others), but if you must, do it once and move on.
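
      The early-peeking effect described here is easy to demonstrate in simulation: repeatedly checking a naive fixed-significance z-test during an A/A run inflates the false positive rate well past the nominal 5%. Traffic numbers and the 10% conversion rate are hypothetical:

```python
import math
import random

def aa_test_with_peeking(n_visitors=5000, peek_every=100, alpha=0.05, seed=0):
    """Run one A/A test (both arms truly convert at 10%) and peek at a
    naive two-sided z-test every `peek_every` visitors. Returns True if
    any peek would have (wrongly) declared significance."""
    rng = random.Random(seed)
    conv = [0, 0]
    n = [0, 0]
    for i in range(1, n_visitors + 1):
        arm = i % 2  # alternate visitors between identical arms
        n[arm] += 1
        if rng.random() < 0.10:
            conv[arm] += 1
        if i % peek_every == 0 and min(n) > 0:
            pooled = sum(conv) / sum(n)
            se = math.sqrt(pooled * (1 - pooled) * (1 / n[0] + 1 / n[1]))
            if se > 0:
                z = abs(conv[0] / n[0] - conv[1] / n[1]) / se
                if math.erfc(z / math.sqrt(2)) < alpha:
                    return True  # a peek "found" a nonexistent winner
    return False

# Across many identical A/A runs, far more than 5% produce a false "winner".
false_alarms = sum(aa_test_with_peeking(seed=s) for s in range(200))
print(false_alarms / 200)
```

      This is the failure mode that sequential approaches like the Stats Engine mentioned earlier are designed to control.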

  • AE

    Aifuwa Ehigiator

    over 2 years ago #

    Hi Kyle, Thanks for doing this for all of us. I hope my question isn't off base.

    I am beginning a company called Our Street (OurSt.co) that uses community investment to create affordable housing. One of the ways we have been seeing if people are interested in our idea is through our survey, which we know many people hate taking. I started face to face on the street which was challenging and continued to request people take it online. In your experiences with Obama and beyond, would you be able to suggest what has worked best when asking people to take surveys?

  • LK

    Leho Kraav

    over 2 years ago #

    Hi Kyle. How can I subscribe to your blog? No Atom or e-mail subscription available?

  • SK

    Sakari Kyrö

    over 2 years ago #

    A lot of your clients are using Optimizely for Adwords -landing pages.

    Do you guys have an estimate of what percentage of Adwords-traffic is just bots?

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Sakari,

      I don't have an estimate. If you're concerned about this I think you should deploy some methods to measure this and then discuss the results with your Adwords representative.

  • DC

    Dan Cave

    over 2 years ago #

    Kyle: What percentage of the people using Optimizely are users whose only job is CRO? Is that changing? I imagine it will be different between companies of differing sizes. Cheers.

  • LK

    Leho Kraav

    over 2 years ago #

    Thanks for the blog feed Kyle. Much appreciated.

    Q: What do you do with this test result: revenue positive +8.2%, conversion rate negative -1.6%? Same question for the reverse situation. There are more scenarios hidden here, such as either metric being statistically significant or not. If you could address the whole matrix, that'd be fantastic. Let's assume sample size is adequate, the test has run for 28 days (multiple business cycles), etc.

    • KR

      Kyle Rush

      over 2 years ago #

      Hi Leho -

      This kind of thing can definitely get tricky. I'm missing a lot of context that is fundamental to answering your question correctly, but I'll give it my best shot.

      I've found that it's important to take a look at the big picture. A company might be interested in revenue in the short term, but perhaps first time buyers often turn into repeat buyers. To me, that's the missing data point in your scenario. If the conversion rate is negative enough that the lost long term repeat purchases would outweigh the positive short term gains in revenue, then you probably want to keep the version that did worse on revenue. However, that may not be true for, say, a political campaign that has an end date, because there is no long term. In that instance you'd probably go with the variation that did better on revenue since most campaigns don't have a long shelf life. Make sure you're looking at both long and short term goals and you'll be able to choose a winner in any scenario.

  • NL

    Nicolas LE ROUX

    over 2 years ago #

    Hey Kyle, thanks for doing the AMA. Can you tell us how you would launch a website for students with 650 pre-subscribers? Thank you ;)

  • BO

    Ben Owens

    over 2 years ago #

    Kyle, thanks for doing the AMA. As someone who has only done CRO informally (compare results, manual A/B tests, general testing of marketing materials) how do you measure optimization, or approach it, in non-digital realms? Say out-of-home or direct mailers, how would you approach testing? Would you use the same messages digitally first in an A/B test, using the best one?

  • TK

    Teja Kocjancic

    almost 2 years ago #

    Hi Kyle
    What would you recommend doing after running a test with 2 variations (same part of page) and both have very similar results on a long term run?
    Thank you
