Onboarding growth experiments for your worst users

Understanding who your product is best suited for is critical. If you know what your best leads look like and how they behave, you can design an ideal buyer journey for them and make sure that anyone who resembles your best leads stays on that journey.

That said, like all optimization channels, you eventually hit diminishing returns. The major holes get filled, and your customer journey works 95% as well as it ever will. What’s worse, by focusing only on creating a great experience for leads who look like those who have historically converted well, you may create a self-fulfilling prophecy. If you’re only a good fit for your current ICP, you may never become a good fit for the ICPs you want to target in the future.

We see this up close at MadKudu. Predictive lead scoring models leverage your historical customer data to predict the likelihood that future prospects will convert, so if you don’t feed them new data – such as successful conversions from leads who historically haven’t converted well – your ideal customer profile will never change.

Product growth from a predictive modeling perspective can be framed as an exercise in feeding new “training data” that the model can later use to adapt and expand the definition of your ICP, your best leads.

If your product is only accessible in the United States because it requires a US bank account, address, or phone number for authentication, leads from outside the U.S. will have a low likelihood of converting. If you build new features or expand into new markets but continue to score leads with old data, you may not give new leads a chance to have a great experience.

 

Growth (for low-touch) & business development (for high-touch) are great teams for breaking this cycle, and many MadKudu customers leverage these teams to create new training data by actively pursuing leads who haven’t historically converted well but that the business would like to target. This can mean expanding into new verticals, entering new markets, or launching new products altogether. All three are areas where historical customer data isn’t a great basis for predicting success, because the aim is to create success that can later be operationalized, orchestrated and scaled.

Parallel onboarding for good & bad leads.

Drift recently gave a talk during the Product-Led Summit about a series of onboarding experiments that segmented their best & worst leads while pushing to increase conversion in both segments. Highlighting some of their experiments, it is clear that Drift’s aim was almost to prove the model wrong – that is, they wanted to maximize the chances that a low-probability lead would convert, which could later help retrain the definition of a good/bad lead.

Leads: good vs. bad

Good leads are those who are most likely to convert. In sales-driven growth companies, that means that, if engaged by velocity/enterprise sales, a lead will convert. For product-led growth companies with no-touch models, a good lead is one that is likely to achieve a certain MRR threshold if activated.

We define good leads this way – instead of, say, the type of leads we want to convert – because we want to create as much velocity & efficiency for our growth-leading teams as possible. If we send a lead to sales that won’t convert no matter how much we want it to, we incur an operating cost associated with that rep’s time. That counts double for leads that will convert at the same rate whether engaged by sales or not, as we are unnecessarily paying out commission.

Product teams focused on lead activation & conversion waste time running experiments with little to no impact if they don’t properly segment between good & bad leads.

Drift’s 5 Growth Experiments segmented by customer fit

Drift used MadKudu to segment their entire onboarding & activation experience so that good leads and bad leads each received the best next action at each step of their buyer journey.

For Drift, onboarding starts before users create an account. Drift Forces the Funnel by identifying website visitors via IP lookup as they arrive on their website, scoring the account associated with the IP, and then personalizing the website experience based on the MadKudu score and segment.

For their best leads, Drift’s messaging is optimized to provide social proof with customer logos & key figures with a core call-to-action to talk with someone. Drift is willing to invest SDR resources in having conversations with high-quality leads because they convert more consistently and at MRR amounts that justify the investment.

For their worst leads – that is, leads that won’t convert if engaged by sales – Drift’s messaging is tailored towards creating an account and “self-activating,” as we’ll see in future experiments.

For major sources of traffic, like their pricing page or the landing page for website visitors who click on the link inside their widget for free users, Drift iterates constantly on how to improve conversion. Some experiments work, like stripping out the noise for low-quality leads to encourage self-activation. Others, such as dropping a chatbot inside the onboarding experience for high-quality leads, don’t get as much traction despite good intentions & hope.

Intentional Onboarding

Sara Pion spent a good amount of time praising the impact of onboarding questions that allow users to self-identify intent. User-inputted fields can be very tricky – mainly because users lie – but Drift has found a strong correlation between successful onboarding and the objective the user declared when signing up.

As users onboard, good & bad users are unknowingly being nudged in two very different directions. Emails, calls to action, and language for good leads are geared towards speaking with a representative. That’s because Drift knows that good users who talk with someone are more likely to convert. Bad users, meanwhile, are encouraged to self-activate. Onboarding emails encourage them to deploy Drift to their website, to use certain features, and generally to do the work themselves. Again, that’s because Drift knows that talking to these users won’t statistically help them be successful – either because they don’t actually need Drift or because they want to use Drift their own way without talking to someone.

Personalize the definition of success for every user

Like most successful SaaS companies, Drift has invested an awful lot of energy making sure that their best leads have the best possible buyer journey; however, unlike most companies, they don’t stop there. They look at how they can optimize the experience for their worst leads as well, recognizing that even a 1% increase in conversion can be the difference between hitting their revenue goals or not given the massive volume of leads they get each month.

Identify, Qualify & Segment website visitors with a personalized website experience.

Your website is the story you choose to tell: to prospects, to candidates, to investors, to journalists & analysts. Everyone who wants to know how you talk about yourself goes to your website. Your website starts off simple: you speak authentically to your Ideal Customer Profile (ICP). You make it easy for them to understand your differentiation, pricing, and how to get in touch with you.

Then you grow and begin to sell to different businesses with different budgets and different needs. Telling a single story therefore becomes increasingly complicated. Should your core message focus on enterprise or self-serve? Should your CTAs direct to ‘create a free account’ or ‘schedule a demo’? How important is it to make pricing easily accessible vs. documentation for how to get started?

Identifying, qualifying & segmenting your prospects with personalized messaging can be a full-time job for SaaS companies. This MadKudu play, however, can take a lot of the pain out of rapid experimentation.

Force the Funnel: Identify, Qualify, Personalize.

Our goal with Force the Funnel is to provide the optimal website experience for every qualified account. This play is great for SaaS businesses selling both to self-serve & enterprise. It also helps if you’re targeting distinctly different customer segments (e.g: financial services & luxury goods). In order to achieve this play, we’ll need three things:

  • IP lookup: we’ll be using Clearbit Reveal for this.
  • Lead Scoring: we’ll be using MadKudu for this.
  • Website personalization: we’ll be using Intellimize for this.

We’ll also be connecting all of these through Segment as usual. Let’s dive in and see what happens:

Focus on qualified traffic

First and foremost, we are going to focus our efforts on personalizing our site for qualified traffic. The two reasons behind that are:

  1. We don’t want to measure success based on how personalization affects unqualified traffic.
  2. We don’t want to spend resources trying to help unqualified traffic convert better.

Qualifying traffic has become pretty easy with the advent of IP Lookup APIs – the most popular being Clearbit Reveal. Feed Clearbit an IP address and it returns the visitor’s company. This is enough to score an account. We’ll be scoring with MadKudu, but you can also do it with your homegrown Lamb or Duck lead scoring model. We’ll send MadKudu the domain name provided by Clearbit, which will return three important data points:

  • Customer Fit segment: very good, good, medium, low
  • Predicted Spend: custom based on your specific pricing plans, predicting how much each account is likely to spend.
  • Topical Segmentation: custom based on your target segments (e.g for Algolia: ecommerce, media, SaaS).
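To make these three data points concrete, here is a hypothetical routing function – the segment names come from the list above, but the spend threshold and variant names are illustrative, not MadKudu’s or Intellimize’s actual API:

```python
# Illustrative routing only -- the spend threshold and variant names are
# hypothetical, chosen to show how the three MadKudu data points combine.
def choose_experience(customer_fit, predicted_spend, topical_segment):
    """Map MadKudu's data points to a website personalization variant."""
    if customer_fit in ("very good", "good") and predicted_spend >= 10_000:
        return "enterprise"  # push "talk to sales" CTAs, hide self-serve noise
    if customer_fit in ("very good", "good"):
        # Tailor the signup flow to the visitor's vertical
        return f"self-serve-{topical_segment}"
    return "default"  # generic site, no SDR investment
```

A personalization platform would then serve whichever variant this returns, rather than A/B testing the same page against everyone.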

Now that we’ve identified, qualified & segmented our audience, we’re ready to personalize our site. There are a lot of personalization/experimentation/testing platforms. We’re using Intellimize here because we want Intellimize to do all the heavy-lifting of designing and running experiments. Intellimize uses machine learning to generate, analyze & optimize experiments. They also pull up some pretty interesting insights around how different personas behave.

The Impact: +30% conversion rate

Segment found that by removing buttons linking to the pricing page for qualified enterprise accounts, they increased conversion to demo scheduling by 30%. We’re optimizing the upside by focusing on improving the buyer experience for qualified traffic. This dovetails nicely into other Fastlane plays via chatbots, lead capture forms & gated content.

If you’re running A/B tests on your entire traffic, you may be skewing your results & analysis in favor of what unqualified traffic does (see: Segmenting Funnel Analysis by Customer Fit). The key impact here is that we’re segmenting qualified traffic with AI-driven experimentation meant to optimize for the results we want: more demo requests, more signups, more leads captured.

How we use Zapier to score Mailchimp subscribers

There’s no better way to get your story out there than to create engaging content with which your target audience identifies. At MadKudu, we love sharing data-driven insights and learnings from our experience working with Marketing Operations professionals, which has allowed us to take the value we strive to bring our customers every day and make it available to the marketing ops community as a whole.

As interest in our content has grown, it was only natural that we leverage Zapier in order to quickly understand who was signing up and whether we should take the relationship to the next level.

Zapier is a great way for SaaS companies like us to quickly build automated workflows around the tools we already use, making sure our customers have a frictionless, relevant journey. We don’t want to push every Mailchimp subscriber to Salesforce: not only would that create a heap of contacts that aren’t sales-ready, but we may end up inadvertently reaching out to contacts who don’t need MadKudu yet, giving them a negative first impression of us.

Today we can see which newsletter signups sales should be paying attention to. Here’s how:

Step 1: Scoring new newsletter subscribers

The first step is to make sure you grab all new subscribers. Zapier makes that super easy with their Mailchimp integration.

Next we want to send those new subscribers to MadKudu to be analyzed. While MadKudu customers have a dedicated MadKudu integration, Zapier users who aren’t MadKudu customers can also leverage Zapier’s native Lead Score app, which is (you guessed it) powered by MadKudu.

Step 2: Filter by Lead Score

Our MadKudu score is already configured, so after we feed the new subscriber to MadKudu, we run a quick filter to make sure we only act if the Lead Score is “good” or “very good.”

If you’re worried that the bar will filter out potentially interesting leads, consider this a confidence test of your lead score.

Zapier Filtering by Lead Score Quality
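In “Code by Zapier” terms, the filter boils down to something like this – the input shape is hypothetical, but the score values mirror MadKudu’s customer-fit segments:

```python
# Sketch of the Zapier filter step. The subscriber dict shape is hypothetical;
# "good" / "very good" are MadKudu customer-fit segments.
QUALIFIED = {"good", "very good"}

def filter_subscriber(subscriber):
    """Return the subscriber only if their lead score clears the bar,
    otherwise None (Zapier halts the Zap when a filter step fails)."""
    if subscriber.get("madkudu_score") in QUALIFIED:
        return subscriber
    return None
```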

Step 3: Take Action, Communicate!

For Mailchimp signups that pass our Lead Score filter, we next leverage the Salesforce integration in Zapier to either find the existing contact inside Salesforce (they may already be there) or create a new lead. Salesforce has made this very easy to do with the “Find or Create Lead” action in Zapier.

Once we’ve synced our Mailchimp lead to Salesforce, we use the Slack integration on Zapier to communicate everything we’ve created so far to a dedicated #notif-madkudu channel, which broadcasts all the quality leads coming from all of our lead generation channels.

Directly inside Slack, our team can get actionable insights:

  • The MadKudu score, represented as 3 stars (normal stars for Good, twinkling for Very Good)
  • The signals that MadKudu identified in this lead, both positive and negative
  • A link to the lead in Salesforce, for anyone who wants to take action/review
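A sketch of how such a notification could be assembled – the field names and message layout are illustrative, not our actual Zap:

```python
# Illustrative Slack message builder. The lead dict's field names are
# hypothetical; the <url|label> syntax is Slack's standard link format.
def slack_message(lead):
    """Format a #notif-madkudu notification for a qualified lead."""
    # Twinkling stars for Very Good, normal stars for Good
    stars = "✨✨✨" if lead["score"] == "very good" else "⭐⭐⭐"
    signals = "\n".join(f"- {s}" for s in lead["signals"])
    return (
        f"{stars} {lead['email']} ({lead['score']})\n"
        f"Signals:\n{signals}\n"
        f"<{lead['sfdc_url']}|View in Salesforce>"
    )
```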

Actionable Lead Scoring applied to your Newsletter

Our goal here isn’t to reach out to newsletter subscribers – we want to build a long-term relationship with them, and we’re happy to keep delivering them quality content until they’re ready to talk about actionable lead scoring. What we’re able to do is see, qualitatively & quantitatively, how many of our newsletter subscribers are a good fit for MadKudu today.

This helps marketing & sales stay aligned on the same goal. Marketing is measuring newsletter growth with the same metric it’s using to measure SQL generation.

3 steps to determine the key activation event

Most people by now have heard of the product “key activation event”: Facebook’s 7 friends in the first 10 days, Twitter’s 30 followers… these get lots of mentions in the Product and Growth communities. Such examples have helped cement the idea of statistically determining goals for the onboarding of new users. A few weeks ago, somebody from the Reforge network asked how to actually define this goal, and I felt compelled to dive deeper into the matter.

I love this topic, and while there have already been some solid answers on Quora from the likes of Uber’s Andrew Chen and AppCues’ Ty Magnin, and while I already wrote about this overarching concept a couple of weeks ago (here), I wanted to address a few additional tactical details.

Below are the three steps to identify your product’s “key activation event”.

Step 1: Map your events against the Activation/Engagement/Delight framework

This is done by plotting the impact on conversion of performing and not performing an event in the first 30 days. This is the core of the content we addressed in our previous post.

To simplify, I will call “conversion” the ultimate event you are trying to optimize for. Agreeing on this metric in the first place can be a challenge in itself…

Step 2: Find the “optimal” number of occurrences for each event

For each event, you’ll want to understand the required occurrence threshold (aka how many occurrences maximize your chances of success without hitting diminishing returns). This is NOT done with a typical logistic regression, even though many people believe so. I’ll share a concrete example to show why.

Let’s look at the typical impact on conversion of performing an event Y times (or not) within the first X days:

There are 2 learnings we can extract from this analysis:
– the more often the event is performed, the more likely users are to convert (Eureka, right?!)
– the higher the occurrence threshold, the closer the conversion rate of people who didn’t reach it gets to the average conversion rate (this is the important part)

We therefore need a better way to correlate occurrences and conversion. This is where the Phi coefficient comes into play.

Below is a quick set of Venn diagrams to illustrate what the Phi coefficient represents:

Using the Phi coefficient, we can find the number of occurrences that maximizes the difference in outcome thus maximizing the correlation strength:

Step 3: Find the event for which the “optimal” number of occurrences has the highest correlation strength

Now that we have our ideal number of occurrences within a time frame for each event, we can rank events by their highest correlation strength. This will give us, for each time frame considered, the “key activation event”.
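The threshold search (Step 2) and event ranking (Step 3) can be sketched in Python. For a 2×2 table (reached the occurrence threshold vs. not, converted vs. not), the Phi coefficient is φ = (n11·n00 − n10·n01) / √((n11+n10)(n01+n00)(n11+n01)(n10+n00)). The user/event data shape below is hypothetical; plug in your own counts:

```python
import math

def phi_coefficient(n11, n10, n01, n00):
    """Phi coefficient of a 2x2 contingency table:
    n11 = reached the threshold AND converted
    n10 = reached the threshold, didn't convert
    n01 = missed the threshold, converted
    n00 = missed the threshold, didn't convert"""
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

def best_threshold(users, event, max_k=20):
    """Step 2: find the occurrence count k that maximizes |phi|.
    `users` is a list of {"events": {name: count}, "converted": bool}."""
    best_k, best_phi = 0, 0.0
    for k in range(1, max_k + 1):
        n11 = n10 = n01 = n00 = 0
        for u in users:
            reached = u["events"].get(event, 0) >= k
            if reached and u["converted"]:
                n11 += 1
            elif reached:
                n10 += 1
            elif u["converted"]:
                n01 += 1
            else:
                n00 += 1
        phi = phi_coefficient(n11, n10, n01, n00)
        if abs(phi) > abs(best_phi):
            best_k, best_phi = k, phi
    return best_k, best_phi

def key_activation_event(users, event_names):
    """Step 3: rank events by correlation strength at their best threshold."""
    scored = {e: best_threshold(users, e) for e in event_names}
    return max(scored.items(), key=lambda kv: abs(kv[1][1]))
```

Running `best_threshold` over each candidate event and keeping the one with the highest |φ| yields the key activation event for whatever time window you sliced the data on.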

Closing Notes:

Because Data Science and Machine Learning are so sexy today, everyone wants to run regression modeling. Regression analyses are simple, interesting and fun. However, they lead to suboptimal results, as they maximize the likelihood of the outcome rather than correlation strength.

Unfortunately, this is not a native capability of most analytics solutions, but you can easily dump all of your data into Redshift and run an analysis that mimics this approach. Alternatively, you can create funnels in Amplitude and feed the data into a spreadsheet to run the required cross-funnel calculations. Finally, you can always reach out to us.

Don’t be dogmatic! The results of these analyses are guidelines, and it is more important to pick one metric to move; otherwise you might spiral down into analysis paralysis.

Analysis << Action
Remember, an analysis only exists to drive action. Ensure that the events you push through the analysis are actionable (don’t run this with “email opened”-type events). You should always spend at least 10x more time on setting up the execution of this “key activation event” than on the analysis itself. As a reminder, here are a couple of “campaigns” you can derive from your analysis:

  • Create a behavioral onboarding drip (case study)
  • Close more delighted users by promoting your premium features
  • Close more delighted users by sending them winback campaigns after their trial (50% of SaaS conversions happen after the end of the trial)
  • Adapt your sales messaging to properly align with the user’s stage in the lifecycle and truly be helpful

Images:
– MadKudu Grader (2015)
– MadKudu “Happy Path” Analysis Demo Sample

The “Lean Startup” is killing growth experiments

Over the past few years, I’ve seen the “Lean Startup” grow to biblical proportions in Silicon Valley. It has introduced a lot of clever concepts that challenged the old way of doing business. Even enterprises such as GE, Intuit and Samsung are adopting the “minimum viable product” and “pivoting” methodologies to operate like high-growth startups. However, just like any dogma, the “Lean Startup,” when followed with blind faith, leads to a form of obscurantism that can wreak havoc.

Understanding “activation energy”

A few weeks ago, I was discussing implementing a growth experiment with Guillaume Cabane, Segment’s VP of Growth. He wanted to be able to pro-actively start a chat with Segment’s website visitors. We were discussing what the MVP for the scope of the experiment should be.

I like to think of growth experiments as chemical reactions, in particular when it comes to the activation energy. The activation energy is commonly used to describe the minimum energy required to start a chemical reaction.

The height of the “potential barrier” is the minimum amount of energy needed to get the reaction to its next stable state.

In Growth, the MVP should always be defined to ensure the reactants can hit their next state. This requires some planning which at this stage sounds like the exact opposite of the Lean Startup’s preaching: “ship it, fix it”.

The ol’ and the new way of doing things

Before Eric Ries’s best seller, the decades-old formula was to write a business plan, pitch it to investors/stakeholders, allocate resources, build a product, and try as hard as humanly possible to have it work. His new methodology prioritized experimentation over elaborate planning, customer exposure/feedback over intuition, and iterations over traditional “big design up front” development. The benefits of the framework are obvious:
– products are not built in a vacuum but rather exposed to customer feedback early in the development cycle
– time to shipping is low and the business model canvas provides a quick way to summarize hypotheses to be tested

However, the fallacy that runs rampant nowadays is that, under the pretense of swiftly shipping MVPs, we reduce the scope of experiments to the point where they can no longer reach the “potential barrier”. Experiments fail, and growth teams slowly get stripped of resources (this will be the subject of another post).

Segment’s pro-active chat experiment

Guillaume is blessed with working alongside partners who are willing to provide the resources to ensure his growth experiments can surpass their potential barrier.

The setup for the pro-active chat is a perfect example of the amount of planning and thinking required before jumping into implementation. At the highest level, the idea was to:
1- enrich the visitor’s IP with firmographic data through Clearbit
2- score the visitor with MadKudu
3- based on the score decide if a pro-active sales chat should be prompted

Seems pretty straightforward, right? As the adage goes “the devil is in the details” and below are a few aspects of the setup that were required to ensure the experiment could be a success:

  • Identify existing customers: the user experience would be terrible if Sales were pro-actively engaging with customers on the website as if they were leads
  • Identify active opportunities: similarly, companies that are actively in touch with Sales should not be candidates for the chat
  • Personalize the chat and make the message relevant enough that responding is truly appealing. This requires some dynamic elements to be passed to the chat
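The decision step could be sketched as follows – a minimal illustration only, with hypothetical field names standing in for the Clearbit/MadKudu payloads, not Segment’s actual implementation:

```python
def should_prompt_chat(visitor):
    """Decide whether to open a pro-active sales chat for a website visitor.
    Field names are illustrative, not a real schema."""
    # Existing customers and open opportunities are excluded first --
    # chat-prospecting an existing relationship would be a terrible experience.
    if visitor["is_customer"] or visitor["has_open_opportunity"]:
        return False
    # Only prompt when the account scores as sales-qualified.
    return visitor["madkudu_segment"] in ("good", "very good")
```

Note that the exclusion checks run before the score check: as the bullets above stress, skipping them is what turns a clever experiment into a support nightmare.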

Because of my scientific background I like being convinced rather than persuaded of the value of each piece of the stack. In that spirit, Guillaume and I decided to run a test for a day of shutting down the MadKudu scoring. During that time, any visitor that Clearbit could find information for would be contacted through Drift’s chat.

The result was an utter disaster. The Sales team ran away from the chat as quickly as possible, and for good reason: about 90% of Segment’s traffic is not qualified for Sales, which means the team was swamped with unqualified chat messages…

This was particularly satisfying since it proved both assumptions that:
1- our scoring was a core component of the activation energy and that an MVP couldn’t fly without it
2- shipping too early – without all the components – would have killed the experiment

This experiment is now one of the top sources of qualified sales opportunities for Segment.

So what’s the alternative?

Moderation is the answer! Apply the frameworks from the “Lean Startup” model sparingly. Focus on predicting the activation energy required for your customers to get value from the experiment. Define your MVP based on that activation energy.

Going further, you can work on identifying “catalysts” that reduce the potential barrier for your experiment.

If you have any growth experiment you are thinking of running, please let us know. We’d love to help and share ideas!

Recommended resources:
https://hbr.org/2013/05/why-the-lean-start-up-changes-everything
https://hbr.org/2016/03/the-limits-of-the-lean-startup-method
https://venturebeat.com/2013/10/16/lean-startups-boo/
http://devguild.heavybit.com/demand-generation/?#personalization-at-scale

Images:
http://fakegrimlock.com/2014/04/secret-laws-of-startups-part-1-build-right-thing/
https://www.britannica.com/science/activation-energy
https://www.infoq.com/articles/lean-startup-killed
https://en.wikipedia.org/wiki/Activation_energy