Onboarding growth experiments for your worst users

Understanding who your product is best suited for is critical. If you know what your best leads look like and how they behave, you can design an ideal buyer journey for them, making sure that anyone who looks like your best leads stays on the buyer journey that your best leads take.

That said, like all optimization channels, you eventually hit diminishing returns. The major holes get filled, and your customer journey works 95% as well as it ever will. What’s worse, by focusing only on creating a great experience for leads who look like those who have historically converted well, you may create a self-fulfilling prophecy. If you’re only a good fit for your current ICP, you may never become a good fit for the ICPs you want to target in the future.

We see this up close at MadKudu. Predictive lead scoring models leverage your historical customer data to predict the likelihood that future prospects will convert, so if you don’t feed them new data – such as successful conversions from leads who historically haven’t converted well – your ideal customer profile will never change.

Product growth from a predictive modeling perspective can be framed as an exercise in feeding new “training data” that the model can later use to adapt and expand the definition of your ICP, your best leads.

If your product is only accessible in the United States because it requires a US bank account, address or phone number for authentication, leads from outside the U.S. will have a low likelihood of converting. If you build new features or expand into new markets but continue to score leads with old data, you may not give new leads a chance to have a great experience.


Growth (for low-touch) & business development (for high-touch) are great teams for counteracting this trap, and many MadKudu customers leverage these teams to create new training data by actively pursuing leads who haven’t historically converted well but that the business would like to target. This can mean expanding into new verticals, entering new markets, or launching new products altogether. All three are areas where historical customer data isn’t a great basis for predicting success, because the aim is to create success that can later be operationalized, orchestrated and scaled.

Parallel onboarding for good & bad leads.

Drift recently gave a talk at the Product-Led Summit about a series of onboarding experiments that segmented their best & worst leads but pushed to increase conversion in both segments. Looking at their experiments, it is clear that Drift’s aim was almost to prove the model wrong – that is, to optimize the chances that a low-probability lead would convert, which could later help retrain the definition of a good/bad lead.

Leads: good vs. bad

Good leads are those who are most likely to convert. In sales-driven growth companies, that means that, if engaged by velocity/enterprise sales, a lead will convert. For product-led growth companies with no-touch models, a good lead is one that is likely to achieve a certain MRR threshold, if activated.

We define good leads this way – instead of, say, the type of leads we want to convert – because we want to create as much velocity & efficiency for our growth-leading teams as possible. If we send sales a lead that won’t convert no matter how much we want it to, we incur an operating cost associated with that rep’s time. That counts double for leads that will convert at the same rate whether engaged by sales or not, as we are unnecessarily paying out commission.

Product teams focused on lead activation & conversion waste time running experiments with little to no impact if they don’t properly segment between good & bad leads.

Drift’s 5 Growth Experiments segmented by customer fit

Drift used MadKudu to segment their entire onboarding & activation experience so that good leads and bad leads each received the best next action at each step of their buyer journey.

For Drift, onboarding starts before users create an account. Drift Forces the Funnel by identifying website visitors via IP lookup as they arrive on their website, scoring the account associated with the IP, and then personalizing the website experience based on the MadKudu score and segment.

For their best leads, Drift’s messaging is optimized to provide social proof with customer logos & key figures with a core call-to-action to talk with someone. Drift is willing to invest SDR resources in having conversations with high-quality leads because they convert more consistently and at MRR amounts that justify the investment.

For their worst leads – that is, leads that won’t convert if engaged by sales – Drift’s messaging is tailored towards creating an account and “self-activating,” as we’ll see in future experiments.

For major sources of traffic, like their pricing page or the landing page for visitors who click the link inside the Drift widget on free users’ websites, Drift iterates constantly on how to improve conversion. Some experiments work, like stripping out the noise for low-quality leads to encourage self-activation. Others, such as dropping a chatbot inside the onboarding experience for high-quality leads, don’t get as much traction despite good intentions & hope.

Intentional Onboarding

Sara Pion spent a good amount of time praising the impact of onboarding questions that allow users to self-identify intent. User-inputted fields can be very tricky – mainly because users lie – but Drift has found a strong correlation between successful onboarding and the objective the user declared when signing up.

As users onboard, good & bad users are unknowingly being nudged in two very different directions. Emails, calls to action, and language for good leads are geared towards speaking with a representative. That’s because Drift knows that good users who talk with someone are more likely to convert. Bad users, meanwhile, are encouraged to self-activate. Onboarding emails encourage them to deploy Drift to their website, to use certain features, and generally to do the work themselves. Again, that’s because Drift knows that talking to these users won’t statistically help them be successful – either because they don’t actually need Drift or because they want to use Drift their own way without talking to someone.
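The two-track nudging described above can be sketched as a simple routing function. This is a minimal sketch, assuming MadKudu-style fit segments (“very good”/“good”/“medium”/“low”); the CTA names and email template names are hypothetical, not Drift’s actual setup:

```python
def next_onboarding_action(segment: str, declared_goal: str) -> dict:
    """Pick the next nudge based on fit segment and the goal the user declared."""
    if segment in ("very good", "good"):
        # Good leads are steered toward a conversation with a human.
        return {"cta": "book_a_call", "template": f"talk_to_us_{declared_goal}"}
    # Everyone else is nudged to self-activate (deploy the widget, use features).
    return {"cta": "install_widget", "template": f"self_serve_{declared_goal}"}

print(next_onboarding_action("very good", "generate_leads"))
# {'cta': 'book_a_call', 'template': 'talk_to_us_generate_leads'}
```

The point is that the same onboarding step branches on the score, so neither segment ever sees the other’s journey.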

Personalize the definition of success for every user

Like most successful SaaS companies, Drift has invested an awful lot of energy making sure that their best leads have the best possible buyer journey; however, unlike most companies, they don’t stop there. They look at how they can optimize the experience for their worst leads as well, recognizing that even a 1% increase in conversion can be the difference between hitting their revenue goals or not given the massive volume of leads they get each month.

Timing is everything: Surfacing sales-ready accounts & the right contacts to engage.

Identifying sales-ready accounts to reach out to is the heart of freemium sales acceleration. Freemium businesses rely largely on product adoption to trigger the a-ha moment that will ultimately lead to successful sales engagement, so quantifying that a-ha moment in the form of activity scoring is crucial.

For SaaS sales reps with hundreds or thousands of accounts assigned to them, reaching out manually every three months yields little to no results. What are the odds that today is the day an arbitrary account is ready to buy? Very low. Add to that the fact that an account may have dozens of associated contacts to choose from and engage. Randomly picking contacts based on job titles and reaching out is a spray-and-pray strategy that yields equally unpredictable results.

Sales wants to know which accounts to reach out to and who within those accounts to contact. And we’ve got just the play for that.


Our goal here is for sales reps to start every day with a list of accounts assigned to them that are ready to have a conversation. We’d also like to give that sales rep a filtered list of the contacts most likely to respond. That way sales reps spend all of their time crafting the most relevant message for their best leads.

This is a great play for freemium SaaS businesses with a large number of accounts where velocity & efficiency are key to success. This play is also good for products with a combination of low-touch and high-touch users – while you may have paying customers already, identifying when that paying account is sales-ready is key.

The bulk of this play rests on MadKudu’s ability to build an accurate account-level behavioral scoring model. Structured data is going to be key – we can’t build an account-level model if we don’t have account data. When we run this play with InVision, we use Segment for product data & HubSpot for marketing data. We’ll also need Salesforce for our sales data.

Once we have our data piping correctly, MadKudu identifies the features & activity that best predict sales readiness by looking at historical product & marketing engagement and the resulting sales outcomes. Once the predictive model is built, MadKudu pulls in the latest activity data on a regular basis and looks for triggers.

When a sales-ready account is identified, MadKudu tags it in Salesforce and drops it into a daily report for sales reps. Within each sales-ready account, MadKudu’s models also look at the profile & activity of each contact to identify the contacts most likely to engage. Those contacts are recommended to the sales rep.
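The daily report step can be sketched as follows. The threshold, score fields and example data are hypothetical, not MadKudu’s actual schema – the point is just filtering accounts past a readiness threshold and ranking contacts by engagement:

```python
SALES_READY_THRESHOLD = 80  # hypothetical behavioral score cutoff

def daily_sales_ready_report(accounts, top_n_contacts=3):
    """Return sales-ready accounts with their most engaged contacts first."""
    report = []
    for account in accounts:
        if account["behavioral_score"] >= SALES_READY_THRESHOLD:
            # Rank contacts by engagement, most engaged first.
            contacts = sorted(account["contacts"],
                              key=lambda c: c["engagement_score"], reverse=True)
            report.append({"account": account["name"],
                           "contacts": [c["email"] for c in contacts[:top_n_contacts]]})
    return report

accounts = [
    {"name": "Acme", "behavioral_score": 91,
     "contacts": [{"email": "ceo@acme.com", "engagement_score": 40},
                  {"email": "designer@acme.com", "engagement_score": 75}]},
    {"name": "Globex", "behavioral_score": 55, "contacts": []},
]
print(daily_sales_ready_report(accounts))
# [{'account': 'Acme', 'contacts': ['designer@acme.com', 'ceo@acme.com']}]
```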

The Impact: +25% in Pipeline

InVision’s 25% increase in pipeline came from identifying accounts programmatically based on historical customer data. This came without any change to the product, just from optimizing for sales-readiness. If you’re looking only at lead activity today, you may be leaving money on the table. With InVision, we identified accounts where no single lead achieved the activity threshold for an MQL, but the account as a whole hit the MQA threshold. It is the combined activity of several users that makes an account ready to talk to sales.

Once this MQA model is built, we can begin to layer some of our other plays on top of it: Forcing the Funnel for sales-ready accounts by customizing the app or website, reducing friction on forms, or triggering chat for enterprise prospects with a Fastlane play. We can score accounts throughout the buyer journey.

We previously wrote about why activity scoring is so tricky, and you can see slides here from a joint talk given by InVision & MadKudu at HubSpot Inbound 2018.

Activity Scoring & Enterprise Free Trials: how to do it right.

Redpoint Ventures Partner Tomasz Tunguz recently published the results of their Free Trial Survey, which included responses from 590 professionals working at freemium SaaS businesses of all sizes and shapes. The survey had many interesting takeaways and I recommend taking the time to dive into the slides that Tunguz shared at SaaStr Annual this year.

One of the more interesting takeaways (that Tunguz discussed on his blog) was that activity scoring seems to negatively impact free trial conversions for high ACV leads. Tunguz found that Enterprise SaaS businesses using activity scoring see a 4% conversion rate for free trials vs. 15% for those not using activity scoring.

MadKudu has written a lot about free trials in the past, including the article Tunguz referenced in launching his survey, so it was natural for us to weigh in on this conclusion.

I asked for a few clarifications from Tunguz in preparing this article:

  • The survey defined activity scoring as lead scoring that leverages in-app activity (not email marketing, webinar engagement or other activities).
  • The conversion rate (4%/15%) is calculated against all leads, not leads that scored well, so we’re measuring the effectiveness of the funnel not the precision of the score itself.
  • We’re only looking at leads who participate in the free trial, not leads that schedule a demo or otherwise enter the funnel.

With that in mind, I believe there are two main takeaways and some data to support those conclusions.

Enterprise leads don’t want a Free Trial.

Summary: our data shows enterprise leads prefer to schedule a demo and self-serve leads prefer a free trial (if available). Putting either in their counterpart’s funnel negatively impacts their likelihood to convert.

Free trial products design enterprise buyer journey calls-to-action – “contact sales,” “schedule a demo” & “request a quote” – to entice enterprise prospects. As Tunguz pointed out in the analysis of his survey, enterprise leads don’t typically try before they buy. They may sign up for a free trial to get a feel for the interface and do some preliminary feature validation, but the buying process is more sophisticated than that and lasts longer than your free trial.

One hypothesis for why activity scoring decreases conversion for enterprise leads in free trials is that enterprise leads shouldn’t be running free trials – or at least, they shouldn’t be having the same free trial experience. It is worth reading Tunguz’s piece about assisted vs. unassisted free trials to dive deeper into this subject.

Supporting this hypothesis is an experiment run by Segment & MadKudu looking at the impact of free trials & demo requests on the likelihood that self-serve & enterprise leads would convert. Segment Forced the Funnel by dynamically qualifying & segmenting website traffic, personalizing the website based on the predicted spend. This allowed us to predict whether traffic was self-serve or enterprise.

“Self-serve traffic” would not see the option to schedule a demo while “enterprise traffic” would not see the option to sign up for a trial. They also ran a control to measure the impact on the funnel.

They found a negative correlation between self-serve conversion & requesting a demo. They also found a negative correlation between enterprise conversion & signing up for a free trial. Each buyer segment has an ideal customer journey, and deviating from it (even into another buyer segment’s ideal journey) negatively impacts conversion.

The converse is equally true: pushing leads into their ideal customer journey increases their conversion rate by 30%.

Startups using activity scoring on high ACV leads should work to get high ACV leads out of their free trial by identifying them early on. Algolia, for example, prompts self-serve trial signups who have a high ACV to get in touch with someone for an assisted free trial.

Scoring activity at the account level

For SaaS businesses that go up-market or sell exclusively to enterprise, activity scoring at the lead level may not be sufficient. We worked with InVision to identify sales opportunities at the account level, importing all activity data from HubSpot & Segment and merging it at the account level. We analyzed the impact that various user personas had on the buyer journey and product experience.

Profiles that were more likely to be active in the product – marketers, analysts & designers – had a below-average impact on the likelihood to convert. Personas associated with a higher likelihood to convert – directors, founders, CEOs – had a smaller impact on activation.

Multiple personas are needed to create optimal conditions for an account to activate & convert on InVision. Their marketing team uses this knowledge to focus on post-signup engagements that will increase the Likelihood to Buy, the behavioral score built by MadKudu.

We see similar findings in the buyer journey as we examine how various personas’ involvement in an account impacts opportunity creation vs. opportunity closed-won. Opportunities are more likely to be created when marketers & designers are involved, but they are more likely to close when CEOs & Directors get involved.

For InVision, interestingly enough, founders have a smaller impact on opportunity closed-won than they do on product conversion.

While a single lead may never surpass the activity threshold that correlated with sales readiness at InVision, scoring activity at the account level surfaced accounts that exceeded the account activity threshold. Both thresholds were defined by MadKudu & InVision using the same data sources.
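The MQL-vs-MQA distinction boils down to where the activity threshold is applied. A toy sketch with made-up point thresholds (the real thresholds were defined by MadKudu & InVision from their data):

```python
MQL_THRESHOLD = 100   # activity points for a single lead (illustrative)
MQA_THRESHOLD = 250   # combined activity points for an account (illustrative)

def qualify(leads_activity):
    """Return (has_mql, is_mqa) for an account's per-lead activity scores."""
    has_mql = any(a >= MQL_THRESHOLD for a in leads_activity)  # lead-level check
    is_mqa = sum(leads_activity) >= MQA_THRESHOLD              # account-level check
    return has_mql, is_mqa

# Three moderately active users: no single lead is an MQL,
# but the account as a whole qualifies as an MQA.
print(qualify([90, 85, 95]))  # (False, True)
```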

The above slides are all from our HubSpot Inbound 2018 talk and are available here.

Measuring Scoring Model effectiveness

Looking at the results of experiments run with our customers and the data from Tunguz’s survey, it’s clear that activity scoring doesn’t work in a vacuum. Both our MQA model for InVision & our models for Segment require firmographic, technographic and intent data in combination with behavioral data in order to build a predictive model.

The impact that a model will have on sales efficiency & velocity depends on its ability to identify X% of leads that represent Y% of outcomes. The power of this function increases as X tends towards 0 and Y tends towards 100. “Outcomes” can represent opportunities created, opportunities won, pipeline created, or revenue, depending on the metric your sales strategy is optimizing for.

We similarly expect that the X% of leads will convert at a significantly higher rate than lower-quality leads. As seen in the above graphic, a very good lead may be 17x as likely to convert as a low-quality lead, which makes a strong case for sales teams prioritizing very good leads as defined by their predictive model – at least if they want to hit quota this quarter.
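The X%/Y% framing and the lift figure can be made concrete with a small calculation over score buckets. The bucket counts below are invented for illustration; they happen to produce a 17x lift like the example above:

```python
def capture_and_lift(buckets):
    """buckets: list of (lead_count, conversions), ordered best bucket first.
    Returns (share of leads in top bucket, share of outcomes it captures,
    conversion-rate lift of the top bucket vs. the bottom bucket)."""
    total_leads = sum(n for n, _ in buckets)
    total_conv = sum(c for _, c in buckets)
    top_leads, top_conv = buckets[0]
    bottom_leads, bottom_conv = buckets[-1]
    x = top_leads / total_leads   # X% of leads
    y = top_conv / total_conv     # Y% of outcomes
    lift = (top_conv / top_leads) / (bottom_conv / bottom_leads)
    return x, y, lift

# Invented buckets: (leads, conversions) for very good / medium / low leads.
x, y, lift = capture_and_lift([(100, 34), (400, 40), (500, 10)])
print(f"top {x:.0%} of leads -> {y:.0%} of outcomes, {lift:.0f}x lift")
# top 10% of leads -> 40% of outcomes, 17x lift
```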

If you’re selling exclusively to enterprise leads, an assisted free trial operates a lot like a schedule a demo flow – you will score leads firmographically early on, evaluate the opportunity, and then assist them in onboarding to your product to trial it, scoring activity throughout the trial to evaluate likelihood to convert.

Most SaaS businesses don’t exclusively offer free trials to high ACV leads, which is why activity scoring becomes crucial. A lead that is a great fit for self-service is typically a bad fit for enterprise. Self-service leads convert quickly and consistently at small-to-medium ACVs, whereas enterprise leads have a small chance of converting but convert at a much higher ACV. Velocity sales requires a steady stream of quick conversions – a long sales cycle for a low ACV is a loss – while enterprise sales can take months or years to close a single account while still being a success.

For customers with both velocity & enterprise leads, MadKudu scores every lead against both models, which enables us to identify whether a lead is a good fit for self-serve, enterprise, both or neither (it’s almost never both).

Re-Get That Bread: Retarget qualified website traffic that just didn’t convert.

90% of your website traffic doesn’t convert, and there’s nothing worse than a missed opportunity. For B2B companies, retargeting is a no-brainer: it’s an easy way to make sure you’re always targeting an audience that has shown some intent to buy. The problem with broadly retargeting anonymous website traffic, however, is that you don’t know who you are targeting or how qualified they are.

With such a high volume, many SaaS companies bid low on retargeting across their entire website traffic, pushing their brand in front of visitors wherever they go. This spray-and-pray tactic means SaaS companies are only getting in front of traffic that other advertisers aren’t willing to pay more for. Do you think your competitors might have a more focused strategy, outbidding you on your best leads and leaving the rest to you?

Click-through rate is low because the quality filter is low. Conversion rate is low because most of your website traffic shouldn’t convert (candidates, low-quality leads, investors, analysts, perusers).

That is, of course, unless you only target leads that should convert in the first place. We already know MadKudu can handle qualifying anonymous traffic, so why not retarget it as well?

Re-Get That Bread: Identify, Qualify, Retarget

Our goal here is to focus our retargeting budget on the subsection of our website traffic that is worth the most to us. If we do that, we will be able to reallocate the budget we’re not spending on low-quality traffic to bidding more for our high-quality traffic.

We’ll need a few tools to Re-Get That Bread:

  • IP lookup: we’ll be using Clearbit Reveal for this.
  • Qualification: we’ll be using MadKudu for this.
  • Retargeting: we’ll be using Adroll for this.

As usual, we’ll be connecting this all through Segment.

Qualifying traffic has become pretty easy with the advent of IP Lookup APIs – the most popular being Clearbit Reveal. Feed Clearbit an IP address and it returns (among other things) the domain of the company or of the individual visiting your website. This is enough to score an account. We’ll be scoring with MadKudu, but you can also do it with your homegrown Lamb or Duck lead scoring model. We’ll send MadKudu the domain name provided by Clearbit, which will return a few important data points for this play:

  • Customer Fit segment: very good, good, medium, low
  • Predicted Spend: custom based on your specific pricing plans, predicting which plan a lead is most likely to purchase.
  • Topical Segmentation: custom based on your target segments (e.g for Algolia: ecommerce, media, SaaS).
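The identify-and-qualify flow above can be sketched end to end. The two lookup functions here are stubs standing in for the Clearbit Reveal and MadKudu API calls; the response shapes and example values are assumptions, not the real APIs:

```python
def reveal_domain(ip):
    """Stub for Clearbit Reveal: IP -> company domain (or None if anonymous)."""
    return {"8.8.8.8": "google.com"}.get(ip)

def score_domain(domain):
    """Stub for the MadKudu API: domain -> fit, predicted spend, segment."""
    return {"customer_fit": "very good",
            "predicted_spend": 50_000,
            "topical_segment": "saas"}

def qualify_visitor(ip):
    """IP lookup, then scoring; None means traffic we can't identify."""
    domain = reveal_domain(ip)
    if domain is None:
        return None
    return {"domain": domain, **score_domain(domain)}

print(qualify_visitor("8.8.8.8"))
```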

With this data we’re able to feed AdRoll a custom audience of qualified traffic to target. This can be a bit tricky since AdRoll requires a static audience, but a quick script to update a static audience on a daily basis will get us around that hiccough.

Based on predicted spend, we can even build separate audiences for our various plans, each with different budgets. If we add in Topical Segmentation, we can run targeted messaging to our various ICPs based on their needs at various price points. If we know the predicted value of the qualified traffic, we can calculate our maximum budget as a function of our acceptable CAC.
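The daily refresh and budget logic might look like the sketch below. The qualification rules, CAC ratio and data shapes are assumptions, and the actual static-audience upload to AdRoll is left out:

```python
ACCEPTABLE_CAC_RATIO = 0.2   # spend at most 20% of expected value (assumption)

def build_audiences(visitors):
    """Group qualified visitor domains into one audience per topical segment."""
    audiences = {}
    for v in visitors:
        if v["customer_fit"] in ("very good", "good"):
            audiences.setdefault(v["segment"], []).append(v["domain"])
    return audiences

def max_bid(predicted_spend, likelihood_to_convert):
    """Budget cap derived from expected value and an acceptable CAC."""
    return predicted_spend * likelihood_to_convert * ACCEPTABLE_CAC_RATIO

visitors = [
    {"domain": "acme.com", "customer_fit": "very good", "segment": "ecommerce"},
    {"domain": "lowfit.io", "customer_fit": "low", "segment": "media"},
]
print(build_audiences(visitors))          # {'ecommerce': ['acme.com']}
print(f"{max_bid(10_000, 0.1):.2f}")      # 200.00
```

Run daily (e.g. from cron), this keeps the “static” audiences fresh while tying spend to predicted value.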

The Impact: +300% click-through rate

When Chris Rodriguez at Gliffy first began building this play, he was looking to get click through rate for his retargeting ads under control. When he saw it jump from the .7% industry average to 2-3% for qualified traffic, it became pretty clear that qualified traffic was worth the focus.

Bidding higher on a qualified audience is a no-brainer: we see it on ad networks that boast a qualified audience or a qualified system of manual segmentation. It only makes sense that we would apply the same logic to how we retarget our own audience: we want to spend more on the audience that matters, the ones that got away.

Identify, Qualify & Segment website visitors with a personalized website experience.

Your website is the story you choose to tell: to prospects, to candidates, to investors, to journalists & analysts. Everyone who wants to know how you talk about yourself goes to your website. Your website starts off simple: you speak authentically to your Ideal Customer Profile (ICP). You make it easy for them to understand your differentiation, pricing, and how to get in touch with you.

Then you grow and begin to sell to different businesses with different budgets and different needs. Telling a single story to a single user therefore becomes increasingly complicated. Should your core message focus on enterprise or self-serve? Should your CTAs direct to ‘create a free account’ or ‘schedule a demo’? How important is it to make pricing easily accessible vs. documentation for how to get started?

Identifying, qualifying & segmenting your prospects with personalized messaging can be a full-time job for SaaS companies. This MadKudu play, however, can take a lot of the pain out of rapid experimentation.

Force the Funnel: Identify, Qualify, Personalize.

Our goal with Force the Funnel is to provide the optimal website experience for every qualified account. This play is great for SaaS businesses selling both to self-serve & enterprise. It also helps if you’re targeting distinctly different customer segments (e.g: financial services & luxury goods). In order to achieve this play, we’ll need three things:

  • IP lookup: we’ll be using Clearbit Reveal for this.
  • Lead Scoring: we’ll be using MadKudu for this.
  • Website personalization: we’ll be using Intellimize for this.

We’ll also be connecting all of these through Segment as usual. Let’s dive in and see what happens:

Focus on qualified traffic

First and foremost, we are going to focus our efforts on personalizing our site for qualified traffic. The two reasons behind that are:

  1. We don’t want to measure success based on how personalization affects unqualified traffic.
  2. We don’t want to spend resources trying to help unqualified traffic convert better.

Qualifying traffic has become pretty easy with the advent of IP Lookup APIs – the most popular being Clearbit Reveal. Feed Clearbit an IP address and it returns the visitor’s company. This is enough to score an account. We’ll be scoring with MadKudu, but you can also do it with your homegrown Lamb or Duck lead scoring model. We’ll send MadKudu the domain name provided by Clearbit, which will return three important data points:

  • Customer Fit segment: very good, good, medium, low
  • Predicted Spend: custom based on your specific pricing plans, predicting which plan a lead is most likely to purchase.
  • Topical Segmentation: custom based on your target segments (e.g for Algolia: ecommerce, media, SaaS).

Now that we’ve identified, qualified & segmented our audience, we’re ready to personalize our site. There are a lot of personalization/experimentation/testing platforms. We’re using Intellimize here because we want Intellimize to do all the heavy-lifting of designing and running experiments. Intellimize uses machine learning to generate, analyze & optimize experiments. They also pull up some pretty interesting insights around how different personas behave.

The Impact: +30% conversion rate

Segment found that by removing buttons linking to the pricing page for qualified enterprise accounts, they increased conversion to demo scheduling by 30%. We’re optimizing the upside by focusing on improving the buyer experience for qualified traffic. This dovetails nicely with other Fastlane plays via chatbots, lead capture forms & gated content.

If you’re running A/B tests on your entire traffic, you may be skewing your results & analysis in favor of what unqualified traffic does (see: Segmenting Funnel Analysis by Customer Fit). The key impact here is that we’re segmenting qualified traffic with AI-driven experimentation meant to optimize for the results we want: more demo requests, more signups, more leads captured.

Allow qualified demo requests to book a meeting with you

There is no better lead than an inbound sales request. The intent is high & clear: they want to evaluate you as a vendor. Studies show the last thing a prospect does before buying is talk to sales. The only real question SaaS companies ask themselves is “do we want them as a customer?”

Perhaps this is why most demo request forms act as intentional hurdles. They require qualified traffic to prove themselves worthy of speaking to sales, because reps don’t want to waste time on low-quality leads. This creates unnecessary friction in the buyer journey. Once leads fill in nine fields (on average), they wait 5-7 days before speaking with a rep. 50% of buyers say they choose the first vendor they talk to. Is that friction really something an organization can afford?

Imagine if we were to redesign the buyer journey to provide the best experience for high-quality leads. Undoubtedly we would ask for as little information as possible and make it as easy as possible for leads to book a meeting with sales.

We can’t block low-quality leads from coming to our website, but MadKudu & Calendly make for a pretty powerful combo for this quintessential pipeline growth play: The Original Fastlane.

Removing friction & adding pipeline

The goal of this play is to give qualified leads access to a sales rep’s calendar so they can book time immediately. We want them to skip the form-filling and email back-and-forth. In order to execute this play, we’ll need to add MadKudu Fastlane to our demo request form, and we’ll need to leverage a scheduling tool like Calendly.


MadKudu needs an email address to score a lead, so we’ll want to make our email input one of the first fields in our demo request form. FastLane bundles two important steps together:

  1. It sends the email to MadKudu to score & qualify as the prospect fills in the form.
  2. If qualified, MadKudu FastLane triggers a message that lets the buyer book a meeting with a sales rep directly.
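The two steps above can be sketched as server-side logic. The scoring function is a stub for the MadKudu call, and the Calendly URL, qualification rule and field names are all hypothetical:

```python
CALENDLY_URL = "https://calendly.com/acme-sales/demo"  # hypothetical booking link

def score_email(email):
    """Stub for the MadKudu scoring call; real scoring is far richer."""
    domain = email.split("@")[-1]
    # Toy rule: treat common personal-email domains as low fit.
    return "low" if domain in ("gmail.com", "yahoo.com") else "good"

def fastlane(email):
    """Step 1: score the email. Step 2: offer the calendar or the full form."""
    if score_email(email) in ("good", "very good"):
        return {"action": "show_scheduler", "url": CALENDLY_URL}
    return {"action": "show_full_form", "extra_fields": ["company", "role"]}

print(fastlane("jane@bigco.com"))     # qualified -> straight to the calendar
print(fastlane("someone@gmail.com"))  # unqualified -> full form
```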

Designing the buyer journey for your best leads

One of the more creative implementations of this comes from Outreach, who hide all unnecessary fields on their forms.

Outreach has hidden all fields that are non-essential to follow-up, leaving only email & phone number. If MadKudu qualifies the lead as a good fit, Outreach lets the lead submit their slimmed-down form. What a great buyer experience.

When MadKudu identifies an unqualified lead, MadKudu dynamically adds new fields to Outreach’s lead capture form. Unqualified traffic is often the result of a personal email address, so extra information may provide pertinent context.

The Impact: +60% Pipeline

Re-imagining lead forms is a quick win that can have a big impact. We underestimate the impact of operational friction between us & qualified leads. Reducing form fields and eliminating email tag means that you’re talking to more qualified leads faster. Segment increased their pipeline by 60%, and Udemy closed an enterprise client in 24 hours within weeks of deploying this play.

If you’re looking to hit your pipeline goals this quarter, start by looking at your existing forms: are your best leads getting the best experience?

Training Facebook to bid on your best leads

The Facebook Pixel is the gold standard of paid acquisition because of its powerful targeting AI. Retailers, for example, feed transactional data into Facebook’s AI to train its bidding engine. Facebook then optimizes bidding for consumers who are most likely to buy from that retailer. The nearly instantaneous feedback loop enables fast iteration on paid acquisition strategies. You should never bid more on a lead than what they are worth to your business.

The Facebook Pixel has some limitations, though, which can make it difficult for SaaS companies to fully leverage. Facebook only holds onto data from the past 28 days, which means that purchase data from sales cycles longer than 28 days cannot be fed back into Facebook’s AI. Moreover, Facebook’s AI learns faster when events happen sooner. There is a huge incentive for SaaS companies in particular to optimize towards an event higher in the funnel.

MadKudu’s AI is training Facebook’s AI.

This raises a bit of an issue for SaaS companies. Most are spraying and praying ad dollars: they bid low on a massive audience because they are unable to identify and optimize for high-quality leads.

Fast-growing SaaS companies like Drift are doing things differently. They feed MadKudu data into Facebook’s AI, enabling them to optimize bidding against leads which MadKudu would score high. In short, Madkudu’s AI is training Facebook’s AI.

Translating MadKudu data for Facebook

The goal is to feed transactional data to Facebook that it can use to optimize bidding against leads that we want. MadKudu’s predictive score identifies a lead’s value at the top of the funnel. We just need to capture that lead data as early as possible and send it in a way that Facebook understands.

There are two main attributes that Facebook is looking for to train its AI – an individual and their “value.” For eCommerce, that typically means feeding a purchase back to Facebook; however, we need to adapt our value a bit.

MadKudu is good at predicting the amount that a lead will spend based on historical deal data. This helps us differentiate between self-serve and enterprise leads, for example. Of course, not all leads will convert (even the very good ones), so in order to create our predicted value to send back to Facebook, we can adjust the predicted spend by the likelihood to convert (two variables MadKudu generates natively for all leads). The result is the following:

Lead Value = % Likelihood to Convert × Predicted Spend

If a lead has a 10% chance of converting to $30,000 in ARR, we can send Facebook a “transaction” worth $3,000 as soon as the lead is generated. Now we can send data to Facebook almost immediately to train its model. We’re training Facebook’s AI to value the same types of leads that we value internally using MadKudu.
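The arithmetic is simple enough to sketch in a few lines of Python. This is purely illustrative – the function and argument names are ours, not MadKudu’s actual API:

```python
def lead_value(conversion_likelihood: float, predicted_spend: float) -> float:
    """Expected lead value: likelihood to convert times predicted spend.

    Both inputs are assumed to come from MadKudu's predictive attributes;
    the function itself is just the formula above.
    """
    return conversion_likelihood * predicted_spend

# A lead with a 10% chance of converting to $30,000 in ARR is worth
# a $3,000 "transaction" that can be sent to Facebook immediately.
print(lead_value(0.10, 30_000))  # 3000.0
```

The point is that this value exists on day one, so Facebook’s AI gets its training signal in minutes rather than at the end of a months-long sales cycle.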

The easiest way to capture lead information with MadKudu is to use MadKudu FastLane. It’s a single line of JavaScript that turns any lead form into a dynamic, customer-fit-driven lead capture device. The same mechanism that helps Drift convert more leads into demo calls is training Facebook’s AI. Not bad.

The Impact: 300%

For Drift, the impact was clear and immediate. A 300% increase in conversion from Facebook spend means reaching a larger audience for cheaper. With MadKudu FastLane sending transactional data back to the Facebook Pixel (via Segment), Drift enables Facebook to spend only on leads that MadKudu will score well – a signal Drift already knows predicts conversion to customers.

By building its growth & marketing foundation on top of MadKudu as a unified metric for predicted success, Drift is able to extend MadKudu to its paid acquisition by leveraging our API and our many integrations. Connecting Facebook Pixel and MadKudu to Segment takes minutes, after which Drift can easily pipe MadKudu data to the Facebook Pixel in real time.

Get in touch to learn more here.

The Three Stages of Lead Scoring: Lambs, Ducks & Kudus

In the past year we talked with hundreds of SaaS companies about their marketing technology infrastructure. While we’re always excited when a great company comes to us looking to leverage MadKudu in their customer journey, we’ve noticed two very common scenarios when it comes to marketing technology. We have seen (1) companies looking to put complicated systems in place too early, and (2) companies that are very advanced in their sales organization that are impeding their own growth with a basic/limiting MarTech stack.

I want to spend some time diving into how we see lead scoring evolving as companies scale. It is only natural that the infrastructure that helped you get from 0-1 be reimagined to go from 1-100, and again from 100-1000 – and so on.

Scaling up technical infrastructure is more than spinning up new machines – it’s sharding and changing the way you process data, replicating data across multiple machines to ensure worldwide performance.

Likewise, scaling up sales infrastructure is more than hiring more SDRs – it’s segmenting your sales team by account value (SMB, MM, Ent.), territory (US West, US East, Europe, etc.) and stage of the sales process (I/O SDR, AE, Implementation, Customer Success).

Our marketing infrastructure invariably evolves – the tiny tools that we scraped together to get from 0-1 won’t support a 15-person team working solely on SEO, and your favorite free MAP isn’t robust enough to handle complex marketing campaigns and attribution. We add in new tools & methods and factor in new calculations like revenue attribution, sales velocity & buyer persona, often requiring a transition to more robust platforms.

With that in mind, let’s talk about Lead Scores.

Lead Score (/lēd skôr/) noun.

A lead score is a quantification of the quality of a lead.

Companies at all stages use lead scoring, because a lead score’s fundamental purpose never changes: the higher the score, the higher the quality of the lead.

How we calculate a lead score evolves as a company hits various stages of development.

Stage 1: “Spam or Lamb?” – Ditch the Spam, and Hunt the Lamb.

Early on, your sales team is still learning how to sell, who to sell to, and what to sell. Sales books advise hiring “hunters” early on – salespeople who thrive on the challenge of wading through the unknowns to close a deal.

Any filtering based on who you think is a good fit may push out people you should be talking to. Marketing needs to provide hunters with tasty lambs they can go after, and you want them to waste as little time on Spam as possible (a hungry hunter is an angry hunter).

Your lead score is binary: it’s too early to tell a good lamb from a bad lamb, so marketing serves up as many lambs as possible so hunters can hunt efficiently. Stepping back from my belabored metaphor for a second, marketing needs to enable sales to follow up quickly with fresh, high-quality leads. You don’t want to miss deals because you were talking to bad leads first, and you want to begin building a track record of what your ideal customer profile looks like.

Lambs vs. Spam

Distinguishing between Lambs (good leads) & Spam (bad leads) will be largely based on firmographic data about the individual and the company. A Lamb is going to be either a company with a budget or a title with a budget: the bigger the company (more employees), or the bigger the title (director, VP, CXO), the more budget they will have to spend.

Spam, meanwhile, will be visible by its lack of information, either because there is none or because it’s not worth sharing. At the individual level, a personal or student email will indicate Spam (hunters don’t have time for non-businesses), as will a vSMB (very small business). While your product may target vSMBs, they often use personal emails for work anyway (e.g: DavesPaintingCo@hotmail.com) and when they do pay, they just want to put in a credit card (not worth sales’ time).

Depending on the size of your funnel and your product-market fit, this style of lead score should cover you through your first 5 SDRs, your first 50 employees, your first 100 qualified leads per day, or your first $1 million in ARR.

Stage 2: “If it looks like a Duck” – point-based scoring.

Those lambs you hunted for 12-18 months helped inform what type of leads you’re going after, and your lead score will now need to prioritize leads that most look like your ideal customer profile (ICP). I call this “if it looks like a Duck.”

Your Duck might look something like (A)Product Managers at (B)VC-backed, (C)US-based (D)software businesses with (E)50-200 employees. Here our duck has five properties:

  • (A) Persona = Product Managers
  • (B) Companies that have raised venture capital
  • (C) Companies based in the United States
  • (D) Companies that sell software
  • (E) Companies with 50-200 employees

Your lead score is going to be a weighted function of each of these variables. Is it critical they be venture-backed, or can you sell to self-funded software businesses with 75 employees as well? Is it a deal-breaker if they’re based in Canada or the U.K.?

Your lead score will end up looking something like this:

f(Duck) = An₁ + Bn₂ + Cn₃ + Dn₄ + En₅

Here, the weights n₁ through n₅ are defined based on the importance of each attribute.

Leads that look 100% like your ICP will score the highest. Good and medium-scoring leads should get lower prioritization but still be routed to sales, as long as they meet at least one of your critical attributes and one or two other attributes.
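A point-based Duck score amounts to a weighted sum over boolean attributes. The sketch below uses hypothetical attribute names and weights (chosen to sum to 100 so a perfect-fit lead scores 100) – your own n₁…n₅ would come from your team’s judgment about what matters:

```python
# Hypothetical weights n1..n5 for attributes (A)-(E); tune to your business.
WEIGHTS = {
    "is_product_manager": 30,    # (A) persona
    "is_vc_backed": 15,          # (B) funding
    "is_us_based": 15,           # (C) geography
    "sells_software": 20,        # (D) industry
    "employees_50_to_200": 20,   # (E) company size
}

def duck_score(lead: dict) -> int:
    """Point-based score: sum the weight of every attribute the lead matches."""
    return sum(w for attr, w in WEIGHTS.items() if lead.get(attr))

perfect_duck = {attr: True for attr in WEIGHTS}
print(duck_score(perfect_duck))           # 100
print(duck_score({"is_us_based": True}))  # 15
```

Notice the model’s limitation, which the next section explores: every input must be reducible to a single flat attribute with a fixed weight.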

You can analyze how good your lead score was at predicting revenue on a quarterly basis by looking at false positives & false negatives.

This lead score model will last you for a while with minor tweaks and adjustments; however, one of a number of things will eventually happen that will make your model no longer effective:

Complex Sales Organization

A complex sales organization arises from having a sales process that is no longer linear (i.e: more than “they look like our ICP, so they should talk to sales”). Here are a few examples (though not an exhaustive list):

You may begin selling to different market segments with a tiered sales team: a point-based lead scoring system only works for one market segment, so you’ll have to continually adjust attributes as you tier your sales team instead of adapting to their needs for increased sales velocity.

You may begin upselling at scale: a good lead for upsell is based not on their firmographic profile but on their behavioral profile. Point-based behavioral attributes won’t work for new leads, and the score is often the result of aggregate behavior across multiple users & accounts, too complex to map to a point-based lead score model (this is often called Marketing Qualified Accounts).

You may begin to find that a majority of your revenue is coming from outside your ICP, no matter how you weigh the various attributes. If you only accept leads that fit your ICP, you won’t hit your growth goals. Great leads are coming in that look nothing like what you expected, but you’re still closing deals with them. Your ICP isn’t wrong, but your lead score model needs to change. We’ve written about this in depth here.

When that happens, you’ll need to move away from manually managing a linear model to a more sophisticated model, one that adapts to the complex habitat in which your company now operates and wins by being smarter.

Stage 3: “Be like a Kudu” – Adapt to your surroundings

As your go-to-market strategy pans out and you begin to take market share across multiple industries/geos/company sizes, the role of your lead score will stay the same: fundamentally, it should provide a quantitative evaluation of the quality of each lead.

Different types of leads are going to require different types of firmographic & behavioral data:

  • Existing customers: product usage data, account-level firmographic data
  • SMB (velocity) leads: account-level firmographic data.
  • Enterprise leads: individual-level firmographic data across multiple individuals, analyzed at the account level.

Your model should adapt to each situation, ingest multiple forms of data, and contextually understand whether a lead is a good fit for any of your products or sales teams. As your product evolves to accommodate more use cases, your lead scoring model needs to evolve regularly, ingesting the most recent data and refreshing accordingly.

Predictive Lead Scoring

Predictive lead scoring adapts to the needs of growth-stage B2B businesses because the model is designed to predict a lead’s likelihood to convert based on historical data, removing the need to manually qualify leads against your ICP.

Predictive lead scoring models are like Kudus: they are lightning fast (did you know kudus can run 70 km/h?) and constantly adapt to a changing environment.

Kudus are active 24/7 (they never sleep), and their distinct coloration is the result of evolving to adapt to their surroundings & predators.

The advantage of a predictive lead scoring model is that the end result remains simple – good vs. bad, 0 vs. 100 – regardless of how complex the inputs get – self-serve or enterprise, account-based scoring, etc.

Operationalizing a predictive lead scoring model can be time-intensive: ingesting new data sources as the rest of your company infrastructure evolves and takes on more tools with data your model needs, refreshing the model regularly, and maintaining marketing & sales alignment.

Making the switch to a predictive lead scoring model only truly makes sense when your sales organization has reached a level of complexity that requires it to sustain repeatable growth.

“Where should my business be?”

Now that we’ve looked at how lead scoring models evolve as your marketing & sales organization grows, let’s come back to our initial conversation about what type of model you need for your current business problems.

As businesses scale, some buy a tank when a bicycle will do, while others are trying to make a horse go as fast as a rocket. We’ve put together a quick benchmark to assess the state of your go-to-market strategy and where your lead scoring model should be.

Some companies can stick with a Lamb lead scoring model up through 50 employees, while others need a predictive lead scoring model at 75 employees. While there are some clear limiting factors like sales organization complexity and plentiful historical data, understanding the core business problem you’re trying to solve (in order to scale revenue) will help guide reflection as well.

Why Lead Scores don’t reflect your Ideal Customer Profile

Marketing & sales alignment is fragile. Sales pushes back on leads whose scores diverge from their intuition: “Why am I getting assigned a lead based in India? We never close deals in India.” “Why is this lead scored low? We’re supposed to be going after accounts just like this.”

When sales pushes back on lead scoring, they lose confidence in the lead score. They stop using it to prioritize outreach and don’t follow up with good leads sent their way. Marketing feels frustrated that their work isn’t valued as they watch MQL disqualification rise and MQL-to-Opportunity conversion rates fall. Each side blames the other.

As we’ve discussed this problem with some of the best marketing ops leaders in the software industry, a common source of disconnect has been a fundamental misunderstanding of the relationship between Lead Scores & Ideal Customer Profile (ICP).

Time & time again, marketing & sales teams expect that the leads who score highest should be the ones that look most like their ICP, and that’s false.

Few teams have explicitly discussed this, so let’s dive in.

Defining your ICP & Lead Score

You’ve done your persona research. You know everything about Grace the Growth Guru, Frank the Finance Freak or Sheila the Sales Sherpa (persona researchers love alliteration). You know exactly the type of customers you want to go after, so you build out your ICP – company size, geography, industry, revenue, integrations – as a function of the type of customer you want to go after. Great.

Your ICP will help guide you in your product roadmap – “What does Molly need?”, “How does this bring value to CompanyX?” – as well as your marketing & sales strategy.

Your ICP is the goal. It’s where you want to go. It can and should be informed by the past (data), but it is a representation of where you want to go, not where you are.

A Lead Score, meanwhile, is a quantifiable valuation of the quality of a lead. In early stage companies, it is often used to weed out spam and elevate big name VIPs to the top. As a company grows, the sales process complexities increase: tiered sales teams for self-serve vs. enterprise, geo-specific assignment, mixed inbound/outbound strategy & growth teams competing against both.

Any lead that looks identical to 100 leads that all turned into opportunities should be routed to sales and prioritized with a VIP treatment. Any lead that looks identical to 100 leads that stick to a free plan or have long sales cycles for low deal amounts should be ignored or prioritized as low importance.

When (and why) Lead Score & ICP differ in opinion.

“This lead is garbage”

Intuition is a powerful thing, and it often serves salespeople well as they build relationships with prospects to help them solve a core problem. However, salespeople interact with <1% of all leads, and their sense of lead quality is often based on a single qualitative data point. When a high-scoring lead “doesn’t look good,” it usually comes down to a single data point.

Last year we encountered a sales team who wanted to override the score for leads based in India. They believed the market was not valuable to them, both in terms of available budget and operational costs. In reality, 10% of their new revenue in the previous quarter had come from India. When they understood that, they asked that the country be hidden from their sales team.

There’s a lot to unpack here. Of course, it’s not good to have a sweeping bias about an entire country, especially when it is to the detriment of your sales goals. For this company, we’re also not saying that all leads from India should get prioritization. We’re saying that they should prioritize 100 good leads from India just as they would prioritize 100 good leads from anywhere else.

Intuition is powerful, but data doesn’t lie. Marketing has a responsibility not only to be data-driven, but to make the insights from that data available to all customer-facing teams. Modern marketing teams can enable modern sales teams not by providing them with 100 data points about every lead, but by providing a few key data points that explain why a lead gets scored the way it does.

At MadKudu we call that Signals and it looks like this:

The combination of relevant firmographic & behavioral data points lets the sales team know why a lead scored a 92.

Inflection Points

Launching into new verticals & markets can present a real conundrum for your lead score. Historically, leads from, say, Japan have not converted (because you weren’t targeting the Japanese market, weren’t compliant, or weren’t a well-suited option), but this year you’re pushing into Japan, and your upcoming campaign should bring in hundreds of new leads from Japan. You have expanded your ICP, but your Lead Score is still measuring the likelihood of conversion based on historical data.

The same problem arises as companies go up market, selling to increasingly large businesses. The added complexity here is that enterprise sales fundamentally looks different from velocity sales, so even if you’ve closed some enterprise clients in the past, your lead score may be heavily skewed towards velocity sales, making it hard to surface enterprise leads. You’ve expanded your ICP to include a new breed of business.

Overcoming Predictive Bias & Training your Model

ICP & Lead Scores diverge at inflection points. Fast-growing businesses need to increase existing market share at the same time as they seek to expand into new markets. Among leading go-to-market teams, we’ve observed two trends that make this combination possible at scale.

One method is creating dedicated Business Development & Growth teams whose purpose is to bypass the lead score and focus on new market development, booking meetings directly for AEs. BDRs & Growth teams build up historical data over 3-6 months that can be used to retrain your lead score to account for the new market.

Another method is to create hard exclusions that override your lead score. If your lead score is operationalized across the entire buyer journey, this is a quick way to experiment with new markets in an automated way, but it should be rare. Hard exclusions are like a blindfold for predictive lead scores – you’re removing one of many signals from the equation, increasing the likelihood of false positives. Still, as you make strategic changes to your business, they may be necessary for overcoming predictive bias.
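One way to picture a hard override is as a thin rule layer that runs after the predictive model and before routing. Everything in this sketch – the market list, the threshold of 80, and the field names – is a hypothetical illustration, not a MadKudu feature:

```python
# Markets the business wants to develop despite weak historical conversion data.
STRATEGIC_NEW_MARKETS = {"JP"}  # hypothetical: pushing into Japan this year

def routed_score(lead: dict, predictive_score: int) -> int:
    """Apply hard overrides before routing: force leads from strategic new
    markets past the priority threshold even when the historical model,
    blind to the strategy change, scores them low."""
    if lead.get("country") in STRATEGIC_NEW_MARKETS:
        return max(predictive_score, 80)  # treat as high priority regardless
    return predictive_score

print(routed_score({"country": "JP"}, 12))  # 80 -- overridden
print(routed_score({"country": "US"}, 12))  # 12 -- model score stands
```

The danger the text describes is visible here: the override ignores every other signal about the lead, so a genuinely bad Japanese lead gets the same VIP treatment as a great one until the model is retrained on real data from the new market.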

Actionable Definitions

It is vital to have a common understanding across marketing & sales around what these tools are, how they are made, and how they should be used. While your Ideal Customer Profile paints a picture of who your vision will serve in the next year, your lead score needs to be the best way for sales to prioritize outreach.

I’ve compiled a quick chart based on what we’ve seen from customers to illustrate the differences between ICP & Lead Score.  

This is something you can use to start a conversation at your next marketing & sales meeting about how Outbound Sales Strategy should be informed by your ICP or how you can increase forecast revenue for the next quarter based on the number of highly qualified leads you’re bringing in.

Building a Shadow Funnel

Marketing is becoming an engineer’s game. Marketing tools come with Zapier integrations, webhooks and APIs. Growth engineers finely tune their funnel, each new experiment – an ebook, a webinar, ad copy or a free tool – plugging into or improving upon the funnel.

Growth engineers fill the top of their funnel by targeting prospects who look like a good fit for their product but haven’t engaged yet. Guillaume Cabane, VP Growth at Drift, has been sharing his experiments leveraging intent data for years. Intent data allows Guillaume to discern the intentions of potential buyers by providing key data points about what they are doing or thinking about doing.

A quick review of the three main categories of Intent Data

  • Behavioral Intent: This includes 1st party review sites like G2Crowd, Capterra & GetApp, as well as Bombora, which aggregates data from industry publications & analysts. They provide Drift with data about which companies are researching their industry, their competitors, or Drift directly. (e.g: “Liam from MadKudu viewed Drift’s G2Crowd Page”)
  • Technographics: Datanyze, HGData & DemandMatrix provide data about companies that are installing & uninstalling technologies, tools or vendors (e.g: “MadKudu uninstalled Drift 30 days ago”)
  • Firmographics: Clearbit, Zoominfo & DiscoverOrg offer data enrichment tools starting from a website domain or email, providing everything from headquarter location to employee count.

In a standard buyer journey, the right message and medium depends on where a prospect is in the funnel:

  • Awareness: do they know about the problem you solve?
  • Consideration: are they evaluating how to solve a problem?
  • Decision: are they evaluating whether to use you to solve their problem?

Drift began looking at whether we could help them determine the next best action for every prospect and account in their total addressable market (TAM). TAM can be calculated as the sum of all qualified prospects who have engaged with you (MQLs) + all qualified prospects who have not engaged with you.

TAM = MQLs + SMQLs

I’ll call the latter Shadow MQLs (SMQLs): more precisely, any prospect that is showing engagement in your industry or with one of your competitors, but not with you.

Drift already leveraged MadKudu to determine when & how to engage with MQLs in their funnel, but they needed to automate the next best action for SMQLs. Should a sales person call them? Or should Drift send them a personalized gift through Sendoso?

Our strategy for determining the next best action involved mapping intent data to the standard buyer journey stages. By doing this, we could build what I call a Shadow Funnel.

For this experiment, we focused on four intent data providers:

  1. G2Crowd: a review site that helps buyers to find the perfect solution for their needs. They send Drift data about who is looking at their category (live chat) or Drift’s page.
  2. SEMRush: a tool that provides information about the paid marketing budget of accounts.
  3. Datanyze: this gives us information about which technologies are being used on websites.
  4. (Clearbit) Reveal: tells us the accounts that are visiting our website.

In order to build our shadow funnel, we need to define Shadow stages of the buyer journey:

  • Awareness: understands the industry you operate in.
  • Consideration: looking at specific vendors (not you).
  • Decision: evaluating specific vendors (not you).

MadKudu’s role in this funnel is to determine whether the SMQL is showing High, Medium, or Low predicted conversion. Here is a table illustrating the data points we mapped to each stage & fit level:

By matching Datanyze & G2Crowd data, for example, Drift can identify accounts who have uninstalled one of Drift’s competitors in the past 30 days and have begun researching the competition. Without ever visiting a Drift property (which would, in turn, enter them into Drift’s real funnel), MadKudu predicts a high probability that this account is in the process of considering a new solution in their space.
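The stage mapping can be sketched as a lookup from intent signals to Shadow Funnel stages, where an account is placed at the deepest stage its signals support. The signal names below are hypothetical placeholders for the G2Crowd and Datanyze data points, not actual field names from either provider:

```python
# Hypothetical mapping of intent signals to Shadow Funnel stages.
STAGE_ORDER = ["shadow_awareness", "shadow_consideration", "shadow_decision"]
SIGNAL_TO_STAGE = {
    "viewed_category_page": "shadow_awareness",        # e.g. G2Crowd category views
    "viewed_competitor_page": "shadow_consideration",  # researching specific vendors
    "uninstalled_competitor": "shadow_decision",       # e.g. Datanyze uninstall event
}

def shadow_stage(signals):
    """Return the deepest Shadow Funnel stage an account's signals support,
    or None if no mapped signals are present."""
    stages = [SIGNAL_TO_STAGE[s] for s in signals if s in SIGNAL_TO_STAGE]
    return max(stages, key=STAGE_ORDER.index) if stages else None

# An account that uninstalled a competitor and is browsing the category
# lands at the Shadow Decision stage.
print(shadow_stage(["viewed_category_page", "uninstalled_competitor"]))
```

A fit score from MadKudu (High/Medium/Low) would then sit alongside the stage, so the next best action depends on both how far along the account is and how good a fit it looks.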

With a traditional funnel, the goal is to fill it and optimize for conversion down-funnel. Awareness campaigns drive traffic, acquisition campaigns drive email capture, and conversion campaigns increase sales velocity & conversion.

The goal of the Shadow Funnel is the opposite. Drift wants the funnel to be empty and to have everyone who is in it churn out.

Rephrasing our previous TAM equation, we can state the following:

TAM = Funnel + Shadow Funnel

Anyone in your TAM who isn’t in your Funnel is in your Shadow Funnel, and anyone in your TAM who isn’t in your Shadow Funnel is therefore in your Funnel.

The goal then becomes to move horizontally:

  • we want Shadow prospects to move from Shadow Aware (i.e: aware of the industry) to Aware (of you).
  • we want prospects at the Shadow Decision stage (i.e: deciding which tool to use, that isn’t yours) to move to the Decision phase (i.e: deciding whether or not to use you).
  • And so on.

Once you know where your target audience is in the buyer process, you can deliver targeted messaging to pull them from the Shadow Funnel into your funnel.

Next Steps: evaluating intent as predictive behavior.

For now, the Shadow Funnel is a proof of concept. Through this method, Drift identified 1,000+ new qualified accounts to engage with. Once we have some historical data to play with, our next step will be to build a model to determine which intent data sources are best at predicting Shadow Funnel conversion. We’ll also want to look at which engagement methods show the most promise.

Can the same engagement tactics that work on the traditional funnel work on the Shadow Funnel? Does a thought-leadership retargeting ad on LinkedIn have the same impact if an account has never engaged with you before? Does looking at a category on G2Crowd reliably predict whether an account is interested in considering your product?

We are excited to continue to explore this with Drift and other SaaS companies leveraging intent data to engage qualified prospects who need their product before those prospects engage with them. This is a natural evolution of the B2C strategies that eCommerce & travel companies have employed in recent years, but tailored towards helping companies looking for answers get those answers faster.

We’ll be talking more about this strategy with Drift & Segment on our upcoming webinar here.