Training Facebook to bid on your best leads

Facebook Ads has become the gold standard of paid acquisition because of Facebook’s powerful targeting algorithm. Retailers, for example, feed transactional data into Facebook’s algorithm to train its bidding engine. Facebook then optimizes bidding for the consumers who are most likely to buy from that retailer. This nearly instantaneous feedback loop enables fast iteration on paid acquisition strategies. After all, you should never bid more on a lead than what they are worth to your business.

Facebook Ads has some limitations, though, which make it less powerful for B2B companies. Facebook only holds onto data from the past 28 days, which means that purchase data from sales cycles longer than 28 days cannot be fed back into Facebook’s algorithm. Facebook’s algorithm also learns faster when events happen sooner, which again works against longer sales cycles.

MadKudu’s AI is training Facebook’s AI.

This raises a bit of an issue for SaaS companies. Most are spraying and praying ad dollars. They can only optimize on clicks instead of lead value, because they are unable to identify and optimize for high-quality leads.

Fast-growing SaaS companies like Drift are doing things differently. They feed MadKudu data into Facebook’s algorithm, enabling them to optimize bidding toward leads that MadKudu scores highly. In short, MadKudu’s AI is training Facebook’s AI.

Translating MadKudu data for Facebook Ads

The goal is to feed transactional data to Facebook that it can use to optimize bidding against leads that we want. MadKudu’s predictive score identifies a lead’s value at the top of the funnel. We just need to capture that lead data as early as possible and send it in a way that Facebook understands.

There are two main attributes that Facebook is looking for to train its AI – an individual and that individual’s “value.” For eCommerce, that typically means feeding a purchase back to Facebook; for SaaS leads, we need to adapt that notion of value a bit.

MadKudu is good at predicting the amount that a lead will spend based on historical deal data. This helps us differentiate between self-serve and enterprise leads, for example. Of course, not all leads will convert (even the very good ones), so in order to create our predicted value to send back to Facebook, we can adjust the predicted spend by the likelihood to convert (two variables MadKudu generates natively for all leads). The result is the following:

Lead Value = Likelihood to Convert (%) × Predicted Spend

If a lead has a 10% chance of converting at $30,000 in ARR, we can send Facebook a “transaction” worth $3,000 as soon as the lead is generated, training its model nearly immediately. We’re teaching the Facebook Ads algorithm to value leads without having to wait for the sales cycle to close.
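To make the mechanics concrete, here is a minimal sketch of how a predicted lead value could be reported to the Facebook Pixel as a standard Purchase event. This is not the actual FastLane code; the `reportLeadValue` helper and its inputs are illustrative, and it assumes the Pixel base snippet is already installed and exposes the global `fbq` function.

```typescript
// Minimal sketch (not the actual MadKudu FastLane): compute a predicted lead
// value and report it to the Facebook Pixel as a standard Purchase event.
// `likelihoodToConvert` and `predictedSpend` would come from your scoring model.
declare function fbq(action: "track", event: string, params?: Record<string, unknown>): void;

function reportLeadValue(likelihoodToConvert: number, predictedSpend: number): void {
  // Lead Value = likelihood to convert × predicted spend
  const leadValue = likelihoodToConvert * predictedSpend;

  fbq("track", "Purchase", {
    value: leadValue,
    currency: "USD",
  });
}

// e.g. a 10% chance to convert at $30,000 ARR → a $3,000 "transaction"
reportLeadValue(0.1, 30000);
```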

To pass the information between MadKudu and the Facebook Pixel, we used the MadKudu FastLane, which Drift already had on their website. It’s a simple line of JavaScript that turns any lead form into a dynamic, customer-fit-driven lead capture device. The same mechanism that helps Drift convert more leads into demo calls is also training Facebook’s algorithm – no extra coding required.

The Impact: 300%

For Drift (in a test run in partnership with Lightning.ai), the impact was clear and immediate: a 300% increase in conversion from Facebook. With MadKudu Fastlane sending transactional data back to the Facebook Pixel, Drift is enabling Facebook to spend only on leads that MadKudu scores well – scores Drift already knows predict conversion to customers.

By building its growth & marketing foundation on top of MadKudu as a unified metric for predicted success, Drift is able to extend MadKudu to its paid acquisition (in addition to its bottom-of-funnel use cases). Connecting MadKudu to the Facebook Pixel only takes a few minutes via the MadKudu FastLane.

Get in touch to learn more here.

The Three Stages of Lead Scoring: Lambs, Ducks & Kudus

In the past year we’ve talked with hundreds of SaaS companies about their marketing technology infrastructure. While we’re always excited when a great company comes to us looking to leverage MadKudu in their customer journey, we’ve noticed two very common scenarios when it comes to marketing technology: (1) companies looking to put complicated systems in place too early, and (2) companies with very advanced sales organizations that are impeding their own growth with a basic, limiting MarTech stack.

I want to spend some time diving into how we see lead scoring evolving as companies scale. It is only natural that the infrastructure that helped you get from 0-1 be reimagined to go from 1-100, and again from 100-1000 – and so on.

Scaling up technical infrastructure is more than spinning up new machines – it’s sharding and changing the way you process data, replicating data across multiple machines to ensure worldwide performance.

Likewise, scaling up sales infrastructure is more than hiring more SDRs – it’s segmenting your sales team by account value (SMB, MM, Ent.), territory (US West, US East, Europe, etc.) and stage of the sales process (I/O SDR, AE, Implementation, Customer Success).

Marketing infrastructure invariably evolves the same way – the tiny tools we scraped together to get from 0-1 won’t support a 15-person team working solely on SEO, and your favorite free MAP isn’t robust enough to handle complex marketing campaigns and attribution. We add in new tools & methods, and we factor in new calculations like revenue attribution, sales velocity & buyer personas, often requiring a transition to more robust platforms.

With that in mind, let’s talk about Lead Scores.

Lead Score (/lēd skôr/) noun.

A lead score is a quantification of the quality of a lead.

Companies at all stages use lead scoring because a lead score’s fundamental purpose never changes. The fundamental output never really changes either: the higher the score, the higher the quality of the lead.

How we calculate a lead score evolves as a company hits various stages of development.

Stage 1: “Spam or Lamb?” – Ditch the Spam, and Hunt the Lamb.

Early on, your sales team is still learning how to sell, who to sell to, and what to sell. Sales books advise hiring “hunters” early on, who will thrive off of the challenge of wading through the unknowns to close a deal.

Any filtering based on who you think is a good fit may push out people you should be talking to. Marketing needs to provide hunters with tasty lambs they can go after, and you want them to waste as little time on Spam as possible (a hungry hunter is an angry hunter).

Your lead score is binary: it’s too early to tell a good lamb from a bad lamb, so marketing serves up as many lambs as possible to hunters so they can hunt efficiently. Stepping back from my belabored metaphor for a second, marketing needs to enable sales to follow up quickly with fresh, high-quality leads. You don’t want to miss out on deals because you were talking to bad leads first, and you want to begin building a track record of what your ideal customer profile looks like.

Lambs vs. Spam

Distinguishing between Lambs (good leads) & Spam (bad leads) will be largely based on firmographic data about the individual and the company. A Lamb is going to be either a company with a budget or a title with a budget: the bigger the company (more employees), or the bigger the title (director, VP, CXO), the more budget they are going to have to spend.

Spam, meanwhile, will be recognizable by its lack of information, either because there is none or because it’s not worth sharing. At the individual level, a personal or student email will indicate Spam (hunters don’t have time for non-businesses), as will a vSMB (very small business). While your product may target vSMBs, they often use personal emails for work anyway (e.g: DavesPaintingCo@hotmail.com) and when they do pay, they just want to put in a credit card (not worth sales’ time).
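As an illustration, a Stage 1 “Spam or Lamb” filter can be as simple as the sketch below. The field names and thresholds are hypothetical examples, not MadKudu’s actual rules:

```typescript
// Illustrative Stage 1 filter: Spam or Lamb? Assumes the lead has already been
// enriched with an email, employee count and job title (all fields hypothetical).
interface Lead {
  email: string;
  employeeCount?: number;
  title?: string;
}

const PERSONAL_DOMAINS = ["gmail.com", "hotmail.com", "yahoo.com", "outlook.com"];
const BUDGET_TITLES = /director|vp|vice president|chief|cxo|founder/i;

function isLamb(lead: Lead): boolean {
  const domain = lead.email.split("@")[1]?.toLowerCase() ?? "";

  // Personal or student email → Spam (hunters don't have time for non-businesses)
  if (PERSONAL_DOMAINS.includes(domain) || domain.endsWith(".edu")) return false;

  // vSMB with no budget signal in the title → Spam (credit-card self-serve at best)
  if ((lead.employeeCount ?? 0) < 10 && !BUDGET_TITLES.test(lead.title ?? "")) return false;

  // Big enough company or big enough title → Lamb, route to a hunter
  return true;
}
```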

Depending on the size of your funnel and your product-market fit, this style of lead score should cover you until your first 5 SDRs, your first 50 employees, until you pass 100 qualified leads per day, or until roughly $1 million in ARR.

Stage 2: “If it looks like a Duck” – point-based scoring.

Those lambs you hunted for 12-18 months helped inform what type of leads you’re going after, and your lead score will now need to prioritize leads that look most like your ideal customer profile (ICP): I call this “if it looks like a Duck.”

Your Duck might look something like (A) Product Managers at (B) VC-backed, (C) US-based (D) software businesses with (E) 50-200 employees. Here our Duck has five properties:

  • (A) Persona = Product Managers
  • (B) Companies that have raised venture capital
  • (C) Companies based in the United States
  • (D) Companies that sell software
  • (E) Companies with 50-200 employees

Your lead score is going to be a weighted function of each of these variables. Is it critical they be venture-backed, or can you sell to self-funded software businesses with 75 employees as well? Is it a deal-breaker if they’re based in Canada or the U.K.?

Your lead score will end up looking something like this:

f(Duck) = A·n₁ + B·n₂ + C·n₃ + D·n₄ + E·n₅

Here n₁ through n₅ are weights defined based on the importance of each attribute.
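As a sketch, the point-based score could be computed like this; the attribute names and weights play the role of n₁ through n₅ above and are illustrative placeholders, not a recommended configuration:

```typescript
// Illustrative Stage 2 ("Duck") point-based lead score. The weights are example
// values only and should reflect how important each attribute is to your ICP.
interface DuckLead {
  isProductManager: boolean;    // (A) Persona = Product Manager
  isVentureBacked: boolean;     // (B) Raised venture capital
  isUSBased: boolean;           // (C) Based in the United States
  sellsSoftware: boolean;       // (D) Sells software
  has50To200Employees: boolean; // (E) 50-200 employees
}

const WEIGHTS = { A: 30, B: 15, C: 10, D: 25, E: 20 }; // hypothetical weights summing to 100

function duckScore(lead: DuckLead): number {
  return (
    (lead.isProductManager ? WEIGHTS.A : 0) +
    (lead.isVentureBacked ? WEIGHTS.B : 0) +
    (lead.isUSBased ? WEIGHTS.C : 0) +
    (lead.sellsSoftware ? WEIGHTS.D : 0) +
    (lead.has50To200Employees ? WEIGHTS.E : 0)
  ); // a lead matching 100% of the ICP scores 100
}
```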

Leads that look 100% like your ICP will score the highest, while good & medium-scoring leads should get lower prioritization but still be routed to sales as long as they meet at least one of your critical attributes and one or two other attributes.

You can analyze how good your lead score was at predicting revenue on a quarterly basis by looking at false positives & false negatives.

This lead score model will last you for a while with minor tweaks and adjustments; however, one of a number of things will eventually happen that will make your model no longer effective:

Complex Sales Organization

A complex sales organization comes from having a non-linear sales process – one that no longer reduces to “they look like our ICP, so they should talk to sales.” Here are a few (non-exhaustive) examples:

You may begin selling to different market segments with a tiered sales team: your point-based lead scoring system only works for one market segment, so you’ll have to keep adjusting attributes as you tier your sales team instead of adapting to its need for increased sales velocity.

You may begin upselling at scale: a good lead for upsell is based not on its firmographic profile but on its behavioral profile. Point-based behavioral attributes won’t work for new leads, and the score is often the result of aggregate behavior across multiple users & accounts – too complex to map to a point-based lead score model (this is often called Marketing Qualified Accounts).

You may begin to find that a majority of your revenue is coming from outside your ICP, no matter how you weigh the various attributes. If you only accept leads that fit your ICP, you won’t hit your growth goals. Great leads are coming in that look nothing like what you expected, but you’re still closing deals with them. Your ICP isn’t wrong, but your lead score model needs to change. We’ve written about this in depth here.

When that happens, you’ll need to move away from manually managing a linear model to a more sophisticated model, one that adapts to the complex habitat in which your company now operates and wins by being smarter.

Stage 3: “Be like a Kudu” – Adapt to your surroundings

As your go-to-market strategy pans out and you begin to take market share across multiple industries/geos/company sizes, the role of your lead score will stay the same: fundamentally, it should provide a quantitative evaluation of the quality of each lead.

Different types of leads are going to require different types of firmographic & behavioral data:

  • Existing customers: product usage data, account-level firmographic data.
  • SMB (velocity) leads: account-level firmographic data.
  • Enterprise leads: individual-level firmographic data across multiple individuals, analyzed at the account level.

Your model should adapt to each situation, ingest multiple forms of data, and contextually understand whether a lead is a good fit for any of your products or sales teams. As your product evolves to accommodate more use cases, your lead scoring model needs to evolve regularly, ingesting the most recent data and refreshing accordingly.

Predictive Lead Scoring

Predictive lead scoring adapts to the needs of growth-stage B2B businesses because the model is designed to predict a lead’s likelihood to convert based on historical data, removing the need to manually qualify leads against your ICP.

Predictive Lead Scoring models are like Kudus: they are lightning fast (did you know kudus can run 70 km/h?) and constantly adapt to their changing environment.

Kudus are active 24/7 (they never sleep), and their distinct coloration is the result of evolving to adapt to their surroundings & predators.

The advantage of a predictive lead scoring model is that the end result remains simple – good vs. bad, 0 vs. 100 – regardless of how complex the inputs get – self-serve or enterprise, account-based scoring, etc.

Operationalizing a predictive lead scoring model can be time-intensive: ingesting new data sources as the rest of your company infrastructure evolves and takes on more tools with data your model needs, refreshing the model regularly, and maintaining marketing & sales alignment.

Making the switch to a predictive lead scoring model only truly makes sense when your sales organization has reached a level of complexity that requires it to sustain repeatable growth.

“Where should my business be?”

Now that we’ve looked at how lead scoring models evolve as your marketing & sales organization grows, let’s come back to our initial conversation about what type of model you need for your current business problems.

As businesses scale, some buy a tank when a bicycle will do, while others are trying to make a horse go as fast as a rocket. We’ve put together a quick benchmark to assess the state of your go-to-market strategy and where your lead scoring model should be.

Some companies can stick with a Lamb lead scoring model up through 50 employees, while others need a predictive lead scoring model at 75 employees. While there are some clear limiting factors like sales organization complexity and plentiful historical data, understanding the core business problem you’re trying to solve (in order to scale revenue) will help guide reflection as well.

Why Lead Scores don’t reflect your Ideal Customer Profile

Marketing & sales alignment is fragile. Sales pushes back on leads whose scores diverge from their intuition: “Why am I getting assigned a lead based in India? We never close deals in India.” “Why is this lead scored low? We’re supposed to be going after accounts just like this.”

When sales pushes back like this, they’ve lost confidence in the lead score. They stop using it to prioritize outreach, and don’t follow up with good leads sent their way. Marketing feels frustrated that their work isn’t valued, and they see increasing MQL disqualification and reduced conversion rates from MQL to Opportunity. Each side blames the other.

As we’ve discussed this problem with some of the best marketing ops leaders in the software industry, a common source of disconnect has been a fundamental misunderstanding of the relationship between Lead Scores & Ideal Customer Profile (ICP).

Time & time again, marketing & sales teams expect that leads who score the highest should be the ones that most look like their ICP, and that’s false. 

Few teams have explicitly discussed this, so let’s dive in.

Defining your ICP & Lead Score

You’ve done your persona research. You know everything about Grace the Growth Guru, Frank the Finance Freak or Sheila the Sales Sherpa (persona researchers love alliteration). You know exactly the type of customers you want to go after, so you build out your ICP – company size, geography, industry, revenue, integrations – as a function of the type of customer you want to go after. Great.

Your ICP will help guide you in your product roadmap – “What does Molly need?”, “How does this bring value to CompanyX?” – as well as your marketing & sales strategy.

Your ICP is the goal. It’s where you want to go. It can and should be informed by the past (data), but it is a representation of where you want to go, not where you are.

A Lead Score, meanwhile, is a quantifiable valuation of the quality of a lead. In early stage companies, it is often used to weed out spam and elevate big name VIPs to the top. As a company grows, the sales process complexities increase: tiered sales teams for self-serve vs. enterprise, geo-specific assignment, mixed inbound/outbound strategy & growth teams competing against both.

Any lead that looks identical to 100 leads that all turned into opportunities should be routed to sales and prioritized with a VIP treatment. Any lead that looks identical to 100 leads that stick to a free plan or have long sales cycles for low deal amounts should be ignored or prioritized as low importance.

When Lead Score & ICP disagree, and why.

“This lead is garbage”

Intuition is a powerful thing, and it often serves salespeople well as they build relationships with prospects to help them solve a core problem; however, salespeople interact with <1% of all leads, and their sense of lead quality is often based on a single qualitative data point. When a lead that “doesn’t look good” scores high, it usually comes down to a single data point.

Last year we encountered a sales team that wanted to override the score for leads based in India. They believed the market was not valuable to them, both in terms of available budget and operational costs. Yet 10% of their new revenue in the previous quarter came from India. When they understood that, they asked that the country be hidden from their sales team.

There’s a lot to unpack here. Of course it’s not good to have a sweeping bias about an entire country, especially when it works to the detriment of your sales goals. For this company, we’re also not saying that all leads from India should get prioritization. We’re saying that 100 good leads from India should be prioritized just like 100 good leads from anywhere else.

Intuition is powerful, but Data doesn’t lie. Marketing has a responsibility not only to be data-driven, but to make the insights of that data available to all customer-facing teams. Modern marketing teams can enable modern sales teams not by providing them with 100 data points about every lead, but by providing a few key data points that explain why a lead gets scored the way it does.

At MadKudu we call that Signals and it looks like this:

The combination of relevant firmographic & behavioral data points lets the sales team know why a lead scored a 92.

Inflection Points

Launching into new verticals & markets can present a real conundrum for your lead score. Historically, leads from, say, Japan, have not converted (because you weren’t targeting the Japanese market, weren’t compliant, or weren’t a well-suited option), but this year you’re pushing into Japan and your upcoming campaign should bring in hundreds of new leads from Japan. You have expanded your ICP, but your Lead Score is still measuring the likelihood of conversion based on historical data.

The same problem arises as companies go up market, selling to increasingly large businesses. The added complexity here is that enterprise sales fundamentally looks different than velocity sales, so even if you’ve closed some enterprise clients in the past, your lead score may be heavily skewed towards velocity sales, making it hard to surface enterprise leads. You’ve expanded your ICP to include a new breed of business.

Overcoming Predictive Bias & Training your Model

ICP & Lead Scores diverge at inflection points. Fast-growing businesses need to increase existing market share at the same time as they seek to expand into new markets. Among leading go-to-market teams, we’ve observed two trends that make this combination possible at scale.

The first is creating dedicated Business Development & Growth teams whose purpose is to bypass the lead score and focus on new market development areas, booking meetings directly for AEs. These BDR & Growth teams build up historical data over 3-6 months that can then be used to retrain your lead score to account for the new market.

The second is creating hard exclusions that override your lead score. If your lead score is operationalized across the entire buyer journey, this is a quick way to experiment with new markets in an automated way, but it should be rare. Hard exclusions are like a blindfold for predictive lead scores – you’re removing one of many signals from the equation, increasing the likelihood of false positives. As you make strategic changes to your business, though, they may be necessary for overcoming predictive bias.

Actionable Definitions

It is vital to have a common understanding across marketing & sales around what these tools are, how they are made, and how they should be used. While your Ideal Customer Profile paints a picture of who your vision will serve in the next year, your lead score needs to be the best way for sales to prioritize outreach.

I’ve compiled a quick chart based on what we’ve seen from customers to illustrate the differences between ICP & Lead Score.  

This is something you can use to start a conversation at your next marketing & sales meeting about how Outbound Sales Strategy should be informed by your ICP or how you can increase forecast revenue for the next quarter based on the number of highly qualified leads you’re bringing in.

Building a Shadow Funnel

Marketing is becoming an engineer’s game. Marketing tools come with Zapier integrations, webhooks and APIs. Growth engineers finely tune their funnel, each new experiment – an ebook, a webinar, ad copy or a free tool – plugging into or improving upon the funnel.

Growth engineers fill their top of their funnel by targeting prospects who look like they are a good fit for their product, but haven’t engaged yet. Guillaume Cabane, VP Growth at Drift, has been sharing his experiments leveraging intent data for years. Intent data allow Guillaume to discern the intentions of potential buyers by providing key data points into what they are doing or thinking about doing.

A quick review of the three main categories of Intent Data

  • Behavioral Intent: This includes 1st party review sites like G2Crowd, Capterra & GetApp, as well as Bombora, which aggregates data from industry publications & analysts. They provide Drift with data about which companies are researching their industry, their competitors, or Drift directly. (e.g: “Liam from MadKudu viewed Drift’s G2Crowd Page”)
  • Technographics: Datanyze, HGData & DemandMatrix provide data about companies that are installing & uninstalling technologies, tools or vendors (e.g: “MadKudu uninstalled Drift 30 days ago”)
  • Firmographics: Clearbit, Zoominfo & DiscoverOrg offer data enrichment tools starting from a website domain or email, providing everything from headquarter location to employee count.

In a standard buyer journey, the right message and medium depends on where a prospect is in the funnel:

  • Awareness: do they know about the problem you solve?
  • Consideration: are they evaluating how to solve a problem?
  • Decision: are they evaluating whether to use you to solve their problem?

Drift began looking at whether we could help them determine the next best action for every prospect and account in their total addressable market (TAM). TAM can be calculated as the sum of all qualified prospects who have engaged with you (MQLs) + all qualified prospects who have not engaged with you.

TAM = MQLs + SMQLs

I’ll call the latter Shadow MQLs (SMQLs), more precisely defined as any prospect that is showing engagement in your industry or in one of your competitors, but not you.

Drift already leveraged MadKudu to determine when & how to engage with MQLs in their funnel, but they needed to automate the next best action for SMQLs. Should a sales person call them? Or should Drift send them a personalized gift through Sendoso?

Our strategy for determining the next best action involved mapping intent data to the standard buyer journey stages. By doing this, we could build what I call a Shadow Funnel.

For this experiment, we focused on four intent data providers:

  1. G2Crowd: a review site that helps buyers to find the perfect solution for their needs. They send Drift data about who is looking at their category (live chat) or Drift’s page.
  2. SEMRush: a tool that provides information about the paid marketing budget of accounts.
  3. Datanyze: this gives us information about what technologies are being used on websites.
  4. (Clearbit) Reveal: tells us the accounts that are visiting our website.

In order to build our shadow funnel, we need to define Shadow stages of the buyer journey:

  • Awareness: understands the industry you operate in.
  • Consideration: looking at specific vendors (not you).
  • Decision: evaluating specific vendors (not you).

MadKudu’s role in this funnel is to determine whether the SMQL is showing High, Medium, or Low predicted conversion. Here is a table illustrating the data points we mapped to each stage & fit level:

By matching Datanyze & G2Crowd data, for example, Drift can identify accounts that have uninstalled one of Drift’s competitors in the past 30 days and have begun researching the competition. Without ever visiting a Drift property (which would, in turn, enter them into Drift’s real funnel), MadKudu predicts a high probability that this account is in the process of considering a new solution in their space.
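A rough sketch of that mapping logic might look like the following. The signal fields, rules and the 30-day threshold are illustrative assumptions based on the example above, not the actual MadKudu model:

```typescript
// Illustrative mapping of intent signals to a Shadow Funnel stage.
type ShadowStage = "ShadowAwareness" | "ShadowConsideration" | "ShadowDecision" | "None";

interface IntentSignals {
  uninstalledCompetitorDaysAgo?: number; // from Datanyze-style technographics
  viewedCompetitorReviewPage: boolean;   // from G2Crowd-style review intent
  viewedCategoryPage: boolean;           // researching the category, not a specific vendor
  visitedOurSite: boolean;               // e.g. Clearbit Reveal — they're in the real funnel
}

function shadowStage(signals: IntentSignals): ShadowStage {
  if (signals.visitedOurSite) return "None"; // real funnel, not the shadow one

  const recentlyDroppedCompetitor =
    signals.uninstalledCompetitorDaysAgo !== undefined &&
    signals.uninstalledCompetitorDaysAgo <= 30;

  if (recentlyDroppedCompetitor && signals.viewedCompetitorReviewPage) return "ShadowDecision";
  if (signals.viewedCompetitorReviewPage) return "ShadowConsideration";
  if (signals.viewedCategoryPage) return "ShadowAwareness";
  return "None";
}
```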

With a traditional funnel, the goal is to fill it and optimize for conversion down-funnel. Awareness campaigns drive traffic, acquisition campaigns drive email capture, and conversion campaigns increase sales velocity & conversion.

The goal of the Shadow Funnel is the opposite. Drift wants the funnel to be empty and to have everyone who is in it churn out.

Rephrasing our previous TAM equation, we can state the following:

TAM = Funnel + Shadow Funnel

Anyone who is in your TAM that isn’t in your funnel is in your Shadow Funnel, and anyone who is in your TAM that isn’t in your Shadow Funnel is therefore in your Funnel.

The goal then becomes to move horizontally:

  • We want Shadow prospects to move from Shadow Aware (i.e: aware of the industry) to Aware (of you).
  • We want prospects at the Shadow Decision stage (i.e: deciding which tool to use, that isn’t yours) to move to the Decision phase (i.e: deciding whether or not to use you).
  • And so on.

Once you know where your target audience is in the buyer process, you can deliver targeted messaging to pull them from the Shadow Funnel into your funnel.

Next Steps: evaluating intent as predictive behavior.

For now, the Shadow Funnel is a proof of concept. Through this method, Drift identified 1,000+ new qualified accounts to engage with. Once we have some historical data to play with, our next step will be to build a model to determine which intent data sources are best at predicting Shadow Funnel conversion. We’ll also want to look at which engagement methods show the most promise.

Can the same engagement tactics that work on the traditional funnel work on the Shadow Funnel? Does the thought leadership retargeting ad on LinkedIn have the same impact if an account has never engaged with you before? Does looking at a category on G2Crowd reliably predict whether an account is interested in considering your product?

We are excited to continue to explore this with Drift and other SaaS companies leveraging intent data to engage qualified prospects who need their product before prospects engage with them. This is a natural evolution of the B2C strategies that eCommerce & travel companies have been employing in previous years, but tailored towards helping companies looking for answers get those answers faster.

We’ll be talking more about this strategy with Drift & Segment on our upcoming webinar here.

Why SDRs are at odds with Lead Scores

I don’t think I’m giving away any trade secrets by revealing that SDRs aren’t always the biggest fans of lead scores. Whether implementing a lead score built internally or a solution like MadKudu, SDRs are in the precarious position of being the primary users and having very little influence over the score itself.

SDRs carry a lot of intuition about what makes a lead good or bad. They aren’t surprised when a lead they perceive as good/bad is rated as such. And yet, they are viscerally frustrated when a lead they perceive as ‘bad’ is rated otherwise, and vice versa. Even if the lead score is scoring leads perfectly, an SDR’s core metric – the number of demos booked – is often undermined by the lead score.

Lead scores are meant to filter out bad leads while surfacing leads with the highest probability of converting to customers. A poor-performing lead score might surface leads that are likely to get on a phone call, but not likely to convert. These are called NiNas (No Intent, No Authority), and they are like grease in your funnel – they look like they should go down smooth, and then they dry up halfway down the funnel, slowing down everything else that should pass through easily.

NiNas are great for an SDR’s quota, and while we know that NiNas aren’t good for the overall business, this means that a good lead score removes one of the easiest ways for an SDR to make quota.

Lead Scores should serve SDRs

While not exactly a black box, Lead Scores have historically operated as such for SDRs. Their purpose is to help SDRs prioritize the highest-value leads, which should be great for helping them hit their quota; however, without knowing why a lead is good, lead scores provide little more than expectations for how the engagement should go.

At the same time, Lead Scores are calculated by measuring a lot of valuable information, most of which is not visible to the SDR. Beyond job title and employee count, lead scores evaluate the predicted revenue of each company, the size of specific teams, the tech stack & tools that a company uses, whether their solution is B2B or B2C, whether it has a free trial or not, whether they’ve raised venture capital, and much more. There can be thousands of signals that are weighed initially in order to figure out which ones are the best determiners of success, against which every lead will be measured.

Sample MadKudu Signals sitting inside Salesforce

In the above example, we can see how valuable it is to know that the lead is performing 150K daily API calls, or that their company has multiple active users on the account, or that they are using Salesforce: these are indicators of the buyer persona, the use case, and therefore of the right message for the SDR to send.

For SDRs, these signals are context. Context for why a lead is a good lead – and that’s exactly how a Lead Score can serve an SDR. Constructing the right message, understanding where your lead is coming from, identifying the tipping point that made them sales-ready: SDRs and Lead Scores are trying to do the exact same thing.

With one customer, MadKudu was able to demonstrate a disproportionate ratio between Opportunities created and Opportunities won – another way to look at that is prospects that made it past an SDR vs. prospects that made it past an AE. What you can see above is that having ‘Manager’ or ‘Operations/HR’ in a prospect’s title negatively impacted their odds of getting through an SDR (or negatively impacted an SDR’s chances of getting them to an AE), while it greatly increased their chances of becoming a customer if they made it to an AE.

Knowing which kinds of titles are good for AEs can help SDRs understand what to spend more time on, but it can also help SDR managers better train their SDRs on how to win with those personas.

Speaking with Francis on our weekly podcast, it became clear to me how important it is for SDRs to buy into a Lead Score. If you’re in charge of implementing a lead score, you need to bring SDRs into the conversation early to understand how the lead score can serve them. Making a lead score actionable for SDRs means that your front line for feedback on how well your score is performing will be more incentivized to work with the score instead of against it.

The biggest source of friction in the customer journey is you

Ten years ago Amazon introduced same-day delivery, probably the single most important feature in cementing their dominance of the eCommerce industry. They did this after 10 years of innovating on the online shopper experience – recommended purchases, one-click payments, experiments on how website latency affected conversion rates – and they understood that the biggest source of friction in their buyer experience was waiting for your package to arrive.

We all have an idea in our head about what makes a great customer journey, a great buyer experience. When Francis asked me about this out of the blue, the first thing that came to mind was my experience buying an engagement ring last year, but I could just as easily point to the experience of creating a new Slack team. They are magical experiences. You never see what’s going on behind the curtain, and you never have any downtime to think about it. Is my package already in Paris? How did Amazon know what I was going to order? Doesn’t matter. It’s already arrived before I can begin to comprehend how they possibly do that at scale.

For SaaS companies today, increasing revenue is often about removing friction. The product team designs and improves features so that customers don’t have time to wonder whether the competition is building a better product. Customer Success is looking at customer health metrics to identify customers before they even think about churning and improve their results.

Marketing & Sales have a plethora of data & engagement tools so that they know everything about who they’re engaging with, from Clearbit-enhanced Drift Bots to segmented Outreach campaigns encouraging prospects to sign up for webinars or jump on a call.

You are the friction.

"You start building this vision of what you want the customer journey to be, but you don't realize how far removed you are from your customer."

So why is it that 90% of SaaS companies take more than five minutes to follow up on a request to schedule a demo? Francis suggests going through your own customer journey – ideally by signing up with a friend’s email account, especially if your friend is a great fit for your product – to get the full experience. If it’s not the ~48 hours of follow-up time that’ll make you feel the friction, it’s the ~5 days between the demo request and the phone call that’ll make you rethink your process.

What makes it take so long?

  • Lead data enhancement
  • Territories/routing rules
  • SDR first-response latency
  • Email back-and-forth to validate interest and find a time to talk.

It’s easy to understand each one of those steps – after all, everything above (except maybe the emails) feels very logical – the only thing that’s missing from the equation is the customer experience. SaaS companies are eager to over-optimize for the sake of being fair, applying rigorous rules to lead assignment, and this often flies in the face of the customer journey.

One of MadKudu’s most popular features, the Fastlane – an enhancement to signup forms that allows highly qualified leads to skip the form and go straight to a sales rep’s calendar – is often difficult to implement initially because lead routing takes minutes. The customer eats the friction because of operational constraints.

Remove friction. Prioritize customers.

It’s easy to remove friction from the customer journey if you prioritize it. Calendly, for example, offers a great Team Scheduling feature that allows prospects to see an aggregate calendar for every potential representative and then choose a time that works for them, instead of displaying a single representative’s calendar, with fewer available time slots, after they’ve been round-robined. This puts the customer in the priority seat and accepts that reps who have less immediate availability in their calendar might get routed fewer leads. In fact, that’s not a bad forcing function for making sure SDRs are prioritizing their time correctly.


How we use Zapier to score Mailchimp subscribers

There’s no better way to get your story out there than to create engaging content with which your target audience identifies. At MadKudu, we love sharing data-driven insights and learnings from our experience working with Marketing Operations professionals, which has allowed us to take the value we strive to bring our customers every day and make it available to the marketing ops community as a whole.

As interest in our content has grown, it was only natural that we leverage Zapier in order to quickly understand who was signing up and whether we should take the relationship to the next level.

Zapier is a great way for SaaS companies like us to quickly build automated workflows around the tools we already use to make sure our customers have a frictionless, relevant journey. We don’t want to push every Mailchimp subscriber to Salesforce, because not only would that create a heap of contacts that aren’t sales-ready, but we may end up inadvertently reaching out to contacts who don’t need MadKudu yet, giving them a negative first impression of us.

Today we can see which newsletter signups sales should be paying attention to. Here’s how:

Step 1: Scoring new newsletter subscribers

The first step is to make sure you grab all new subscribers. Zapier makes that super easy with their Mailchimp integration.

Next we want to send those new subscribers to MadKudu to be analyzed. While MadKudu customers have a dedicated MadKudu integration, Zapier users who aren’t MadKudu customers can also leverage Zapier’s native Lead Score app, which is (you guessed it) powered by MadKudu.

Step 2: Filter by Lead Score

We’ve got our MadKudu score already configured so after I feed my new subscriber to MadKudu, I’m going to run a quick filter to make sure we only do something if the Lead Score is “good” or “very good.”

If you’re worried that the bar will filter out potentially interesting leads, consider this a confidence test of your lead score.

Zapier Filtering by Lead Score Quality

Step 3: Take Action, Communicate!

For Mailchimp signups that pass our Lead Score filter, we next leverage the Salesforce integration in Zapier to either find the existing contact inside Salesforce (they may already be there) or create a new lead. Salesforce has made this very easy to do with the “Find or Create Lead” action in Zapier.

Once we’ve synced our Mailchimp lead to Salesforce, we use the Slack integration on Zapier to broadcast everything we’ve created so far to a dedicated #notif-madkudu channel, which collects all the quality leads coming from all of our lead generation channels.

Directly inside Slack, our team can get actionable insights:

  • The MadKudu score, represented as 3 Stars (normal stars for Good/ twinkling for Very Good)
  • The signals that MadKudu identified in this lead, both positive and negative
  • A link to the lead in Salesforce, for anyone who wants to take action/review
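Outside of Zapier, the same three steps could be expressed in a few lines of code. In the sketch below, `scoreLead` and `findOrCreateSalesforceLead` are hypothetical placeholders for the MadKudu and Salesforce steps, and the Slack step posts to a standard incoming-webhook URL you would supply:

```typescript
// Rough sketch of the Mailchimp → MadKudu → Salesforce → Slack workflow outside
// of Zapier. `scoreLead` and `findOrCreateSalesforceLead` are placeholders; the
// Slack webhook URL below is a dummy to replace with your own.
interface ScoredLead {
  email: string;
  customerFit: "very good" | "good" | "medium" | "low";
  signals: string[];
  salesforceUrl?: string;
}

declare function scoreLead(email: string): Promise<ScoredLead>;                  // MadKudu step (placeholder)
declare function findOrCreateSalesforceLead(lead: ScoredLead): Promise<string>;  // Salesforce step (placeholder)

const SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"; // #notif-madkudu webhook

async function handleNewSubscriber(email: string): Promise<void> {
  // Step 1: score the new subscriber
  const lead = await scoreLead(email);

  // Step 2: only act on "good" or "very good" leads
  if (lead.customerFit !== "good" && lead.customerFit !== "very good") return;

  // Step 3a: find or create the lead in Salesforce
  lead.salesforceUrl = await findOrCreateSalesforceLead(lead);

  // Step 3b: notify the team in #notif-madkudu
  const stars = lead.customerFit === "very good" ? "✨ ✨ ✨" : "★ ★ ★";
  await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `${stars} New qualified subscriber: ${email}\nSignals: ${lead.signals.join(", ")}\n${lead.salesforceUrl}`,
    }),
  });
}
```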

Actionable Lead Scoring applied to your Newsletter

Our goal here isn’t to reach out to newsletter subscribers – we want to build a long-term relationship with them, and we’re happy to keep delivering them quality content until they’re ready to talk about actionable lead scoring. What we are able to do is see, qualitatively & quantitatively, how many of our newsletter subscribers are a good fit for MadKudu today.

This helps marketing & sales stay aligned on the same goal. Marketing is measuring newsletter growth with the same metric it’s using to measure SQL generation.

Segmenting Funnel Analysis by Customer Fit

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations – like applying lead scoring to funnel analysis. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

A lead score is the foundation for your marketing & sales alignment. It creates accountability for both teams and is the foundation of a strong Sales SLA. A foundation is only as useful as what you build on top of it, and that’s why we talk about Actionable Lead Scoring – leveraging your lead score to create a frictionless journey. Today we’re going to focus on how you can leverage your lead score in funnel analysis to see where your best leads are falling off.

Funnel Analysis & Actionable Intelligence

Understanding the customer journey’s inflection points and conversion rates is essential to scaling & maintaining success as a software company; however, the analysis you’re doing is just as important as the data you’re using to generate that analysis.

The goal of funnel analysis is to look at ways to remove friction from the customer journey, to improve activation & conversion, and to make sure that the users who should engage most with your product do. Accomplishing that goal without segmenting by lead score is like turning every lead into an opportunity in Salesforce and then trying to improve your win rate. You need to start with the right metric by answering the right question: what are my best leads doing and how can I make their journey better?

If you're not applying lead score to funnel analysis, you're making decisions based on flawed data.

Applying Lead Score to Funnel Analysis

Let’s imagine you want to look at the first 15 days of user activity in your self-service product, which corresponds to your 14-day free trial and immediate conversion. Of course, you already know that 50% of conversion on freemium occurs after the trial expires, but you’re looking to identify engagement drop-off before the trial even expires. After all, customers can’t convert if they don’t stay active.

A simple cohort analysis of all users who signed up over a two-week period would show that over 60% are dropping off in the first 24 hours, a smaller chunk 5 days out, and another group at the end of trial. You might conclude that you need to rework your onboarding drip campaign’s first emails in order to combat that big next-day dropoff. That would make sense, except are the people who are dropping off the prospects that matter most? Probably not.

Very good leads have a different funnel than very bad leads

One MadKudu customer came to this exact same conclusion, and despite various drip campaign tests, they didn’t see that 60% drop-off move. Then we segmented their funnel analysis, looking at how very good, good, bad & very bad leads behaved, and we found that most of that 60% drop-off was very bad leads: they had made their sign-up process so frictionless that they were getting spam sign-ups who were never going to actually use their product. As it turned out, the small dip after 5 days corresponded to the biggest area of drop-off for very good leads, who were dropping off at the end of their intense drip campaign, which only lasted 5 days.
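For reference, here is a minimal sketch of what segmenting that retention analysis by customer fit can look like in code (in practice a product analytics tool does this for you). The `fit` and `activeDays` fields are illustrative assumptions about how the data might be shaped:

```typescript
// Minimal sketch of a day-by-day retention funnel segmented by customer fit.
type Fit = "very good" | "good" | "bad" | "very bad";

interface TrialUser {
  fit: Fit;
  activeDays: number[]; // e.g. [0, 1, 4] = active on signup day, day 1 and day 4
}

// % of users in each fit segment still active on each of the first `days` days
function retentionByFit(users: TrialUser[], days = 15): Record<Fit, number[]> {
  const segments: Fit[] = ["very good", "good", "bad", "very bad"];
  const result = {} as Record<Fit, number[]>;
  for (const fit of segments) {
    const cohort = users.filter((u) => u.fit === fit);
    result[fit] = Array.from({ length: days }, (_, day) =>
      cohort.length === 0 ? 0 : cohort.filter((u) => u.activeDays.includes(day)).length / cohort.length
    );
  }
  return result; // compare the "very good" curve against the blended one to find the real drop-off
}
```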

In this case, not segmenting by customer fit completely masked where their focus should be, and they spent time trying to get spam signups to stay engaged instead of looking at how their highest-value prospects were engaging with their product.

Our recommended Setup

If you’re looking to start segmenting funnel analysis by Customer Fit, our recommended MarTech stack is to feed MadKudu into the product analytics solution Amplitude using Segment’s customer data platform.

Account-Based Engagement and the Fallacy of Job Titles

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations, such as Account-Based Engagement. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

As we saw recently with the Sales SLA, the path to alignment often starts & ends with clear definitions of metrics. The leads marketing hands to sales need to have the same definition & measurement for success, which is where actionable lead scoring plays a key role in establishing lasting alignment.

If we step back from Sales & Marketing and look at aligning each department to business objectives, we can see that metric disjunction can result in each individual team being successful while ultimately failing to create a relevant customer journey at scale.

The fallacy of job titles

One area where we often observe this is when we run funnel analysis by customer fit and look at job titles as predictors of activation and conversion. On self-serve tools such as API-based products, we often see that someone with a developer title is more likely to activate but very unlikely to convert (that is, to hand over the credit card), whereas someone with a CEO/owner title is less likely to activate but more likely to hand over a credit card.

One analysis we recently ran for a customer demonstrated this perfectly:

How job title affects conversion | Account-Based Engagement

  • Developers convert 60% less than the average user
  • Founders, CEOs & marketing convert 70-80% more than the average user.

When we look at conversion & activation side-by-side for this same customer, the numbers speak for themselves:

Conversion vs. Activation | Account-Based Engagement

  • Founders/CEOs don’t use the software that much but end up converting highly
  • Product & Project managers have a higher activation but lower conversion rate

Product teams are historically motivated to increase activation by building an increasingly engaging product; however, a developer is unlikely to respond to marketing’s nurturing emails or jump on a first sales call no matter how active they are in the product.

Likewise, with more sales-driven products like enterprise software, SDRs are often singularly focused on the number of meetings they can generate for their AEs; however, low-level team members are significantly more likely to jump on a phone call and significantly less likely to convert than their director counterparts.

In both of these instances, we see that product & sales development are able to optimize for their metric without accomplishing the core business objective of creating a great customer journey.

How Account-Based Engagement changes the rules

What this comes back to is account-based engagement, a nascent term in the marketing space stemming from the principles of account-based marketing but extending them across the entire customer journey and to all customer-facing teams. Where account-based marketing encourages running campaigns to generate interest not at the individual lead level but at the account level – especially important when you have multiple stakeholders in the decision-making process – account-based engagement extends that to all teams, meaning that:

  • Product teams should seek not only to make as many active users as possible, but to create active accounts: building features that encourage getting other stakeholders involved or making it easy for your hero to evangelize your product value to other stakeholders.
  • Marketing teams should seek to generate not marketing qualified leads but marketing qualified accounts, including nurturing existing accounts in order to get other stakeholders involved so as to set sales up for success.
  • SDRs should seek to generate meetings at the account level, not at the lead level, and shouldn’t be working on accounts where the necessary stakeholders are not already involved.

Account-Based Engagement | Identifying hidden opportunities

We’ve recently been working with two of our bigger customers who have a prosumer user base to identify marketing-qualified accounts that aren’t getting attention. We do this by looking not only at customer fit at the account level – does the account look like the type of account that typically converts when sales engages – but also at behavioral fit: are they engaging with the product the way paying customers typically do?

Sales reps who are qualifying leads as soon as the account is created aren’t going to be able to sift through the hundreds of warm accounts to identify which accounts have engaged properly (and been properly engaged) to be sales-ready; however, this is core to Account-Based Engagement. Just as our Sales SLA gives a common metric for marketing & sales to work towards, so Product, Customer Success, Sales & Marketing all need to have a common qualification criteria for an account in order to be aligned on how best to achieve business goals.

Remember: In B2B, you’re not selling to users, you’re selling to Accounts

The goal is not to reduce all teams to a single metric like revenue-generated, but rather to help reduce the natural tendency to game a metric by linking a common thread between the metrics that we use to measure success. That thread is Accounts.

It is all too easy to lose track of the fact that selling B2B software means that a company is going to buy your software, not a person. There are users, decision-makers, stakeholders and other advisors in the buying process, but at the end of the day a company is going to make a decision about whether to pay another company for their solutions. In this respect, every team should be focused on how to acquire, activate, convert & retain accounts, because at the end of the day it is not a user that will churn but an account.


Sales SLA: how accountability fosters sales & marketing alignment

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

I first heard about a Sales SLA in my first week after joining MadKudu. I was familiar with a Service Level Agreement (SLA) – a commitment from the engineering team around reliability with varying repercussions if we violated the SLA – but I had never interacted with a Sales SLA, despite being in marketing.

When one of our customers is having trouble hitting revenue goals, the Sales SLA is almost always where we start, so let’s start there.

Sales SLA: A contract between Sales & Marketing

A Sales SLA is an agreement between marketing & sales whereby:

Marketing commits to generate N Very Qualified Leads per quarter, and

Sales commits to reach out to 99% of those leads within H hours, and to contact them at least T times in the first D days

Most marketing teams have a quarterly lead generation goal. A Sales SLA doesn’t measure MQLs or SQLs – it measures Very Qualified Leads: MQLs with the potential to become customers. Marketing agrees to create enough expected revenue, and Sales agrees to convert it into the revenue target.

“The only people who create value out of nothing is Marketing. The role of sales is to keep the value of those leads constant until they close.”

Marketing not only needs to generate increasing amounts of value but to be able to measure its potential to become revenue.

Sales needs to reach out quickly and to continue to connect with that lead enough to feel like everything possible was tried. A typical adage is “8 times in 15 days”, but again, this varies for each customer journey.
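To make the SLA measurable, the check itself can stay very simple. Here is a sketch of flagging SLA breaches for a single lead, assuming touches are logged as activities against the lead in your CRM; the field names and the H/T/D values are illustrative:

```typescript
// Illustrative Sales SLA check: "first touch within H hours, at least T touches
// in the first D days". Real data would come from activities logged in your CRM.
interface SlaLead {
  createdAt: Date;
  touches: Date[]; // timestamps of logged calls/emails for this lead
}

interface SalesSla {
  firstTouchHours: number; // H
  minTouches: number;      // T, e.g. 8
  withinDays: number;      // D, e.g. 15
}

function slaBreaches(lead: SlaLead, sla: SalesSla, now = new Date()): string[] {
  const HOUR = 3_600_000;
  const breaches: string[] = [];
  const firstTouch = lead.touches.length
    ? Math.min(...lead.touches.map((t) => t.getTime()))
    : undefined;

  // First-touch latency check
  const firstTouchDeadline = lead.createdAt.getTime() + sla.firstTouchHours * HOUR;
  if ((firstTouch ?? now.getTime()) > firstTouchDeadline) breaches.push("first touch too late");

  // Touch-count check, evaluated once the D-day window has closed
  const windowEnd = lead.createdAt.getTime() + sla.withinDays * 24 * HOUR;
  const touchesInWindow = lead.touches.filter((t) => t.getTime() <= windowEnd).length;
  if (now.getTime() > windowEnd && touchesInWindow < sla.minTouches) breaches.push("not enough touches");

  return breaches; // e.g. route back to round robin or escalate when non-empty
}
```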

Each variable of a Sales SLA comes with its own questions: what makes a lead very qualified? How many touch points and how quickly should a lead be reached out to? Should it vary based on lead source?

“Do we need a Lead Score?”

The Sales SLA requires scoring each lead as they sign up. Many early-stage SaaS companies wonder how they are supposed to have a Sales SLA from day one without having a lead score.

Let’s put it out there: everyone has a lead score.

Filtering spam at signup is scoring. Escalating Fortune 100 companies at signup is scoring. While simple, it allows you to begin defining lead quality by answering “who do you want to ignore and who do you want to talk to?”

Since everyone has a lead score, everyone therefore should have a Sales SLA. The earliest iteration can be simple: “If someone signs up through a demo form, you need to follow up faster than if they sign up for a trial.” Putting something simple in place is better than nothing at all.

Implementing a Sales SLA

The tactical owner of a Sales SLA will almost always be Sales Operations, because they are ultimately the ones managing SDR workflows today. Marketing tends to ask for a Sales SLA. It ends the cycle of sales bemoaning lead quality and marketing bemoaning sales conversion rates. The Sales SLA will move that existential, emotional debate to a practical, data-driven report.

In order to maintain a Sales SLA properly, you’ll need to be able to track all outbound communication inside your CRM. If you’re using third-party emailing tools, every email you send out needs to be tied to a lead as an activity. Otherwise you’ll get false positives, or you’ll end up adjusting your Sales SLA to whatever activity metrics you can currently see.

Contract & Education

A Sales SLA doesn’t have to be written; however, in practice, a written agreement can be useful for onboarding new SDRs. Every new SDR should know what their team expects of them from day one. And every SDR should know what happens if they don’t respect it.

When the Sales SLA is broken, some organizations choose to put leads back into round robin. Others send it to a marketing nurturing funnel, or escalate it to a manager. How you implement the Sales SLA is up to you, as long as you’re tracking the metrics necessary to uphold it.

Once your Sales SLA is in place, much like an infrastructure monitoring tool, it should help you detect outlier scenarios more quickly. SDRs may be on vacation or no longer with the company and still get leads routed to them. Certain campaign leads may get bulk routed to an old admin account. Or new team members may get routed leads before they’ve learned about the Sales SLA. None of these problems are anyone’s “fault,” but they need to be noticed & dealt with quickly.

Procrastination in Hyper-Growth

Sales SLAs can look daunting on paper, especially if you’re still in the early days of building your sales organization. At its core, a Sales SLA defines the handoff between marketing & sales. At MadKudu, for example, the handoff happens at signup today. Sales handles everything after lead generation, because we don’t yet have a need to automate that part of the funnel. We have a number of indicators (company size, technologies used, etc.) that we know correspond closely to someone needing MadKudu. This allows us to be pretty explicit about what makes a lead Very Qualified.

“People don’t put SLAs in place because they want to avoid having tough conversations”

Creating a Sales SLA is going to shine a spotlight on all the cracks in your sales funnel, especially when you’ve been dealing with hyper-growth recruiting. If leads aren’t getting followed up on, you’re going to have to look at the cause. Are you understaffed? Are you not scoring/routing/prioritizing properly? Or are your sales reps not reacting quickly enough?

When a Sales SLA is breached, it’s a symptom of a bigger problem, and usually no single person is at fault. Without a Sales SLA, it’s easy to overlook one of your sales reps not following up, or low-quality leads getting faster follow-up than high-quality leads. 

Start the discussion around Sales SLAs early and you’ll address problems that won’t go away unless you shed light on them.