How we use data and machine learning to solve the lead quality problem

This post originally appeared on Clearbit’s Blog.

When Simon Whittick joined Geckoboard as its first VP of Marketing, he took all the standard steps to attract more visitors to their site, convert them, and grow the SaaS company’s revenue. He and his team wrote content for their popular blog, ran paid advertising campaigns, and set up email nurture campaigns. At the end of his first year, he was as successful as almost any other marketing executive in the industry. The site was attracting hundreds of thousands of visitors every month, and the business was booking millions in annual recurring revenue. But unknowingly, his success was driving one of his coworkers crazy.

While 10,000 leads a month earned Whittick applause at the company’s weekly all-hands meeting, it was keeping Geckoboard’s only sales development rep (SDR), Alex Bates, at the office on nights and weekends. Many of the inbound leads were self-serve customers who required no conversation with sales, or tire kickers who were not ready to buy. This left Alex manually qualifying leads and wasting tons of his time.

As a result, Geckoboard’s sales efficiency—one of the most critical metrics for any company—was slumping. In other words, Whittick wasn’t only driving a junior sales rep crazy; he was leaving money on the table.

Over the course of the next year, Whittick built a data-backed machine learning process to solve his company’s lead-qualification problems. In the process, he turned Bates into not only an adoring fan of his, but a one-man sales team as efficient as a typical ten-person SDR team. Without any technical background, Whittick figured out a way to change the shape of his company using data and a bit of machine learning.

One day toward the end of last year, Bates and Whittick sat down to discuss how they could solve their lead-quality problem. They had close to 10,000 leads coming in each month, but they needed to figure out which of those leads to send to sales. Their first instinct was to refine their ideal customer profile. They’d both read all the sales and marketing blogs preaching its importance. They started with an Ideal Customer Profile based on some simple audience rules.

On paper, Geckoboard’s ideal customer was a software company with more than 100 employees; they typically sold to a director or VP. But the truth was that a lot of companies outside that explicit profile would be great customers. For example, their initial model excluded a company with 95 employees even if it looked almost identical to one of their best customers. When they looked at their past data, they learned that leads matching what they believed to be their ideal customer profile converted at twice the rate of other leads, but accounted for only 0.7% of conversions. They needed a more nuanced and flexible inbound model.

[Image: basic ideal-customer lead qualification results]

Prior to joining the Geckoboard team, Whittick had worked for Marin Software. While he was there, he began to notice a shift in the way companies approached marketing. The most forward-thinking companies had begun to hire technical employees with degrees in mathematics instead of business. He heard stories of companies that were replacing entire teams and doubling revenue by using publicly (or privately) available information and crunching it to their advantage. As time went on, many of those employees left their jobs to provide the same level of automation to smaller companies without the budget to hire data scientists.

Between his time at Marin Software and Geckoboard, dozens of startups popped up to help larger companies embrace the data revolution. Companies like Clearbit mined the web for demographic and firmographic data that could be used to better filter leads. My own company, MadKudu, makes it possible to pull insights from that data without having a PhD in computer science. By 2016, the entire marketing technology landscape had shifted. With an executive team that embraced innovation and big bets, Whittick decided to make it Geckoboard’s competitive advantage.

The first step Whittick took was to develop his own flexible point-based scoring system. Previously a lead was given either a 1 or a 0: a lead was either a software company with 100 or more employees or it wasn’t. It was binary. The obvious problem with this model was that a company with 95 employees would be excluded. In addition, a company with 100 employees was given the same score as a company with 1,000 employees, even though the latter was twice as valuable.

In his new model, Whittick gave leads a score based on multiple criteria. For example, he’d give a lead 50 points for having 500 employees or more, and negative 2 points if it had fewer than 10 employees. A director-level job title would receive 10 points, whereas a manager would only receive 3. This created a much wider score range, which meant that Bates could prioritize leads: once he had worked through the top-scoring leads, he could move on to B-tier leads. The model was weighted toward the large accounts Geckoboard knew could significantly impact revenue. For example, a US-based real estate company with 500 employees and a director buyer would be routed to the top of Alex’s lead list, even though it didn’t fit the software industry criteria.
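To make the mechanics concrete, here’s a minimal sketch of that kind of point-based model in Python. The point values mirror the examples above; the field names and example leads are illustrative assumptions, not Geckoboard’s actual rules.

```python
# Minimal sketch of a point-based lead scoring model. Point values follow the
# examples in the text; everything else is an illustrative assumption.

def score_lead(lead: dict) -> int:
    score = 0

    # Company size: large accounts are weighted heavily, tiny ones penalized.
    employees = lead.get("employees", 0)
    if employees >= 500:
        score += 50
    elif employees < 10:
        score -= 2

    # Seniority: a director-level title outscores a manager.
    title = lead.get("title", "").lower()
    if "director" in title:
        score += 10
    elif "manager" in title:
        score += 3

    return score

# A large real estate account with a director buyer rises to the top,
# even though it isn't a software company.
print(score_lead({"employees": 500, "title": "Director of Operations"}))  # 60
print(score_lead({"employees": 8, "title": "Manager"}))                   # 1
```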

[Image: advanced point-based lead scoring]

This new model was similar to the way SDRs have scored leads for over a decade, only more efficient. Prior to automated lead scoring, sales reps were told by their managers to prioritize leads based on four criteria: budget, authority, need, and timing (or as it’s commonly referred to, BANT). This method is more flexible than a rigid ideal customer profile, but it is only as strong as the rep behind it. Human error, irrational judgment, and varying levels of experience lead to a process with little rhyme or reason. That’s why Whittick chose to automate the task and take humans out of the process entirely.

[Image: results of advanced point-based lead scoring]

Immediately the company began to see results from their lead-scoring investment. Within the first month, leads were converting at twice the rate. As a result, Bates was spending less time to close more deals. Sales efficiency—revenue collected divided by the time and resources to earn it—rose significantly. Still, Whittick knew he could improve the results and save Bates even more time.

One of the biggest shifts that Whittick saw in the technology industry was the speed at which data could be put to use as a result of new tools. In the old world he inhabited, a lead couldn’t be scored until it hit a company’s CRM, and enrichment software took hours to append valuable data to a lead. With the new generation of tools, a lead could be enriched the moment a visitor entered an email address. That information could be sent to the CRM and the lead scored accordingly before the visitor began typing in the next text box.

After his first lead scoring success, Whittick decided to make another bet. Bates frequently complained about leads that were obviously bad fits—the type of conversation that takes 30 seconds to know there isn’t a mutual fit. Many of the companies were too small to need sophisticated dashboards yet. Whittick enlisted one of the company’s front-end developers to help him solve the problem. They built logic into the product demo request page that would ask for a visitor’s email address and then, before sending them to the next page, score the lead. On the back end, additional information would be appended to the lead using Clearbit, and it would be run through MadKudu’s scoring algorithm. If it received a high-enough score, the next page would ask for the lead’s phone number and tell them to expect a call shortly; if the score was low, they’d be routed through a low-touch email cadence. It was radically successful.
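Here’s a hedged sketch of that page logic. The enrich() and score() functions are stand-ins for the Clearbit and MadKudu calls (their real APIs differ), and the score threshold is invented for illustration.

```python
# Sketch of the real-time routing logic; enrich() and score() are stand-ins
# for the actual Clearbit and MadKudu APIs, and the threshold is invented.

SCORE_THRESHOLD = 70

def enrich(email: str) -> dict:
    # Stand-in for an enrichment lookup keyed on the email's domain.
    return {"employees": 650, "industry": "real estate"}

def score(company: dict) -> int:
    # Stand-in for the scoring model (see the point-based sketch above).
    return 85 if company.get("employees", 0) >= 500 else 20

def handle_demo_request(email: str) -> str:
    company = enrich(email)      # enrichment completes before the next page loads
    if score(company) >= SCORE_THRESHOLD:
        return "ask_for_phone"   # high score: request a phone number, promise a call
    return "email_nurture"       # low score: route to a low-touch email cadence

print(handle_demo_request("jane@bigrealty.example.com"))  # ask_for_phone
```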

[Image: MadKudu-scored demo request form on Geckoboard’s site]

Before implementing their real-time lead scoring solution, only about 15% of Bates’ conversations were meaningful. The new website logic meant that he could cut 85% of the calls he took every day and focus on higher quality accounts. Once again, sales efficiency increased significantly.

In addition to the speed at which information could be appended, processed, and acted on, Whittick saw another change in the marketing technology world: there was suddenly more data than most companies knew what to do with. Marketers could know what CRM, email server, and chat service a company used. They could know when a company was hiring a new employee, when they were written about by a major news outlet, and how much money they’d raised. It was overwhelming. But thanks to tools like Segment, marketers could pipe all that data into a CRM or marketing automation system and act on it. Then they could combine it with information like how frequently someone visited their own site, how often they emailed sales or support, and when they went through a specific part of the onboarding process. For a data-driven marketer like Whittick, this new world was utopia.

In conversations with Bates, Whittick learned that the best leads were the ones that went through onboarding before a sales conversation. During the Geckoboard free trial, users were prompted to build their first dashboard, connect a data source, and upload their company’s logo and color palette. As is the case with many SaaS solutions, most users dropped off before completing all the steps. Those users weren’t ready for a conversation with sales. But when Bates was looking at his lead list, he had no way of knowing whether or not a free-trial user had completed onboarding. As a result, he was spending at least half of his time with people who weren’t ready to talk or buy.

Combining usage data from the website and their app, Whittick set out to refine the lead scoring model even further. Each time a free-trial user completed a step in the onboarding process, it was recorded and sent back to the CRM using Segment. The model would then give that lead a couple of additional points. If the user completed all of the steps, bonus points would be added and the lead would be raised to the top of Bates’ lead queue in Salesforce. Again, Bates began spending less time talking to customers prematurely and more time having high-quality conversations that led to revenue. Whittick had figured out how to save the sales team time and increase sales efficiency further.
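A rough sketch of that behavioral layer might look like this; the step names, point values, and completion bonus are assumptions rather than Geckoboard’s actual configuration.

```python
# Sketch of behavioral scoring on onboarding events; all names and point
# values are illustrative assumptions.

ONBOARDING_STEPS = {"built_dashboard", "connected_data_source", "uploaded_branding"}
POINTS_PER_STEP = 2
COMPLETION_BONUS = 15

def behavioral_score(completed_events: set) -> int:
    steps_done = completed_events & ONBOARDING_STEPS
    score = POINTS_PER_STEP * len(steps_done)
    if steps_done == ONBOARDING_STEPS:
        score += COMPLETION_BONUS  # fully onboarded leads jump to the top of the queue
    return score

print(behavioral_score({"built_dashboard"}))    # 2
print(behavioral_score(set(ONBOARDING_STEPS)))  # 21
```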

[Image: behavioral plus advanced point-based scoring results]

But while Whittick and Bates were celebrating their improved conversion rate success, a new problem was emerging. By the summer of 2016, they had enlisted my team at MadKudu to automate their lead scoring. Rather than manually analyzing conversion data and adjusting their lead scoring model accordingly, our machine learning tool was built to do all the work for them. There was a small problem. Today, machine learning algorithms are only as strong as the humans instructing them. In other words, they are incredibly efficient at analyzing huge sets of data and optimizing toward an end result, but a human is responsible for setting that end result. Early on, Whittick set up the model so that it would optimize for the shortest possible sales cycle and the highest account value. He didn’t, however, instruct it to account for churn, an essential metric for any SaaS company. As a result, the model was sending Bates leads that closed quickly, but dropped the service fast too. Fortunately, the solution was simple.

After learning about the problems with his model, Whittick instructed MadKudu’s algorithm to analyze customers by lifetime value (LTV) and adjust the model to optimize for that. He also instructed it to analyze the accounts that churned quickly and to negatively score leads that fit that churn profile.

Example: For Geckoboard, digital agencies were very likely to convert, and the old scoring algorithm scored them highly. However, agencies were five times more likely to churn after 3 months, when the project they were working on ended.

At this point, the leads being sent to Bates were significantly better in aggregate than the leads he had previously been receiving. However, there were still false positives that would throw him off. While the overall stats on scored leads were looking great, the mistakes the model made hurt sales and marketing trust and were hard to accept. To combat this and make the qualification model close to perfect, Whittick had Bates start flagging any highly scored leads that made it through.

Through this process, they found that many of the bad leads that made it through were students (student@devbootcamp.com), fake signups (steve@apple.com), or more traditional companies that did not have the technology profile of a company who would likely use Geckoboard (tractors@acmefarmequipment.com). Whittick was then able to add specific, derived features to their scoring system to effectively filter these leads out and yet again improve the leads making it to Bates.
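In code, derived features of that kind might look like the sketch below. The domain lists and feature names are examples, not MadKudu’s actual model inputs.

```python
# Sketch of derived features for filtering out students, fake signups, and
# off-profile companies. Domain lists and feature names are illustrative.

STUDENT_DOMAINS = {"devbootcamp.com"}
HIGH_PROFILE_DOMAINS = {"apple.com", "google.com"}  # frequent fake-signup targets

def derived_features(email: str, enriched: dict) -> dict:
    domain = email.split("@")[-1].lower()
    return {
        # Students: bootcamp and university addresses rarely have a budget.
        "is_student_email": domain in STUDENT_DOMAINS or domain.endswith(".edu"),
        # Fake signups: a famous domain with no matching enrichment record.
        "is_suspect_signup": domain in HIGH_PROFILE_DOMAINS and not enriched,
        # Off-profile: no modern tech stack detected for the company.
        "lacks_tech_profile": not enriched.get("tech_stack"),
    }

print(derived_features("student@devbootcamp.com", {}))
# {'is_student_email': True, 'is_suspect_signup': False, 'lacks_tech_profile': True}
```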

At this point, Geckoboard can predict 80% of their conversions from just 12% of their signups. By increasing sales efficiency with machine learning, Whittick found a way to enable Bates to do the work an average sales team of five could typically handle.

From self-driving trucks to food delivery robots, this is the story of twenty-first-century business. Companies like Geckoboard are employing fewer people and creating more economic value than enterprises ten times their size. Leaders like Whittick are center stage in this revolutionary tale, figuring out how to optimize sales efficiency or conversion rates or any other metric given to them, just like the artificial intelligence they now employ. But of course, this has been happening over many years, even decades. The difference—and this cannot be overstated—is that Whittick doesn’t have a PhD in applied math or computer science. The technology available to marketers today enables companies to generate twice the revenue with half the people.

The Three Stages of Lead Scoring: Lambs, Ducks & Kudus

In the past year we talked with hundreds of SaaS companies about their marketing technology infrastructure. While we’re always excited when a great company comes to us looking to leverage MadKudu in their customer journey, we’ve noticed two very common scenarios when it comes to marketing technology: (1) companies looking to put complicated systems in place too early, and (2) companies with very advanced sales organizations impeding their own growth with a basic, limiting MarTech stack.

I want to spend some time diving into how we see lead scoring evolving as companies scale. It is only natural that the infrastructure that helped you get from 0-1 be reimagined to go from 1-100, and again from 100-1000 – and so on.

Scaling up technical infrastructure is more than spinning up new machines – it’s sharding and changing the way you process data, replicating data across multiple machines to ensure worldwide performance.

Likewise, scaling up sales infrastructure is more than hiring more SDRs – it’s segmenting your sales team by account value (SMB, MM, Ent.), territory (US West, US East, Europe, etc.) and stage of the sales process (I/O SDR, AE, Implementation, Customer Success).

Our marketing infrastructure invariably evolves – the tiny tools that we scraped together to get from 0-1 won’t support a 15-person team working solely on SEO, and your favorite free MAP isn’t robust enough to handle complex marketing campaigns and attribution. We add in new tools & methods, and we factor in new calculations like revenue attribution, sales velocity & buyer personas, often requiring a transition to more robust platforms.

With that in mind, let’s talk about Lead Scores.

Lead Score (/lēd skôr/) noun.

A lead score is a quantification of the quality of a lead.

Companies at all stages use lead scoring, because a lead score’s fundamental purpose never changes: the higher the score, the better the lead.

How we calculate a lead score evolves as a company hits various stages of development.

Stage 1: “Spam or Lamb?” – Ditch the Spam, and Hunt the Lamb.

Early on, your sales team is still learning how to sell, who to sell to, and what to sell. Sales books advise hiring “hunters” early on, who will thrive on the challenge of wading through the unknowns to close a deal.

Any filtering based on who you think is a good fit may push out people you should be talking to. Marketing needs to provide hunters with tasty lambs they can go after, and you want them to waste as little time on Spam as possible (a hungry hunter is an angry hunter).

Your lead score is binary: it’s too early to tell a good lamb from a bad lamb, so marketing serves up as many lambs as possible to hunters so they can hunt efficiently. Stepping back from my belabored metaphor for a second, marketing needs to enable sales to follow up quickly with fresh, high-quality leads. You don’t want to miss deals because you were talking to bad leads first, and you want to begin building a track record of what your ideal customer profile looks like.

Lambs vs. Spam

Distinguishing between Lambs (good leads) & Spam (bad leads) will be largely based on firmographic data about the individual and the company. A Lamb is going to be either a company with a budget or a title with a budget: the bigger the company (more employees), or the bigger the title (director, VP, CXO), the more budget they are going to have to spend.

Spam, meanwhile, will be visible by its lack of information, either because there is none or because it’s not worth sharing. At the individual level, a personal or student email will indicate Spam (hunters don’t have time for non-businesses), as will a vSMB (very small business). While your product may target vSMBs, they often use personal emails for work anyway (e.g: DavesPaintingCo@hotmail.com) and when they do pay, they just want to put in a credit card (not worth sales’ time).
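A minimal sketch of such a Stage 1 binary filter, assuming illustrative thresholds:

```python
# Minimal Stage 1 filter: Lamb or Spam. Thresholds are illustrative.

FREE_EMAIL_DOMAINS = {"gmail.com", "hotmail.com", "yahoo.com"}

def lamb_or_spam(email: str, employees: int | None) -> str:
    domain = email.split("@")[-1].lower()
    if domain in FREE_EMAIL_DOMAINS:
        return "spam"   # personal email: hunters don't have time for non-businesses
    if not employees:
        return "spam"   # no firmographic data at all is itself a signal
    if employees < 5:
        return "spam"   # vSMB: happy to pay by credit card, not worth sales' time
    return "lamb"       # everything else goes to the hunters

print(lamb_or_spam("DavesPaintingCo@hotmail.com", None))  # spam
print(lamb_or_spam("jane@bigco.example.com", 120))        # lamb
```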

Depending on the size of your funnel and your product-market fit, this style of lead score should cover you until your first 5 SDRs, your first 50 employees, 100 qualified leads per day, or $1 million ARR.

Stage 2: “If it looks like a Duck” – point-based scoring.

Those lambs you hunted for 12-18 months helped inform what type of leads you’re going after, and your lead score will now need to prioritize leads which most look like your ideal customer profile (ICP): I call this “if it looks like a Duck.”

Your Duck might look something like (A)Product Managers at (B)VC-backed, (C)US-based (D)software businesses with (E)50-200 employees. Here our duck has five properties:

  • (A) Persona = Product Managers
  • (B) Companies that have raised venture capital
  • (C) Companies based in the United States
  • (D) Companies that sell software
  • (E) Companies with 50-200 employees

Your lead score is going to be a weighted function of each of these variables. Is it critical they be venture-backed, or can you sell to self-funded software businesses with 75 employees as well? Is it a deal-breaker if they’re based in Canada or the U.K.?

Your lead score will end up looking something like this:

f(Duck) = A·n₁ + B·n₂ + C·n₃ + D·n₄ + E·n₅

Here the weights n₁…n₅ are set based on the importance of each attribute, and A…E are 1 when the lead has the attribute and 0 otherwise.
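Read as code, with invented weights, the function might look like this:

```python
# f(Duck) as code: a weighted sum over the five ICP attributes.
# The weights n1..n5 are invented for illustration.

WEIGHTS = {
    "is_product_manager": 30,       # n1: persona
    "is_vc_backed": 10,             # n2
    "is_us_based": 15,              # n3
    "sells_software": 25,           # n4
    "has_50_to_200_employees": 20,  # n5
}

def duck_score(lead: dict) -> int:
    # Each attribute is treated as 0 or 1 and multiplied by its weight.
    return sum(w for attr, w in WEIGHTS.items() if lead.get(attr))

perfect_duck = dict.fromkeys(WEIGHTS, True)
print(duck_score(perfect_duck))  # 100: looks exactly like the ICP
```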

Leads that look 100% like your ICP will score the highest, while good & medium-scoring leads should get lower prioritization but still be routed to sales as long as they meet at least 1 of your critical attributes and 1-2 other attributes.

You can analyze how good your lead score was at predicting revenue on a quarterly basis by looking at false positives & false negatives.
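One way to run that quarterly check, sketched here with toy data and an invented score threshold:

```python
# Quarterly scoring audit: count false positives (high score, no conversion)
# and false negatives (low score, converted). Data and threshold are toys.

def evaluate(scored_leads, threshold=50):
    tp = sum(1 for s, converted in scored_leads if s >= threshold and converted)
    fp = sum(1 for s, converted in scored_leads if s >= threshold and not converted)
    fn = sum(1 for s, converted in scored_leads if s < threshold and converted)
    precision = tp / (tp + fp) if tp + fp else 0.0  # how trustworthy high scores are
    recall = tp / (tp + fn) if tp + fn else 0.0     # how many real buyers were surfaced
    return {"false_positives": fp, "false_negatives": fn,
            "precision": precision, "recall": recall}

last_quarter = [(80, True), (90, False), (30, True), (20, False)]
print(evaluate(last_quarter))
# {'false_positives': 1, 'false_negatives': 1, 'precision': 0.5, 'recall': 0.5}
```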

This lead score model will last you for a while with minor tweaks and adjustments; however, one of a number of things will eventually happen that will make your model no longer effective:

Complex Sales Organization

A complex sales organization comes from having a non-linear sales process, one where qualification is no longer as simple as “they look like our ICP, so they should talk to sales.” Here are a few examples (not an exhaustive list):

You may begin selling to different market segments with a tiered sales team: a point-based lead scoring system only works for one market segment, so you’ll find yourself continually adjusting attributes as you tier your sales team instead of adapting to each tier’s needs for increased sales velocity.

You may begin upselling at scale: a good upsell lead is identified not by its firmographic profile but by its behavioral profile. Point-based behavioral attributes won’t work for new leads, and the score is often the result of aggregate behavior across multiple users & accounts, which is too complex to map to a point-based lead score model (this is often called Marketing Qualified Accounts).

You may begin to find that a majority of your revenue is coming from outside your ICP, no matter how you weigh the various attributes. If you only accept leads that fit your ICP, you won’t hit your growth goals. Great leads are coming in that look nothing like what you expected, but you’re still closing deals with them. Your ICP isn’t wrong, but your lead score model needs to change. We’ve written about this in depth here.

When that happens, you’ll need to move away from manually managing a linear model to a more sophisticated one, a model that adapts to the complex habitat in which your company now operates and wins by being smarter.

Stage 3: “Be like a Kudu” – Adapt to your surroundings

As your go-to-market strategy pans out and you begin to take market share across multiple industries/geos/company sizes, the role of your lead score will stay the same: fundamentally, it should provide a quantitative evaluation of the quality of each lead.

Different types of leads are going to require different types of firmographic & behavioral data:

  • Existing customers: product usage data, account-level firmographic data
  • SMB (velocity) leads: account-level firmographic data.
  • Enterprise leads: individual-level firmographic data across multiple individuals, analyzed at the account level.

Your model should adapt to each situation, ingest multiple forms of data, and contextually understand whether a lead is a good fit for any of your products or sales teams. As your product evolves to accommodate more use cases, your lead scoring model needs to evolve regularly, ingesting the most recent data and refreshing accordingly.

Predictive Lead Scoring

Predictive lead scoring adapts to the needs of growth-stage B2B businesses because the model is designed to predict a lead’s likelihood to convert based on historical data, removing the need to manually qualify leads against your ICP.
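As a toy illustration of the idea, here’s a predictive score trained with scikit-learn’s logistic regression; a real model would use far more historical leads and features, and the ones here are invented.

```python
# Toy predictive lead score: fit on historical conversions, output a 0-100
# likelihood for new leads. Features and data are illustrative.
from sklearn.linear_model import LogisticRegression

# One row per historical lead: [employees, sells_software, raised_vc]
X = [[500, 1, 1], [10, 0, 0], [250, 1, 0], [5, 0, 1], [800, 1, 1], [15, 0, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = the lead converted

model = LogisticRegression().fit(X, y)

new_lead = [[300, 1, 0]]
likelihood = model.predict_proba(new_lead)[0][1]  # P(convert)
print(round(likelihood * 100))  # a familiar 0-100 lead score
```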

Predictive Lead Scoring models are like Kudus: they are lightning fast (did you know kudus can run 70 km/h?) and constantly adapt to their changing environment.

Kudus are active 24/7 (they never sleep), and their distinct coloration is the result of evolving to adapt to their surroundings & predators.

The advantage of a predictive lead scoring model is that the end result remains simple – good vs. bad, 0 vs. 100 – regardless of how complex the inputs get – self-serve or enterprise, account-based scoring, etc.

Operationalizing a predictive lead scoring model can be time-intensive: ingesting new data sources as the rest of your company infrastructure evolves and takes on more tools with data your model needs, refreshing the model regularly, and maintaining marketing & sales alignment.

Making the switch to a predictive lead scoring model only truly makes sense when your sales organization has reached a level of complexity that requires it to sustain repeatable growth.

“Where should my business be?”

Now that we’ve looked at how lead scoring models evolve as your marketing & sales organization grows, let’s come back to our initial conversation about what type of model you need for your current business problems.

As businesses scale, some buy a tank when a bicycle will do, while others are trying to make a horse go as fast as a rocket. We’ve put together a quick benchmark to assess the state of your go-to-market strategy and where your lead scoring model should be.

Some companies can stick with a Lamb lead scoring model up through 50 employees, while others need a predictive lead scoring model at 75 employees. While there are some clear limiting factors like sales organization complexity and plentiful historical data, understanding the core business problem you’re trying to solve (in order to scale revenue) will help guide reflection as well.

Why Lead Scores don’t reflect your Ideal Customer Profile

Marketing & sales alignment is fragile. Sales pushes back on leads whose scores diverge from their intuition: “Why am I getting assigned a lead based in India? We never close deals in India.” “Why is this lead scored low? We’re supposed to be going after accounts just like this.”

When sales pushes back on leads like these, they lose confidence in the lead score. They stop using it to prioritize outreach and don’t follow up with good leads sent their way. Marketing feels frustrated that its work isn’t valued while watching MQL disqualification rise and MQL-to-Opportunity conversion rates fall. Each side blames the other.

As we’ve discussed this problem with some of the best marketing ops leaders in the software industry, a common source of disconnect was a fundamental misunderstanding of the relationship between Lead Scores & Ideal Customer Profile (ICP). 

Time & time again, marketing & sales teams expect that leads who score the highest should be the ones that most look like their ICP, and that’s false. 

Few teams have explicitly discussed this, so let’s dive in.

Defining your ICP & Lead Score

You’ve done your persona research. You know everything about Grace the Growth Guru, Frank the Finance Freak or Sheila the Sales Sherpa (persona researchers love alliteration). You know exactly the type of customers you want to go after, so you build out your ICP – company size, geography, industry, revenue, integrations – as a function of the type of customer you want to go after. Great.

Your ICP will help guide you in your product roadmap – “What does Molly need?”, “How does this bring value to CompanyX?” – as well as your marketing & sales strategy.

Your ICP is the goal. It’s where you want to go. It can and should be informed by the past (data), but it is a representation of where you want to go, not where you are.

A Lead Score, meanwhile, is a quantifiable valuation of the quality of a lead. In early stage companies, it is often used to weed out spam and elevate big name VIPs to the top. As a company grows, the sales process complexities increase: tiered sales teams for self-serve vs. enterprise, geo-specific assignment, mixed inbound/outbound strategy & growth teams competing against both.

Any lead that looks identical to 100 leads that all turned into opportunities should be routed to sales and prioritized with a VIP treatment. Any lead that looks identical to 100 leads that stick to a free plan or have long sales cycles for low deal amounts should be ignored or prioritized as low importance.

When Lead Score & ICP differ in opinion and why.

“This lead is garbage”

Intuition is a powerful thing, and it often serves sales people well as they build relationships with prospects in order to help them solve a core problem; however, sales people interact with <1% of all leads, and their sense of lead quality is often based on a single qualitative data point. When a highly scored lead “doesn’t look good,” the objection usually comes down to a single data point.

Last year we encountered a sales team who wanted to override the score for leads based in India. They believed the market was not valuable to them, both in terms of available budget and operational costs. Yet 10% of their new revenue in the previous quarter came from India. When they understood that, they asked for the country to be hidden from their sales team.

There’s a lot to unpack here. Of course it’s not good to have a sweeping bias about an entire country, especially when it is to the detriment of your sales goals. For this company, we’re also not saying that all leads from India should get priority. We’re saying that they should prioritize 100 good leads from India just as they would prioritize 100 good leads from anywhere else.

Intuition is powerful, but Data doesn’t lie. Marketing has a responsibility not only to be data-driven, but to make the insights of that data available to all customer-facing teams. Modern marketing teams can enable modern sales teams not by providing them with 100 data points about every lead, but by providing a few key data points that explain why a lead gets scored the way it does.

At MadKudu we call that Signals and it looks like this:

The combination of relevant firmographic & behavioral data points lets the sales team know why a lead scored a 92.

Inflection Points

Launching into new verticals & markets can present a real conundrum for your lead score. Historically, leads from, say, Japan have not converted (because you weren’t targeting the Japanese market, weren’t compliant, or weren’t a well-suited option), but this year you’re pushing into Japan, and your upcoming campaign should bring in hundreds of new leads from the country. You have expanded your ICP, but your Lead Score is still measuring the likelihood of conversion based on historical data.

The same problem arises as companies go up market, selling to increasingly large businesses. The added complexity here is that enterprise sales fundamentally looks different than velocity sales, so even if you’ve closed some enterprise clients in the past, your lead score may be heavily skewed towards velocity sales, making it hard to surface enterprise leads. You’ve expanded your ICP to include a new breed of business.

Overcoming Predictive Bias & Training your Model

ICP & Lead Scores diverge at inflection points. Fast-growing businesses need to increase existing market share at the same time as they seek to expand into new markets. Among leading go-to-market teams, we’ve observed two approaches that make this combination possible at scale.

One approach is to create dedicated Business Development & Growth teams whose purpose is to bypass the lead score and focus on new market development, booking meetings directly for AEs. BDR & Growth teams build up historical data over 3-6 months that can be used to retrain your lead score to account for the new market data.

Another method is to create hard exclusions to override your lead score. If your lead score is operationalized across the entire buyer journey, this is a quick way to experiment with new markets in an automatic way, but this should be rare. Hard exclusions are like a blindfold for predictive lead scores – you’re removing one of many signals from the equation, increasing the likelihood of false positives. As you make strategic changes to your business, they may be necessary for overcoming predictive bias.

Actionable Definitions

It is vital to have a common understanding across marketing & sales around what these tools are, how they are made, and how they should be used. While your Ideal Customer Profile paints a picture of who your vision will serve in the next year, your lead score needs to be the best way for sales to prioritize outreach.

I’ve compiled a quick chart based on what we’ve seen from customers to illustrate the differences between ICP & Lead Score.  

This is something you can use to start a conversation at your next marketing & sales meeting about how Outbound Sales Strategy should be informed by your ICP or how you can increase forecast revenue for the next quarter based on the number of highly qualified leads you’re bringing in.

How To: Create your first Sales SLA Report

We’ve mentioned on countless occasions the importance of having a Smarketing SLA. In this article we provide instructions to create your first sales SLA report in Salesforce.

Pre-requisites

While other CRMs will be documented soon, the focus of this “how to” is on setting up Salesforce and building Sales SLA reports to measure the consistency of your SDR team’s follow-up.

General considerations

To create the SLA Report, you’ll need to have the right information available at the Lead level. The overall idea is to create a “Time to Touch” field on the Lead object.

Whenever that field is blank, the lead was never touched. This means that your reps never got around to reaching out to that lead (or that the outreach wasn’t tracked).

Whenever the field value is greater than 0, the lead was touched, and the field value will be the time difference between the lead creation date and the date of the first touch. We’ll be using activity completion dates to achieve this. Activities can be Calls, Emails or Meetings (but you can easily customize this list).

This tutorial leverages Rollup Helper, which is a free app on the AppExchange marketplace and a great way to get started without building custom computations or ETL.

Steps to create your Sales SLA

Enable Aggregations in Salesforce

Install Rollup Helper from the AppExchange

Customize Salesforce fields

Create the following custom fields at the Lead Level:

  • Number of Touches is a number field with 0 decimal
  • Date of First Touch is a Date/Time field

Create aggregated fields with Rollup Helper

  • Open Rollup Helper by opening the “App Launcher”
  • Create a new “Date of First Touch” Rollup
    • Child Object = Task
    • Relationship Field = Name ID
    • Rollup Type = Minimum
    • Field = Created Date

Create a Custom Filter

Create a custom filter with the following criteria: Name = Sales Touches

  • Filter Criteria
    • You’ll need to click on “Show More” at the bottom of the Field List
    • Type = Call, Email, Meeting
    • Status = Completed
    • Here is what the filter should look like
    • Click on the “Save” button at the bottom of the page
    • Click on the “Save and Run” button at the bottom of the page
  • Create a “Number of Touches” Rollup (this populates the count field created earlier)
    • Child Object = Task
    • Relationship Field = Name ID
    • Rollup Type = Count
    • Select the “Sales Touches” filter that we created earlier
    • Click on “Save” at the bottom of the page
  • Create a Formula Field at the Lead Level
    • Field Type = Formula
    • Field Name = Time to First Touch
    • Formula Return Type = Number
    • Decimal Place = 0
    • Formula: IF( Number_of_Touches__c > 0, ( Date_of_First_Touch__c - CreatedDate ) * 24, NULL )
    • Blank Field Handling: Treat blank fields as blanks
  • Create the report
    • Create a Bucket field on the “Time to First Touch” that looks like this
    • Group rows by “MK Customer Fit Segment” to look at your SLA based on the quality of leads
    • Group columns by “SLA”
    • Uncheck “Detail Rows” at the bottom of the report to hide the details
    • “Save & Run” and you’re all set

There you have it: you can now start measuring your average and min/max time to contact leads by channel, rep, and more. This is the first step toward identifying the biggest areas of improvement for your SDR team.

If you want to learn more or need help, please reach out here.

Why SDRs are at odds with Lead Scores

I don’t think I’m giving away any trade secrets by revealing that SDRs aren’t always the biggest fans of lead scores. Whether implementing a lead score built internally or a solution like MadKudu, SDRs are in the precarious position of being the primary users and having very little influence over the score itself.

SDRs carry a lot of intuition about what makes a lead good or bad. They aren’t surprised when a lead they perceive as good or bad is rated as such. And yet, they are viscerally frustrated when a lead they perceive as ‘bad’ is rated otherwise, and vice versa. Even if the lead score is scoring leads perfectly, an SDR’s core metric – the number of demos booked – is often undermined by the lead score.

Lead scores are meant to filter out bad leads while surfacing leads with the highest probability of converting to customers. A poor-performing lead score might surface leads that are likely to take a phone call, but not likely to convert. These are called NiNas (No Intent, No Authority), and they are like grease in your funnel – they look like they should go down smooth, and then they dry up halfway down the funnel, slowing down everything else that should pass through easily.

NiNas are great for an SDR’s quota, and while we know that NiNas aren’t good for the overall business, this means that a good lead score removes one of the easiest ways for an SDR to make quota.

Lead Scores should serve SDRs

While not exactly a black box, Lead Scores have historically operated as such for SDRs. Their purpose is to help SDRs prioritize the highest-value leads, which should be great for helping them hit their quota; however, without knowing why a lead is good, lead scores provide little more than expectations for how the engagement should go.

At the same time, Lead Scores are calculated by measuring a lot of valuable information, most of which is not visible to the SDR. Beyond job title and employee count, lead scores evaluate the predicted revenue of each company, the size of specific teams, the tech stack & tools that a company uses, whether their solution is B2B or B2C, whether it has a free trial or not, whether they’ve raised venture capital, and much more. There can be thousands of signals that are weighed initially in order to figure out which ones are the best determiners of success, against which every lead will be measured.

[Image: Sample MadKudu Signals sitting inside Salesforce]

In the above example, we can see how valuable it is to know that the lead is performing 150K daily API calls, that their company has multiple active users on the account, or that they are using Salesforce: these are indicators of the buyer persona and the use case, and therefore of the right message for the SDR to send.

For SDRs, these signals are context: context for why a lead is a good lead, and that’s exactly how a Lead Score can serve an SDR. Constructing the right message, understanding where your lead is coming from, identifying the tipping point that made them sales-ready: SDRs and Lead Scores are trying to do the exact same thing.

With one customer, MadKudu was able to demonstrate a disproportionate ratio between Opportunities created and Opportunities won – another way to look at that is prospects that made it past an SDR vs. prospects that made it past an AE. What you can see above is that having ‘Manager’ or ‘Operations/HR’ in a prospect’s title negatively impacted their odds of getting through an SDR (or negatively impacted an SDR’s chances of getting them to an AE), while it greatly increased their chances of becoming a customer if they made it to an AE.

Knowing which kinds of titles are good for AEs can help SDRs understand what to spend more time on, but it can also help SDR managers better train their SDRs on how to win with those personas.

Speaking with Francis on our weekly podcast, it was clear to me how important it is for SDRs to buy in to a Lead Score. If you’re in charge of implementing a lead score, you need to bring SDRs into the conversation early to understand how the lead score can serve them. Making a lead score actionable for SDRs means that your front line for feedback on how well your score is performing will be more incentivized to work with the score instead of against it.

How we use Zapier to score Mailchimp subscribers

There’s no better way to get your story out there than to create engaging content with which your target audience identifies. At MadKudu, we love sharing data-driven insights and learnings from our experience working with Marketing Operations professionals, which has allowed us to take the value we strive to bring our customers every day and make it available to the marketing ops community as a whole.

As interest in our content has grown, it was only natural that we leverage Zapier in order to quickly understand who was signing up and whether we should take the relationship to the next level.

Zapier is a great way for SaaS companies like us to quickly build automated workflows around the tools we already use to make sure our customers have a frictionless relevant journey. We don’t want to push every Mailchimp subscriber to Salesforce, because not only would that create a heap of contacts that aren’t sales-ready, but we may end up inadvertently reaching out to contacts who don’t need MadKudu yet, giving them a negative first impression of us as a potential customer.

Today we can see which newsletter signups sales should be paying attention to. Here’s how:

Step 1: Scoring new newsletter subscribers

The first step is to make sure you grab all new subscribers. Zapier makes that super easy with their Mailchimp integration.

Next we want to send those new subscribers to MadKudu to be analyzed. While MadKudu customers have a dedicated MadKudu integration, Zapier users who aren’t a MadKudu customer can also leverage Zapier’s native Lead Score app, which is (you guessed it) powered by MadKudu.

Step 2: Filter by Lead Score

We’ve got our MadKudu score already configured so after I feed my new subscriber to MadKudu, I’m going to run a quick filter to make sure we only do something if the Lead Score is “good” or “very good.”

If you’re worried that the bar will filter out potentially interesting leads, consider this a confidence test of your lead score.

[Image: Zapier filtering by Lead Score quality]

Step 3: Take Action, Communicate!

For Mailchimp signups that pass our Lead Score filter, we next leverage the Salesforce integration in Zapier to either find the existing contact inside Salesforce (they may already be there) or create a new lead. Salesforce has made this very easy to do with the “Find or Create Lead” action in Zapier.

Once we’ve synced our Mailchimp lead to Salesforce, we use the Slack integration on Zapier to communicate everything we’ve created so far to a dedicated #notif-madkudu channel, which broadcasts all the quality leads coming from all of our lead generation channels.

Directly inside Slack, our team can get actionable insights:

  • The MadKudu score, represented as 3 Stars (normal stars for Good/ twinkling for Very Good)
  • The signals that MadKudu identified in this lead, both positive and negative
  • A link to the lead in Salesforce, for anyone who wants to take action/review
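If you ever outgrow the Zap, this Slack step is only a few lines of code against Slack’s incoming-webhook API. In the sketch below, the webhook URL, message fields, and star formatting are placeholders that simplify the Zap described above.

```python
# Posting a qualified-lead notification to Slack via an incoming webhook.
# The webhook URL and message fields are placeholders.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify_good_lead(email, score_label, signals, salesforce_url):
    stars = ":star::star::star:"  # imagine twinkling stars for "very good"
    text = (f"{stars} New qualified subscriber ({score_label}): {email}\n"
            f"Signals: {', '.join(signals)}\n"
            f"Salesforce: {salesforce_url}")
    requests.post(SLACK_WEBHOOK_URL, json={"text": text})

notify_good_lead("jane@bigco.example.com", "very good",
                 ["500+ employees", "uses Salesforce"],
                 "https://example.my.salesforce.com/...")
```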

Actionable Lead Scoring applied to your Newsletter

Our goal here isn’t to reach out to newsletter subscribers – we want to build a long-term relationship with them, and we’re happy to keep delivering quality content until they’re ready to talk about actionable lead scoring. What we’re able to do is see, qualitatively & quantitatively, the number of newsletter subscribers we have who are a good fit for MadKudu today.

This helps marketing & sales stay aligned on the same goal. Marketing is measuring newsletter growth with the same metric it’s using to measure SQL generation.

Segmenting Funnel Analysis by Customer Fit

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations – like applying lead scoring to funnel analysis. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

A lead score is the foundation for your marketing & sales alignment. It creates accountability for both teams and is the foundation of a strong Sales SLA. A foundation is only as useful as what you build on top of it, and that’s why we talk about Actionable Lead Scoring – leveraging your lead score to create a frictionless journey. Today we’re going to focus on how you can leverage your lead score in funnel analysis to see where your best leads are falling off.

Funnel Analysis & Actionable Intelligence

Understanding the customer journey’s inflection points and conversion rates is essential to scaling & maintaining success as a software company; however, the analysis you’re doing is just as important as the data you’re using to generate that analysis.

The goal of funnel analysis is to look at ways to remove friction from the customer journey, to improve activation & conversion, and to make sure that the users who should engage most with your product do. Accomplishing that goal without segmenting by lead score is like turning every lead into an opportunity in Salesforce and then trying to improve your win rate. You need to start with the right metric by answering the right question: what are my best leads doing, and how can I make their journey better?

If you're not applying lead score to funnel analysis, you're making decisions based on flawed data.

Applying Lead Score to Funnel Analysis

Let’s imagine you want to look at the first 15 days of user activity in your self-service product, which corresponds to your 14-day free trial and immediate conversion. Of course, you already know that 50% of conversion on freemium occurs after the trial expires, but you’re looking to identify engagement drop-off before the trial even expires. After all, customers can’t convert if they don’t stay active.

A simple cohort analysis of all users who signed up over a two-week period would show that over 60% are dropping off in the first 24 hours, a smaller chunk 5 days out, and another group at the end of trial. You might conclude that you need to rework your onboarding drip campaign’s first emails in order to combat that big next-day dropoff. That would make sense, except are the people who are dropping off the prospects that matter most? Probably not.

Very good leads have a different funnel than very bad leads

One MadKudu customer came to this exact conclusion, and despite various drip campaign tests, they didn’t see that 60% drop-off move. Then we segmented their funnel analysis, looking at how very good, good, bad & very bad leads behaved, and we found that most of that 60% drop-off was very bad leads: they had made their sign-up process so frictionless that they were getting spam sign-ups who were never going to actually use the product. As it turned out, the small dip after 5 days corresponded to the biggest area of drop-off for very good leads, who were dropping off at the end of their intense drip campaign, which only lasted 5 days.

In this case, not segmenting by customer fit completely masked where their focus should be, and they spent time trying to get spam signups to stay engaged with their product instead of looking at how their highest value prospects were engaging with their product.
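Here’s what that segmentation looks like as a sketch with pandas; the DataFrame columns and data are illustrative, with one row per trial user.

```python
# Segmenting day-1 drop-off by customer fit instead of one blended number.
import pandas as pd

df = pd.DataFrame({
    "customer_fit": ["very good", "very bad", "very bad", "good", "very good"],
    "days_active":  [5, 1, 1, 14, 6],
})

# Share of users in each segment who went inactive within the first day.
dropoff = (df["days_active"] <= 1).groupby(df["customer_fit"]).mean()
print(dropoff)
```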

Our recommended Setup

If you’re looking to start segmenting funnel analysis by Customer Fit, our recommended MarTech stack is to feed MadKudu into Amplitude, a product analytics solution, using Segment’s customer data platform.

Account-Based Engagement and the Fallacy of Job Titles

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations, such as Account-Based Engagement. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

As we saw recently with the Sales SLA, the path to alignment often starts & ends with clear definitions of metrics. The leads marketing hands to sales need to have the same definition & measurement for success, which is where actionable lead scoring plays a key role in establishing lasting alignment.

If we step back from Sales & Marketing and look at aligning each department to business objectives, we can see that metric disjunction can result in each individual team being successful while ultimately failing to create a relevant customer journey at scale.

The fallacy of job titles

One area where we often observe this is when we run funnel analysis by customer fit and look at job titles as predictors of activation and conversion. On self-serve tools such as API-based products, we often see that someone with a developer title is more likely to activate but very unlikely to convert (that is, to hand over the credit card), whereas someone with a CEO/owner title is more likely to hand over a credit card but less likely to activate.

One analysis we recently ran for a customer demonstrated this perfectly:

[Image: How job title affects conversion | Account-Based Engagement]

  • Developers convert 60% less than the average user
  • Founders, CEOs & marketers convert 70-80% more than the average user.

When we look at conversion & activation side-by-side for this same customer, the numbers speak for themselves:

[Image: Conversion vs. Activation | Account-Based Engagement]

  • Founders/CEOs don’t use the software that much but end up converting at a high rate
  • Product & Project managers have a higher activation but lower conversion rate

Product teams are historically motivated by increasing activation by building an increasingly engaging product; however, a developer is unlikely to respond to marketing’s nurturing emails or jump on a first sales call no matter how active they are on the product.

Likewise with more sales-driven products like enterprise software, SDRs are often singularly focused on the number of meetings they can generate for their AEs; however, low-level team members are significantly more likely to jump on a phone call and significantly less likely to convert as compared to their director counterpart.

In both of these instances, we see that product & sales development are able to optimize for their metric without accomplishing the core business objective of creating a great customer journey.

How Account-Based Engagement changes the rules

What this comes back to is account-based engagement, a nascent term in the marketing space stemming from the principle of account-based marketing but extending it across the entire customer journey and to all customer-facing teams. Where account-based marketing encourages running campaigns to generate interest not at the individual lead level but the account level – especially important when you have multiple stakeholders in the decision-making process – account-based engagement extends that to all teams, meaning that:

  • Product teams should seek not only to make as many active users as possible, but to create active accounts: building features that encourage getting other stakeholders involved or making it easy for your hero to evangelize your product value to other stakeholders.
  • Marketing teams should seek to generate not marketing qualified leads but marketing qualified accounts, including nurturing existing accounts in order to get other stakeholders involved so as to set sales up for success.
  • SDRs should seek to generate meetings at the account level, not at the lead level, and shouldn’t be working on accounts where the necessary stakeholders are not already involved.

Account-Based Engagement | Identifying hidden opportunities

We’ve been recently working with two of our bigger customers who have a prosumer user base to identify marketing-qualified accounts that aren’t getting attention. We do this by looking not only at customer-fit at the account level – does the account look like the type of accounts that typically convert when sales engages – but also at behavioral-fit: are they engaging with the product the way paying customers typically do?

Sales reps who are qualifying leads as soon as the account is created aren’t going to be able to sift through the hundreds of warm accounts to identify which accounts have engaged properly (and been properly engaged) to be sales-ready; however, this is core to Account-Based Engagement. Just as our Sales SLA gives a common metric for marketing & sales to work towards, so Product, Customer Success, Sales & Marketing all need to have a common qualification criteria for an account in order to be aligned on how best to achieve business goals.

Remember: In B2B, you’re not selling to users, you’re selling to Accounts

The goal is not to reduce all teams to a single metric like revenue-generated, but rather to help reduce the natural tendency to game a metric by linking a common thread between the metrics that we use to measure success. That thread is Accounts.

It is all too easy to lose track of the fact that selling B2B software means that a company is going to buy your software, not a person. There are users, decision-makers, stakeholders and other advisors in the buying process, but at the end of the day a company is going to make a decision about whether to pay another company for their solutions. In this respect, every team should be focused on how to acquire, activate, convert & retain accounts, because at the end of the day it is not a user that will churn but an account.