Scaling live chat with predictive lead scoring

Today’s buyer has evolved. The rise of smartphones, messaging apps, and other groundbreaking technologies has led to a new set of expectations for buyers. They can get exactly what they want in real-time, on-demand, whether that’s scheduling a ride, booking a place to stay or renting a movie.

Of course, these expectations are quickly carrying over to B2B sales teams too. Forms and lengthy follow-ups are out. Live chat and real-time conversations are in.

“A recent study from Twilio showed that 9 out of 10 consumers want to be able to use messaging to talk to businesses.”

Thousands of businesses are embracing this reality.

More and more companies are installing live chat solutions like Intercom, a leading tool proven to help you convert visitors with intent.

Live chat has become a big priority, if not the #1 priority, for a lot of CMOs because it’s a new way to generate SQLs. But handling live chat at scale is not an easy story with a happy ending.

Live chat is amazing when it comes to sending automated messages. However, automated messages don’t always lead to conversions, and sales teams usually don’t want to talk to unqualified leads/customers/free users.

How many sales reps come to you complaining about the quality of leads coming to the website? Automated messages allow you to scale your live chat. But does it make sense to automate every message you send, yet not be able to qualify leads faster?

The idea is to unlock modern marketing plays and allow you to schedule qualified calls while you sleep.

At our webinar, we’ll share the top 5 plays we see modern marketing teams using on live chat solutions like Intercom, and we’ll give you a rundown of how we designed MadKudu’s Intercom integration to make it easier than ever to scale live chat.

Feel free to join us on the 20th of June!

Combining MadKudu’s intelligence with HubSpot to supercharge your buyer journey

When SaaS companies first start welcoming customers on board, HubSpot is a natural choice. Its pricing scales with your needs and its tools reach across the entire buyer journey: from lead generation to customer acquisition. By using HubSpot’s various engagement tools like landing pages, ad management, live chat & email campaigns, businesses generate valuable data about what turns prospects into customers.

As they grow, SaaS companies like InVision, Deputy, Front & AppCues complement HubSpot with best-in-breed tools for mission-critical tasks: product analytics data to track user onboarding & activation, data enrichment to learn more about user behavior and profile, and a unified platform for customer data generated by your product and leveraged by all teams.

As data and engagement tools multiply, HubSpot’s core marketing & sales tools continue to scale well; however, SaaS companies struggle to leverage all of their customer data inside HubSpot. They want to feed MadKudu’s intelligence & signals into HubSpot to build better buyer journeys.

Up until today, there have been only two ways to score leads in HubSpot: building your own lead score manually using their point-based framework, or using their predictive lead scoring solution. Manual lead scoring requires constant upkeep, tweaking & analysis to make sure you’re identifying the 20% of the leads that will generate 80% of revenue. Manual scoring is most often based on preconceptions, which means it misses non-obvious leads that are actually a great fit in disguise.

HubSpot’s predictive lead scoring solution only leverages HubSpot data and misses out on the rest of your 1st party data and enrichment tools.

That’s why we’re super excited to release MadKudu for HubSpot, baking MadKudu into the entire HubSpot marketing & sales suite of tools. Out of the box, you will be able to:

  • Use all your customer data inside your scoring (Segment or Amplitude for in-app activity, Stripe for billing…)
  • Have the lead/account score available in all your other tools so you can optimize the entire funnel
  • Get account scores, for simple and powerful ABM execution
  • Understand why each lead is scored the way it is, thanks to our powerful signals and correlations.

We’ve been testing MadKudu for HubSpot in beta for the past few weeks with our great customers. We’re really excited to get it into more HubSpot users’ hands. You can learn more about MadKudu for HubSpot here.
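
To make the integration concrete, here is a minimal sketch of what pushing a score into HubSpot can look like, using HubSpot’s CRM v3 contacts endpoint. The property names (madkudu_customer_fit, madkudu_fit_score) are illustrative assumptions, not the actual fields the integration writes.

```python
import requests

HUBSPOT_TOKEN = "..."  # HubSpot private app token

def push_score_to_hubspot(contact_id: str, fit_segment: str, fit_score: int) -> None:
    """Write a lead score into custom contact properties via HubSpot's CRM v3 API.

    The property names below are hypothetical; the real integration
    writes its own fields.
    """
    resp = requests.patch(
        f"https://api.hubapi.com/crm/v3/objects/contacts/{contact_id}",
        headers={"Authorization": f"Bearer {HUBSPOT_TOKEN}"},
        json={"properties": {
            "madkudu_customer_fit": fit_segment,  # e.g. "very good"
            "madkudu_fit_score": str(fit_score),
        }},
        timeout=10,
    )
    resp.raise_for_status()
```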

MadKudu + SOC 2 Type II

At MadKudu, we understand that securing and protecting our customers’ data is of utmost importance. To that end, we set out last year to become SOC 2 Type II compliant. SOC 2 Type II reports are the software industry’s gold standard for evaluating internal organizational controls. We are proud to announce our SOC 2 Type II report as proof of our commitment to protecting your data across all aspects of our organization. Our report was completed by Armanino LLP, a respected authority in the field, following the guidelines of the AICPA.

Whereas a Type I report is a point in time assessment of a company’s systems and what controls are in place to support them, a SOC 2 Type II report goes the extra mile – it evaluates, over many months, whether those systems and controls operate effectively.

This successful audit is only a milestone and we will continue our efforts to provide and ensure a safe, secure environment for your data.
To learn more about security at MadKudu or request a copy of the report, please visit https://www.madkudu.com/security

Hac Phan is a DevOps Engineer at MadKudu. He is, among many things, responsible for security and compliance.

Beyond MQLs & SQLs: How to use your product to qualify leads

Ask a startup how they generate new customers and, chances are, they’ll talk in terms of marketing qualified leads and sales qualified leads.

Marketing qualified leads (or MQLs) are website visitors who have the potential to become great customers: they’ve actively engaged with your company, and they look like a good fit for getting value from your product. Sales qualified leads (or SQLs) have gone a step further, taking an action that suggests a willingness to actually buy from you.

But only 13% of marketing qualified leads ever become sales qualified, according to a B2B sales benchmark report. Those that do take 84 days of education and nurturing to be deemed “sales ready.” Even then, just 6% of those supposedly “sales-ready” prospects ever become paying customers.

Thankfully, there’s another way—a qualification framework that generates active, engaged leads that require less effort to close and make better long-term customers. Instead of MQLs and SQLs, we need to start talking about PQLs—product qualified leads.

What are product qualified leads?

When a lead becomes marketing or sales qualified, we’re using their interactions with our website—downloading an eGuide, viewing a pricing page, or submitting a contact form—to predict when they’re ready to buy.

We might reach out to try to arrange a sales call with somebody who just wanted to read a free whitepaper, or cold email a website visitor who clicked onto a pricing page by mistake. A common issue is that someone from a right-fit company gets captured as a lead but isn’t hot enough to warrant a conversation, which leads to a high volume of MQLs but a low volume of SQLs.

Instead, the PQL—product qualified lead—framework is radically different, using in-app behavior to work out exactly when a lead is ready to purchase.

The MQL/SQL framework relies on website behavior to predict when a lead is ready to buy. The PQL framework uses a more direct indicator: in-app behavior.

The PQL model works because SaaS companies can be set up to allow users to get value from the product before making a serious commitment:

  • Freemium products, like Airtable, offer both free and paid packages.
  • Free trial products, like ProsperWorks, offer an introductory time- or feature-limited product experience.
  • Self-service products, like Zuora, allow users to spend as little (or as much) as they like.

This, in turn, allows you to identify when leads become sales-ready with much greater accuracy. Instead of relying on a proxy of sales readiness, it’s possible to see how users actually engage with your product. It works by identifying actions that show users have explored the product and received enough value that they’re ready for a sales conversation.

Product qualified leads are also much easier to turn into customers. The MQL/SQL framework puts the cart before the horse, selling to users before they’ve successfully adopted the product. Sales reps need to do more education and persuasion before a sale, and customer success reps have to work harder to ensure adoption afterwards.

By definition, product qualified leads have already adopted your product. They understand how it works, and they’re getting value from it. As a result, the sales process is little more than a formality. In fact, PQLs close customers at 6x the rate of SQLs, according to Redpoint Ventures partner Tomasz Tunguz.

How to find PQLs

In its simplest form, identifying product qualified leads requires a behavioral trigger: an in-app action or series of actions that correlates most often to users being sales-ready. When a prospect takes that action, they become product qualified and get passed on to the sales team.

For a company like Expensify, that trigger might be the submission of 10 invoices. For Slack, it was a team sending 2,000 messages.
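
In code, a behavioral trigger can be as simple as a threshold check over event counts. Here is a minimal sketch, assuming a hypothetical event-count store keyed by event name; the thresholds mirror the Expensify and Slack examples above.

```python
# Hypothetical PQL triggers mirroring the examples above:
# Expensify-style (10 invoices submitted) and Slack-style (2,000 team messages).
PQL_TRIGGERS = {
    "invoice_submitted": 10,
    "message_sent": 2000,
}

def is_product_qualified(event_counts: dict[str, int]) -> bool:
    """An account is product qualified once any tracked action crosses its threshold."""
    return any(event_counts.get(event, 0) >= threshold
               for event, threshold in PQL_TRIGGERS.items())

# Example: a team that has sent 2,300 messages gets passed to sales.
assert is_product_qualified({"message_sent": 2300})
assert not is_product_qualified({"invoice_submitted": 3})
```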

“Based on experience of which companies stuck with us and which didn’t, we decided that any team that has exchanged 2,000 messages in its history has tried Slack—really tried it.” –Stewart Butterfield, Slack

At that 2,000 message milestone, a typical team of 10 people has used Slack for a week. They’ve experienced the product’s core features—quick collaboration, fewer email exchanges, rapid file sharing—and seen firsthand the impact Slack could have on their long-term productivity. Once a team has hit that milestone, they have a 93% retention rate.

Using a tool like MadKudu, these behavioral triggers can be combined with demographic data—looking both at customer fit and likelihood to buy—to fully automate the lead qualification process.  Pushing the MadKudu score to sales tools like Salesforce allows sales reps to view a detailed breakdown of every lead’s current qualification status, while allowing Ops teams to prioritize lead routing as well.

MadKudu combines in-app behavior with demographic data, fully automating the lead-qualification process.

Manually working out which actions correlate with sales-readiness can be a tedious process, one best left to MadKudu; meanwhile, product analytics tools like Amplitude play a valuable role in the modern marketing stack.

More than just a great way to feed precious behavioral data to MadKudu, products like Amplitude help product and marketing teams dive granularly into the customer journey, letting you drill down into the last five actions taken by your users. That visibility can surface opportunities to improve the customer journey and inspire experiments.

By tracking conversion rates resulting from each of these actions, you can learn which behavioral signals are indicative of sales intent and adjust your definition of a product qualified lead to match.

How to prioritize PQLs

For a company that generates thousands of leads, it isn’t always enough to have a binary, qualified/unqualified system. You also need a way to prioritize following up with those leads.

This was a problem experienced by HubSpot’s VP of Product, Christopher O’Donnell. When he dug into the company’s acquisition strategy, he identified four distinct sub-types of product qualified leads:

  • Free users who have hit a given PQL criteria
  • Users who have requested sales assistance
  • Users who have reached a limit in their free plans
  • Users who have purchased without any sales involvement

At one end of the spectrum, we have a free user who has triggered our PQL criteria, completing an action like sending 2,000 messages or submitting 10 invoices. Though this user has experienced the core value of our product, there’s no guarantee they’re ready to buy: they might be happy sticking to our free plan.

Of all our PQLs, these users will require the most sales involvement to become customers: they need to be persuaded that a paid plan is significantly more valuable than the free plan they’re currently using.

Most companies define “Hand raisers” as PQLs who have filled out a contact form to explicitly ask for sales support. They’re more sales-ready than other free users but might still require a few sales touches to convert.

Next are users who have tripped a feature limit on their free plan, like a Slack user reaching their message history limit. These users have successfully adopted the product, so much so that their usage has outstripped their free plan, requiring very little persuasion to become paying customers.

At the far end of the spectrum are “touchless purchases,” free users who are so sold on the value of the product that they become fully-fledged customers without any sales action.

Each of these users is product qualified, but they differ in how much nurturing they need to become customers. By identifying multiple product qualification criteria, it becomes easier to recognize these differences and tailor the sales process to their exact needs.
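
One way to operationalize this spectrum is a simple priority ordering over the four sub-types, from least to most nurturing required. A sketch, with invented labels:

```python
# O'Donnell's four PQL sub-types, ordered by how little nurturing they need.
# Labels are invented; touchless purchases skip sales entirely and can be
# routed to customer success instead.
PQL_PRIORITY = [
    "touchless_purchase",   # already converted without sales involvement
    "hit_free_plan_limit",  # usage outstripped the free plan
    "hand_raiser",          # explicitly asked for sales support
    "hit_pql_criteria",     # experienced core value; most persuasion needed
]

def nurturing_rank(sub_type: str) -> int:
    """Lower index = less nurturing required before revenue."""
    return PQL_PRIORITY.index(sub_type)

leads = [("acme", "hit_pql_criteria"), ("globex", "hit_free_plan_limit")]
leads.sort(key=lambda lead: nurturing_rank(lead[1]))  # globex comes first
```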

How to turn PQLs into customers

Instead of trying to persuade someone to purchase a product they’ve never used, you’re helping them get more value from a product they already love. Instead of guessing their needs from a contact form submission or the pages they’ve visited on your website, you can actually see how they’re using your product and shape your sales process to match.

  • You can find a lead’s most-used product feature and talk through the extra functionality that comes with a paid plan.
  • You can dig into the limitations they’ve hit on their free package and help them choose a tailor-made plan that matches how they actually use your product.
  • You can work out which actions they need to take to get more value from the product and send personalized nurturing emails and in-app reminders.

An automatic up-sell message is triggered when a Slack team hits their 10,000 message limit—making it quick, easy and beneficial to upgrade to a paid plan.

By switching to PQLs, you’re no longer reliant on slick sales patter to close deals. You’re letting your product—the heart of your business, and the reason your customers part with their money each month—sell itself.

Putting customer experience before customer revenue

Product qualification isn’t just a way to speed up sales and improve conversion rates. By using in-app engagement as your primary qualification metric, you’re making a clear, unequivocal declaration: customer experience comes before customer revenue.

Instead of focusing your energy on selling a product to people who have never used it, you’re offering a great product experience, up front, and using the strength of your product to sell itself. You’re using your sales team to support the product and making product adoption your number one priority. You’re generating leads that are easier to close—and most importantly—creating customers who are guaranteed to get value from your product.

How we use data and machine learning to solve the lead quality problem

This post originally appeared on Clearbit’s Blog.

When Simon Whittick joined Geckoboard as its first VP of Marketing, he took all the standard steps to attract more visitors to their site, convert them, and grow the SaaS company’s revenue. He and his team wrote content for their popular blog, ran paid advertising campaigns, and set up email nurture campaigns. At the end of his first year, he was as successful as almost any other marketing executive in the industry. The site was attracting hundreds of thousands of visitors every month, and the business was booking millions in annual recurring revenue. But unknowingly, his success was driving one of his coworkers crazy.

While 10,000 leads a month earned Whittick applause at the company’s weekly all-hands meeting, it was keeping Geckoboard’s only sales development rep (SDR), Alex Bates, at the office on nights and weekends. Many of the inbound leads were self-serve customers who required no conversation with sales, or tire kickers who were not ready to buy. This left Alex manually qualifying leads and wasting tons of his time.

As a result, Geckoboard’s sales efficiency—one of the most critical metrics for any company—was slumping. In other words, Whittick wasn’t only driving a junior sales rep crazy; he was leaving money on the table.

Over the course of the next year, Whittick built a data-backed machine learning process to solve his company’s lead-qualification problems. In the process, he turned Bates into not only an adoring fan of his, but a one-man sales team as efficient as a typical ten-person SDR team. Without any technical background, Whittick figured out a way to change the shape of his company using data and a bit of machine learning.

One day toward the end of last year, Bates and Whittick sat down to discuss how they could solve their lead-quality problem. They had close to 10,000 leads coming in each month, but they needed to figure out which of those leads to send to sales. Their first instinct was to refine their ideal customer profile. They’d both read all the sales and marketing blogs preaching its importance. They started with an Ideal Customer Profile based on some simple audience rules.

On paper, Geckoboard’s ideal customer was a software company with more than 100 employees; they typically sold to a director or VP. But the truth was that a lot of companies outside that explicit profile would be great customers. For example, their initial model excluded a company with 95 employees even if it looked almost identical to one of their best customers. When they looked at their past data, they learned that leads in what they believed to be their ideal customer profile converted at twice the rate. But they only accounted for 0.7% of the conversions. They needed a more nuanced and flexible inbound model.

[Image: basic ideal-customer lead-qualification results]

Prior to joining the Geckoboard team, Whittick had worked for Marin Software. While he was there, he began to notice a shift in the way companies approached marketing. The most forward-thinking companies had begun to hire technical employees with degrees in mathematics instead of business. He heard stories of companies that were replacing entire teams and doubling revenue by using publicly (or privately) available information and crunching it to their advantage. As time went on, many of those employees left their jobs to provide the same level of automation to smaller companies without the budget to hire data scientists.

Between his time at Marin Software and Geckoboard, dozens of startups popped up to help larger companies embrace the data revolution. Companies like Clearbit mined the web for demographic and firmographic data that could be used to better filter leads. My own company, MadKudu, makes it possible to pull insights from that data without having a PhD in computer science. By 2016, the entire marketing technology landscape had shifted. With an executive team that embraced innovation and big bets, Whittick decided to make it Geckoboard’s competitive advantage.

The first step Whittick took was to develop his own flexible point-based scoring system. Previously a lead was either given a 1 or a 0. A lead was either a software company with 100 or more employees or it wasn’t. It was binary. The obvious problem with this model was that a company with 95 employees would be excluded. In addition, a company with 100 employees was given the same score as a company with 1,000 employees, even though the latter was twice as valuable.

In his new model, Whittick gave leads a score based on multiple criteria. For example, he’d give a lead 50 points for having 500 employees or more, and negative 2 points if it had fewer than 10 employees. A director-level job title would receive 10 points, whereas a manager would only receive 3. This created an exponentially larger score range, which meant that Bates could prioritize leads: once he had worked through the top-scoring leads, he could move on to B-tier leads. The model was weighted toward the large accounts Geckoboard knew could significantly impact revenue. For example, a US-based real estate company with 500 employees and a director buyer would be routed to the top of Alex’s lead list, even though it didn’t fit the software industry criteria.
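
As a sketch, that point-based model might look like the function below; the point values come straight from the examples in this paragraph, and everything else is illustrative.

```python
def score_lead(employees: int, title: str) -> int:
    """A sketch of a flexible point-based model, using the example values above."""
    score = 0
    # Company size: big accounts are weighted heavily, tiny ones penalized.
    if employees >= 500:
        score += 50
    elif employees < 10:
        score -= 2
    # Seniority: directors outrank managers.
    title = title.lower()
    if "director" in title:
        score += 10
    elif "manager" in title:
        score += 3
    return score

# A 500-person company with a director buyer outranks a 9-person shop.
assert score_lead(500, "Director of Marketing") > score_lead(9, "Manager")
```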

[Image: advanced point-based lead scoring]

This new model was similar to the way SDRs have scored leads for over a decade, only more efficient. Prior to automated lead scoring, sales reps were told by their managers to prioritize leads based on four criteria: budget, authority, need, and timing (or as it’s commonly referred to, BANT). This method is more flexible than a rigid ideal customer profile, but it is only as strong as the rep behind it. Human error, irrational judgment, and varying levels of experience lead to a process with little rhyme or reason. That’s why Whittick chose to automate the task and take humans out of the process entirely.

[Image: results of advanced point-based lead scoring]

Immediately the company began to see results from their lead-scoring investment. Within the first month, leads were converting at twice the rate. As a result, Bates was spending less time to close more deals. Sales efficiency—revenue collected divided by the time and resources to earn it—rose significantly. Still, Whittick knew he could improve the results and save Bates even more time.

One of the biggest shifts that Whittick saw in the technology industry was the speed at which data could be put to use as a result of new tools. In the old world that he inhabited, a lead couldn’t be scored until it hit a company’s CRM, and enrichment software took hours to append valuable data to a lead. With the new generation of tools, a lead could be enriched the moment a visitor entered an email address into a form. That information could be sent to the CRM and the lead scored accordingly before the visitor began typing in the next text box.

After his first lead scoring success, Whittick decided to make another bet. Bates frequently complained about leads that were obviously bad fits—the type of conversation that takes 30 seconds to know there isn’t a mutual fit. Many of the companies were too small to need sophisticated dashboards yet. Whittick enlisted one of the company’s front-end developers to help him solve the problem. They built logic into the product demo request page that would ask for a visitor’s email address and then, before sending them to the next page, score the lead. On the back end, additional information would be appended to the lead using Clearbit, and it would be run through MadKudu’s scoring algorithm. If it received a high-enough score, the next page would ask for the lead’s phone number and tell them to expect a call shortly; if the score was low, they’d be routed through a low-touch email cadence. It was radically successful.
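
Here is a rough sketch of that flow. Clearbit’s combined person/company lookup is a real endpoint, but the MadKudu call, response fields, and routing labels below are assumptions for illustration, not Geckoboard’s actual implementation.

```python
import requests

CLEARBIT_KEY = "..."
MADKUDU_KEY = "..."

def route_demo_request(email: str) -> str:
    """Sketch of real-time routing on a demo request form: enrich, score, route.

    Endpoint paths and response shapes for MadKudu below are assumptions.
    """
    # 1. Enrich the email with person + company data (Clearbit combined lookup).
    enriched = requests.get(
        "https://person.clearbit.com/v2/combined/find",
        params={"email": email},
        auth=(CLEARBIT_KEY, ""),
        timeout=10,
    ).json()

    # 2. Score the enriched lead (hypothetical MadKudu persons call).
    scored = requests.post(
        "https://api.madkudu.com/v1/persons",
        json={"email": email, "properties": enriched},
        auth=(MADKUDU_KEY, ""),
        timeout=10,
    ).json()

    # 3. Route: high scores see a phone-number page, low scores an email cadence.
    segment = scored.get("properties", {}).get("customer_fit", {}).get("segment")
    return "ask_for_phone_number" if segment in ("good", "very good") else "email_cadence"
```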

[Image: MadKudu-scored demo request form at Geckoboard]

Before implementing their real-time lead scoring solution, only about 15% of Bates’ conversations were meaningful. The new website logic meant that he could cut 85% of the calls he took every day and focus on higher quality accounts. Once again, sales efficiency increased significantly.

In addition to the speed at which information could be appended, processed, and acted on, Whittick saw another change in the marketing technology world: there was suddenly more data than most companies knew what to do with. Marketers could know what CRM, email server, and chat service a company used. They could know when a company was hiring a new employee, when they were written about by a major news outlet, and how much money they’d raised. It was overwhelming. But thanks to tools like Segment, marketers could pipe all that data into a CRM or marketing automation system and act on it. Then they could combine it with information like how frequently someone visited their own site, how often they emailed sales or support, and when they went through a specific part of the onboarding process. For a data-driven marketer like Whittick, this new world was utopia.

In conversations with Bates, Whittick learned that the best leads were ones that went through the onboarding process before the sales conversation. During the Geckoboard free trial, users were prompted to build their first dashboard, connect a data source, and upload their company’s logo and color palette. As is the case with many SaaS solutions, most users dropped off before completing all the steps. Those users weren’t ready for a conversation with sales. But when Bates was looking at his lead list, he had no way of knowing whether or not a free trial user had completed onboarding. As a result, he was spending at least half of his time with people who weren’t ready to talk or buy.

Combining usage data from the website and their app, Whittick set out to refine the lead scoring model even further. Each time a free-trial user completed a step in the onboarding process, it was recorded and sent back to the CRM using Segment. The model would then give that lead a couple of additional points. If the user completed all of the steps, bonus points would be added and the lead would be raised to the top of Bates’ lead queue in Salesforce. Again, Bates began spending less time talking to customers prematurely and more time having high-quality conversations that led to revenue. Whittick had figured out how to save the sales team time and increase sales efficiency further.
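
The plumbing for this can be sketched with Segment’s analytics-python library; the event names and point values below are invented for illustration.

```python
import analytics  # Segment's analytics-python library

analytics.write_key = "SEGMENT_WRITE_KEY"

# Hypothetical onboarding steps and their score bumps.
ONBOARDING_POINTS = {
    "Dashboard Created": 2,
    "Data Source Connected": 2,
    "Logo Uploaded": 2,
}
COMPLETION_BONUS = 10  # all steps done: raise the lead to the top of the queue

def record_step(user_id: str, step: str, completed: set[str]) -> int:
    """Send the onboarding event to Segment and return the points to add in the CRM."""
    analytics.track(user_id, step)
    completed.add(step)
    points = ONBOARDING_POINTS.get(step, 0)
    if completed == set(ONBOARDING_POINTS):
        points += COMPLETION_BONUS
    return points
```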

[Image: behavioral plus advanced point-based lead scoring]

But while Whittick and Bates were celebrating their improved conversion rate success, a new problem was emerging. By the summer of 2016, they had enlisted my team at MadKudu to automate their lead scoring. Rather than manually analyzing conversion data and adjusting their lead scoring model accordingly, our machine learning tool was built to do all the work for them. There was a small problem. Today, machine learning algorithms are only as strong as the humans instructing them. In other words, they are incredibly efficient at analyzing huge sets of data and optimizing toward an end result, but a human is responsible for setting that end result. Early on, Whittick set up the model so that it would optimize for the shortest possible sales cycle and the highest account value. He didn’t, however, instruct it to account for churn, an essential metric for any SaaS company. As a result, the model was sending Bates leads that closed quickly, but dropped the service fast too. Fortunately, the solution was simple.

After learning about the problems with his model, Whittick instructed MadKudu’s algorithm to analyze customers by lifetime value (LTV) and adjust the model to optimize for that. He also instructed it to analyze the accounts that churned quickly and to score similar-looking leads negatively.

Example: For Geckoboard, digital agencies were very likely to convert, and the old scoring algorithm scored them highly. However, agencies were 5x as likely to churn after 3 months, when the project they were working on ended.

At this point, the leads being sent to Bates were significantly better in aggregate than the leads he had previously been receiving. However, there were still false positives that would throw him off. While the overall stats on scored leads were looking great, the mistakes the model made hurt sales and marketing trust and were hard to accept. To combat this and make the qualification model close to perfect, Whittick had Bates start flagging any highly scored leads that made it through.

Through this process, they found that many of the bad leads that made it through were students (student@devbootcamp.com), fake signups (steve@apple.com), or more traditional companies that did not have the technology profile of a company who would likely use Geckoboard (tractors@acmefarmequipment.com). Whittick was then able to add specific, derived features to their scoring system to effectively filter these leads out and yet again improve the leads making it to Bates.
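
Those derived features can be approximated with a few pattern checks on the signup email’s domain. The deny-lists below are toy examples built from the addresses above; a production filter would be more nuanced.

```python
# Toy deny-lists approximating the derived features described above.
STUDENT_DOMAINS = {"devbootcamp.com"}           # students
UNLIKELY_SIGNUPS = {"apple.com"}                # fake signups at famous companies
NON_TECH_KEYWORDS = ("farm", "tractor")         # low technology-profile companies

def looks_like_bad_lead(email: str) -> bool:
    """Flag leads matching known bad-lead patterns before they reach sales."""
    domain = email.split("@")[-1].lower()
    if domain in STUDENT_DOMAINS or domain in UNLIKELY_SIGNUPS:
        return True
    return any(keyword in domain for keyword in NON_TECH_KEYWORDS)

assert looks_like_bad_lead("student@devbootcamp.com")
assert looks_like_bad_lead("tractors@acmefarmequipment.com")
assert not looks_like_bad_lead("cto@saas-startup.io")
```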

At this point, Geckoboard can predict 80% of their conversions from just 12% of their signups. By increasing sales efficiency with machine learning, Whittick found a way to enable Bates to do the work an average sales team of five could typically handle.

From self-driving trucks to food delivery robots, this is the story of twenty-first-century business. Companies like Geckoboard are employing fewer people and creating more economic value than enterprises ten times their size. Leaders like Whittick are center stage in this revolutionary tale, figuring out how to optimize sales efficiency or conversion rates or any other metric given to them, just like the artificial intelligence they now employ. But of course, this has been happening over many years, even decades. The difference—and this cannot be overstated—is that Whittick doesn’t have a PhD in applied math or computer science. The technology available to marketers today enables companies to generate twice the revenue with half the people.

Predicted Returns: Three metrics to measure paid acquisition performance in SaaS

For performance marketers, paid acquisition is an ever-changing jungle of opportunities and traps; a galaxy of ad networks, formats, channels & keywords, each with their own idiosyncrasies. What makes performance marketing so appealing is that every campaign, click, conversion & bid is trackable and analyzable. Tools like Google Analytics make it easy to comb through, filter, segment & visualize your campaigns, channels, costs & returns. Line up your traffic sources by campaign against your website engagement & conversion metrics and you can see how users behave after they click on each ad.

The problem with pay-per-click in B2B is choosing how to measure ROI. Optimizing for raw email generation rewards acquiring low-value emails, while the SaaS buyer journey may take weeks or months to convert to actual revenue, which is far too long and far too complex to measure campaign by campaign. Performance marketers want to know which spend is yielding results (to double down) and which spend isn’t (to shut it off).

Generating a few great leads at a high CPL yields much better results than generating many low-quality leads at a cheap CPL. SaaS marketers need a transaction metric that mimics eCommerce-style transactions to measure their performance against, and of course we have just the solution.

Measure performance marketing with smarter metrics

The goal of this play is to make sure we are investing our ad dollars into channels that generate qualified pipeline. We want to see which campaigns return a positive ROI in the long run (though we can cap this definition at, say, 12 months). All we need is MadKudu & Google Analytics (or any analytics solution that allows for segmentation by UTM tags) to get this done.

The play itself is quite basic. As always, at the core of modern marketing operations is leveraging historical customer data to build a model for what your best leads look like and do. We’re going to use MadKudu for this, since it’s easy to operationalize across the entire buyer journey and requires no in-house data scientists. You can read up on the three stages of lead scoring to start building your own.

Once we’ve embedded MadKudu into our conversion points (lead generation forms, user signups, etc.) – potentially via MadKudu Fastlane – we’re going to send MadKudu data back to a website analytics tool like Google Analytics so that a visitor’s data now includes their MadKudu score.
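
Assuming Universal Analytics with a custom dimension already configured (slot 1 here), a server-side Measurement Protocol hit is one way to attach the score to a visitor; all field values below are illustrative.

```python
import requests

def send_score_to_ga(client_id: str, segment: str) -> None:
    """Attach a MadKudu segment to a GA visitor via the Universal Analytics
    Measurement Protocol. Assumes custom dimension slot 1 holds the segment."""
    requests.post(
        "https://www.google-analytics.com/collect",
        data={
            "v": "1",               # protocol version
            "tid": "UA-XXXXXXX-1",  # your property ID
            "cid": client_id,       # GA client ID of the visitor
            "t": "event",
            "ec": "madkudu",        # event category
            "ea": "lead_scored",    # event action
            "cd1": segment,         # custom dimension: "very good", "good", ...
        },
        timeout=10,
    )
```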

Once that’s set up, there are three new ways you can measure your performance:

Cost per Qualified Lead

If your PPC campaigns are pointing to a lead capture form (webinar, eBook, free tool), you can look at which channels & campaigns are bringing in qualified pipeline in an ROI-positive way. Specifically, we want to compute the total spend on the campaign divided by the number of good and very good leads – some campaigns bring in many leads that don’t convert, while others consistently bring in very good, qualified leads. Identifying where you’re generating qualified pipeline efficiently might lead you to cut 50% of your paid ad spend (as it did for Drift).

Predicted Spend vs. Cost Per Lead

This may be more useful for campaigns that lead to account creation or demo scheduling, where the rest of the buyer journey follows the traditional path. Here we’re leveraging MadKudu’s ability to map users not only to their likelihood to convert but also to their predicted spend. This is calculated by looking at how much a lead resembles historical customers in each segment – “do they look like a pro plan?” With predicted spend, we can develop a threshold for cost per lead accordingly: we align our acceptable cost per lead (and therefore bidding threshold) with the average predicted spend of leads acquired.

By averaging predicted spend across leads who are predicted to convert or not, we’ll be able to quickly optimize for campaigns that acquire any combination of lead values so long as we are not paying on average more than they are worth.

Predicted Value vs. Campaign Cost

The last way we can view performance marketing ROI is by leveraging the predicted value field that MadKudu provides, for example, to Facebook’s Ad Engine. Beyond feeding this data to Facebook to train its bidding engine to bid on our best leads, we can look at all channels of spend through the lens of this metric and answer the question: are the leads we’re acquiring predicted to spend more than what we’re paying for them?

We can look either at average predicted value vs. cost per qualified lead or we can look at the sum total of predicted value vs. the total campaign spend. In both cases, we’re getting a picture of whether the leads we’re acquiring today will be profitable down the line.

Since Predicted Value is derived from predicted spend by plan multiplied by the % chance of conversion, we’re implicitly accepting a 12-month CAC. We can adjust this either by calculating predicted spend as 6 months of MRR instead of the plan sticker price (12 months of fully loaded MRR), or by dividing our predicted value by a variable as a function of the CAC threshold we want to set.
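
Here is a worked sketch of that arithmetic. The lead values and probabilities are invented; the point is simply comparing summed predicted value against campaign spend under a chosen CAC window.

```python
def predicted_value(predicted_spend: float, p_convert: float, cac_months: int = 12) -> float:
    """Predicted value as described above: plan spend (12 months of MRR) times
    conversion probability, optionally tightened to a shorter CAC window."""
    return predicted_spend * p_convert * (cac_months / 12)

# A campaign that cost $5,000 and acquired three leads:
leads = [(1200.0, 0.30), (4800.0, 0.10), (600.0, 0.05)]  # (12-mo spend, p_convert)
total_value = sum(predicted_value(spend, p) for spend, p in leads)
campaign_cost = 5000.0
print(f"ROI-positive: {total_value > campaign_cost} (value=${total_value:,.0f})")
```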

The Impact: -50% in Ad Spend

No matter which metric we optimize for – predicted value, predicted spend or cost per qualified lead – we’re arming ourselves with metrics that we can measure in real-time as our campaigns generate leads. Drift integrated MadKudu into Google Analytics and identified campaigns that were generating no qualified leads (although potentially many leads). As a result, overnight they cut 50% of their ad spend and saw a major dip in their website traffic but no meaningful dip in their pipeline creation.

That 50% in recuperated ad spend can go into doubling down on existing campaigns or creating new ones – either way, we’re increasing our performance marketing ROI.

Onboarding growth experiments for your worst users

Understanding who your product is best suited for is critical. If you know what your best leads look like and how they behave, you can design an ideal buyer journey for them, making sure that anyone who looks like your best leads stays on the buyer journey that your best leads take.

That said, like all channels for optimization, you eventually hit diminishing returns. The major holes get filled, and your customer journey works 95% as well as it ever will. What’s worse, by only focusing on creating a great experience for leads who look like leads who have historically converted well, you may create a self-fulfilling prophecy. If you’re only a good fit for your current ICP, you may never be a good fit for the ICPs you want to be targeting in the future.

We see this up close at MadKudu. Since predictive lead scoring models leverage your historical customer data to predict the likelihood that future prospects will convert, if you don’t feed it new data such as successful conversions from leads who historically haven’t converted well, then your ideal customer profile will never change.

Product growth, from a predictive modeling perspective, can be framed as an exercise in feeding the model new “training data” it can later use to adapt and expand the definition of your ICP, your best leads.

If your product is only usable in the United States because it requires a US bank account, address or phone number for authentication, leads from outside the U.S. will have a low likelihood of converting. If you build new features or expand into new markets but continue to score leads with old data, you may not give new leads a chance to have a great experience.


Growth (for low-touch) & business development (for high-touch) are great teams for breaking this cycle, and many MadKudu customers leverage these teams to create new training data by actively pursuing leads who haven’t historically converted well, but whom the business would like to target. This can mean expanding into new verticals or new markets, or launching new products altogether. All three are areas where historical customer data isn’t a great basis for predicting success, because the aim is to create success that can later be operationalized, orchestrated and scaled.

Parallel onboarding for good & bad leads.

Drift recently gave a talk during the Product-Led Summit about a series of onboarding experiments that segmented their best & worst leads while pushing to increase conversion among both segments. Looking at some of their experiments, it is clear that Drift’s aim was almost to prove the model wrong – that is, they wanted to maximize the chances that a low-probability lead would convert, which could later help retrain the definition of a good/bad lead.

Leads: good vs. bad

Good leads are those who are most likely to convert. In sales-driven growth companies, that means that, if engaged by velocity/enterprise sales, a lead will convert. For product-led growth companies with no-touch models, a good lead is one that is likely to achieve a certain MRR threshold, if activated.

We define good leads this way – instead of, say, the type of leads we want to convert – because we want to create as much velocity & efficiency for our growth-leading teams as possible. If we send sales a lead that won’t convert no matter how much we want it to, we are incurring an operating cost associated with that rep’s time. That counts double for leads that will convert the same amount whether engaged by sales or not, as we are unnecessarily paying out commission.

Product teams focused on lead activation & conversion waste time running experiments with little to no impact if they don’t properly segment between good & bad leads.

Drift’s 5 Growth Experiments segmented by customer fit

Drift used MadKudu to segment their entire onboarding & activation experience so that good leads and bad leads each received the best next action at each step of their buyer journey.

For Drift, onboarding starts before users create an account. Drift Forces the Funnel by identifying website visitors via IP lookup as they arrive on the website, scoring the account associated with the IP, and then personalizing the website experience based on the MadKudu score and segment.

For their best leads, Drift’s messaging is optimized to provide social proof with customer logos & key figures with a core call-to-action to talk with someone. Drift is willing to invest SDR resources in having conversations with high-quality leads because they convert more consistently and at MRR amounts that justify the investment.

For their worst leads – that is, leads that won’t convert if engaged by sales – Drift’s messaging is tailored towards creating an account and “self-activating,” as we’ll see in future experiments.
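
A minimal sketch of what that segment-driven personalization can look like; the segment names match MadKudu’s customer-fit buckets mentioned elsewhere in this piece, and the copy is invented.

```python
# Sketch of segment-driven personalization, mirroring Drift's split above.
# The copy and flags are illustrative.
CTA_BY_SEGMENT = {
    "very good": {"cta": "Talk to a human", "show_customer_logos": True},
    "good":      {"cta": "Talk to a human", "show_customer_logos": True},
    "medium":    {"cta": "Create a free account", "show_customer_logos": False},
    "low":       {"cta": "Create a free account", "show_customer_logos": False},
}

def personalize_homepage(madkudu_segment: str) -> dict:
    """Best leads are steered toward a conversation; the rest toward self-activation."""
    return CTA_BY_SEGMENT.get(madkudu_segment, CTA_BY_SEGMENT["low"])
```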

For major sources of traffic, like their pricing page or the landing page for visitors who click the link inside a free user’s chat widget, Drift iterates constantly on how to improve conversion. Some experiments work, like stripping out the noise for low-quality leads to encourage self-activation. Others, such as dropping a chatbot inside the onboarding experience for high-quality leads, don’t get as much traction despite good intentions & hope.

Intentional Onboarding

Sara Pion spent a good amount of time praising the impact of onboarding questions that allow users to self-identify intent. User-input fields can be very tricky – mainly because users lie – but Drift has found a strong correlation between successful onboarding and the objective the user declared when signing up.

As users onboard, good & bad users are unknowingly being nudged in two very different directions. Emails, calls to action, and language for good leads are geared towards speaking with a representative. That’s because Drift knows that good users who talk with someone are more likely to convert. Bad users, meanwhile, are encouraged to self-activate. Onboarding emails encourage them to deploy Drift to their website, to use certain features, and generally to do the work themselves. Again, that’s because Drift knows that talking to these users won’t statistically help them be successful – either because they don’t actually need Drift or because they want to use Drift their own way without talking to someone.

Personalize the definition of success for every user

Like most successful SaaS companies, Drift has invested an awful lot of energy making sure that their best leads have the best possible buyer journey; however, unlike most companies, they don’t stop there. They look at how they can optimize the experience for their worst leads as well, recognizing that even a 1% increase in conversion can be the difference between hitting their revenue goals or not given the massive volume of leads they get each month.

Timing is everything: Surfacing sales-ready accounts & the right contacts to engage.

Identifying sales-ready accounts to reach out to is the heart of freemium sales acceleration. Freemium businesses rely largely on product adoption to trigger the a-ha moment that will ultimately lead to successful sales engagement, so quantifying that a-ha moment in the form of activity scoring is crucial.

For SaaS sales reps with hundreds or thousands of accounts assigned to them, reaching out manually every three months yields little or no results. What are the odds that today is the day an arbitrary account is ready to buy? Very low, right? Coupled with that is the fact that an account may have dozens of associated contacts to choose from and engage. Randomly picking based on job titles and reaching out is a spray-and-pray strategy that yields equally unpredictable results.

Sales wants to know which accounts to reach out to and who within each account to contact. And we’ve got just the play for that.

Timing is everything: Surfacing sales-ready accounts & the right contacts to engage.

Our goal here is for sales reps to start every day with a list of accounts assigned to them that are ready to have a conversation. We’d also like to give that sales rep a filtered list of the contacts most likely to respond. That way sales reps spend all of their time crafting the most relevant message for their best leads.

This is a great play for freemium SaaS businesses with a large number of accounts where velocity & efficiency are key to success. This play is also good for products with a combination of low-touch and high-touch users – while you may have paying customers already, identifying when that paying account is sales-ready is key.

The bulk of this play is going to sit on MadKudu’s ability to build an accurate account-level behavioral scoring model. Structured data is going to be key – we can’t build an account-level model if we don’t have account data. When we run this play with InVision, we use Segment for product data & HubSpot for marketing data. We’ll also need Salesforce for our sales data.

Once we have our data piping correctly, MadKudu is going to identify the features & activity that best predict sales readiness by looking at historical product & marketing engagement and the resulting sales outcomes. Once the predictive model is built, MadKudu pulls in the latest activity data on a regular basis and looks for triggers.

When a sales-ready account is identified, MadKudu tags it in Salesforce and drops it into a daily report for sales reps. Within each sales-ready account, MadKudu’s models also look at the profile & activity of each contact in order to identify the contacts most likely to engage. Those contacts are recommended to the sales rep.

The Impact: +25% in Pipeline

InVision’s 25% increase in pipeline came from identifying accounts programmatically based on historical customer data. This came without any change to the product, just from optimizing for sales-readiness. If you’re only looking at lead activity today, you may be leaving money on the table. With InVision, we identified accounts where no single lead achieved the activity threshold for an MQL, but the account as a whole hit the MQA threshold. It is the combined activity of several users that makes an account ready to talk to sales.
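
A toy illustration of that aggregation, with invented thresholds and scores: no individual lead clears the MQL bar, but summing activity by account surfaces the MQA.

```python
import pandas as pd

MQL_THRESHOLD = 100   # per-lead activity score needed to MQL (illustrative)
MQA_THRESHOLD = 250   # account-level threshold (illustrative)

leads = pd.DataFrame({
    "account": ["acme", "acme", "acme", "globex"],
    "lead":    ["ana", "bob", "cho", "dee"],
    "activity_score": [90, 85, 95, 40],
})

# No single lead crosses the MQL bar...
assert (leads["activity_score"] < MQL_THRESHOLD).all()

# ...but one account's combined activity crosses the MQA bar.
account_scores = leads.groupby("account")["activity_score"].sum()
sales_ready = account_scores[account_scores >= MQA_THRESHOLD]
print(sales_ready)  # acme: 270
```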

Once this MQA model is built, we can begin to layer some of our other plays on top of it: Forcing the Funnel for sales-ready accounts by customizing the app or website, or reducing friction on forms and triggering chat for enterprise prospects with a Fastlane play. We can score accounts throughout the buyer journey.

We previously wrote about why activity scoring is so tricky, and you can see slides here from a joint talk given by InVision & MadKudu at HubSpot Inbound 2018.

Activity Scoring & Enterprise Free Trials: how to do it right.

Redpoint Ventures Partner Tomasz Tunguz recently published the results of their Free Trial Survey, which included responses from 590 professionals working at freemium SaaS businesses of all sizes and shapes. The survey had many interesting takeaways and I recommend taking the time to dive into the slides that Tunguz shared at SaaStr Annual this year.

One of the more interesting takeaways (that Tunguz discussed on his blog) was that activity scoring seems to negatively impact free trial conversions for high ACV leads. Tunguz found that Enterprise SaaS businesses using activity scoring see a 4% conversion rate for free trials vs. 15% for those not using activity scoring.

MadKudu has written a lot about free trials in the past, including the article Tunguz referenced when launching his survey, so it was natural for us to weigh in on this conclusion.

I asked for a few clarifications from Tunguz in preparing this article:

  • The survey defined activity scoring as lead scoring that leverages in-app activity (not email marketing, webinar engagement or other activities).
  • The conversion rate (4%/15%) is calculated against all leads, not leads that scored well, so we’re measuring the effectiveness of the funnel not the precision of the score itself.
  • We’re only looking at leads who participate in the free trial, not leads that schedule a demo or otherwise enter the funnel.

With that in mind, I believe there are two main takeaways and some data to support those conclusions.

Enterprise leads don’t want a Free Trial.

Summary: our data shows enterprise leads prefer to schedule a demo, while self-serve leads prefer a free trial (if available). Putting either into its counterpart’s funnel negatively impacts their likelihood to convert.

Free trial products design enterprise buyer journey calls-to-action – “contact sales,” “schedule a demo” & “request a quote” – in order to entice enterprise prospects. As Tunguz pointed out in the analysis of his survey, enterprise leads don’t typically try before they buy. They may sign up for a free trial to get a feel for the interface and do some preliminary feature validation, but the buying process is more sophisticated than that and lasts longer than your free trial.

One hypothesis for why activity scoring decreases conversion for enterprise leads in free trials is that enterprise leads shouldn’t be running free trials – or at least, they shouldn’t be having the same free trial experience. It is worth reading Tunguz’s piece about assisted vs. unassisted free trials to dive deeper into this subject.

Supporting this hypothesis is an experiment run by Segment & MadKudu looking at the impact of free trials & demo requests on the likelihood that self-serve & enterprise leads would convert. Segment Forced the Funnel by dynamically qualifying & segmenting website traffic and personalizing the website based on predicted spend. This allowed us to predict whether traffic was self-serve or enterprise.

“Self-serve traffic” would not see the option to schedule a demo while “enterprise traffic” would not see the option to sign up for a trial. They also ran a control to measure the impact on the funnel.

They found a negative correlation between self-serve conversion & requesting a demo. They also found a negative correlation between enterprise conversion & signing up for a free trial. Each buyer segment has an ideal customer journey and deviating from it (even into another buyer segment’s ideal journey) negatively impacts conversion.

The converse is equally true: pushing leads into their ideal customer journey increases their conversion rate by 30%.

Startups using activity scoring on high ACV leads should work to get high ACV leads out of their free trial by identifying them early on. Algolia, for example, prompts self-serve trial signups who have a high ACV to get in touch with someone for an assisted free trial.

Scoring activity at the account level

For SaaS businesses that go up-market or sell exclusively to enterprise, activity scoring at the lead level may not be sufficient. We worked with InVision to identify sales opportunities at the account level, importing all activity data from HubSpot & Segment and merging it at the account level. We analyzed the impact that various user personas had on the buyer journey and product experience.

Profiles that were more likely to be active in the product – marketers, analysts & designers – had a below-average impact on the likelihood to convert. Personas associated with a higher likelihood to convert – directors, founders, CEOs – had a smaller impact on activation.

Multiple personas are needed to create optimal conditions for an account to activate & convert on InVision. Their marketing team uses this knowledge to focus on post-signup engagements that will increase the Likelihood to Buy, the behavioral score built by MadKudu.

We see similar findings in the buyer journey as we examine how various personas’ involvement in an account impacts opportunity creation vs. opportunity closed-won. Opportunities are more likely to be created when marketers & designers are involved, but they are more likely to close when CEOs & Directors get involved.

For InVision, interestingly enough, founders have a smaller impact on opportunity closed-won than they do on product conversion.

While a single lead may never surpass the activity threshold that correlated with sales readiness at InVision, scoring activity at the account level surfaced accounts that exceeded the account activity threshold. Both thresholds were defined by MadKudu & InVision using the same data sources.

The above slides are all from our HubSpot Inbound 2018 talk and are available here.

Measuring Scoring Model effectiveness

Looking at the results of experiments run with our customers and the data from Tunguz’s survey, it’s clear that activity scoring doesn’t work in a vacuum. Both our MQA model for InVision & our models for Segment require firmographic, technographic and intent data in combination with behavioral data in order to build a predictive model.

The impact that a model will have on sales efficiency & velocity depends on its ability to identify the X% of leads that represent Y% of outcomes. The power of this function increases as X tends to 0 and Y tends toward 100. “Outcomes” can represent opportunities created, opportunities won, pipeline created, or revenue, depending on the metric your sales strategy is optimizing for.

We similarly expect that the X% of leads will convert at a significantly higher rate than lower-quality leads. As seen in the graphic above, a very good lead may be 17x as likely to convert as a low-quality lead, which makes a strong case for sales teams prioritizing very good leads as defined by their predictive model – at least if they want to hit quota this quarter.
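
Measuring this is straightforward once you have historical scores and outcomes. A sketch:

```python
def capture_rate(scored_leads: list[tuple[float, int]], top_fraction: float) -> float:
    """What share of outcomes (Y%) do the top X% of leads by score account for?

    scored_leads: (model_score, outcome) pairs, outcome being 1 for a win.
    """
    ranked = sorted(scored_leads, key=lambda pair: pair[0], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    top_outcomes = sum(outcome for _, outcome in ranked[:cutoff])
    total_outcomes = sum(outcome for _, outcome in ranked)
    return top_outcomes / total_outcomes if total_outcomes else 0.0

# e.g. capture_rate(history, 0.12) returning 0.80 would match Geckoboard's
# "80% of conversions from 12% of signups" figure cited earlier.
```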

If you’re selling exclusively to enterprise leads, an assisted free trial operates a lot like a schedule a demo flow – you will score leads firmographically early on, evaluate the opportunity, and then assist them in onboarding to your product to trial it, scoring activity throughout the trial to evaluate likelihood to convert.

Most SaaS businesses don’t sell exclusively to enterprise or offer free trials only to high-ACV leads, which is why activity scoring becomes crucial. A lead that is a great fit for self-service is usually a bad fit for enterprise. Self-service leads convert quickly and consistently at a small-to-medium ACV, whereas enterprise leads have a small percentage chance of converting, but convert at a much higher ACV. Velocity sales requires a steady stream of quick conversions – a long sales cycle for low ACV is a loss – while enterprise sales can take months or years to close a single account while still being a success.

For customers with velocity & enterprise leads, MadKudu scores every lead against both models, which enables us to identify whether it’s a good fit for self-serve, enterprise, both or none (it’s almost never both).

Re-Get That Bread: Retarget qualified website traffic that just didn’t convert.

90% of your website traffic doesn’t convert, and there’s nothing worse than a missed opportunity. For B2B companies, retargeting is a no-brainer. It’s an easy way to make sure you’re always targeting an audience that has shown some intent to buy. However, the problem with retargeting anonymous website traffic broadly is that you don’t know who you are targeting or how qualified they are.

With such high volume, many SaaS companies bid low on retargeting across their entire website traffic, pushing their brand in front of visitors wherever they go. This spray-and-pray tactic means SaaS companies are only getting in front of traffic that other advertisers aren’t willing to pay more for. Do you think your competitors may have a more focused strategy, outbidding you on your best leads and leaving the rest to you?

Click-through rate is low because the quality of the audience is low. Conversion rate is low because most of your website traffic shouldn’t convert (candidates, low-quality leads, investors, analysts, perusers).

That is, of course, unless you only target leads that should convert in the first place. We already know MadKudu can handle qualifying anonymous traffic, so why not retarget it as well?

Re-Get That Bread: Identify, Qualify, Retarget

Our goal here is to focus our retargeting budget on the subsection of our website traffic that is worth the most to us. If we do that, we will be able to reallocate the budget we’re not spending on low-quality traffic to bidding more for our high-quality traffic.

We’ll need a few tools to Re-Get That Bread:

  • IP lookup: we’ll be using Clearbit Reveal for this.
  • Qualification: we’ll be using MadKudu for this.
  • Retargeting: we’ll be using Adroll for this.

As usual, we’ll be connecting this all through Segment.

Qualifying traffic has become pretty easy with the advent of IP Lookup APIs – the most popular being Clearbit Reveal. Feed Clearbit an IP address and it returns (among other things) the domain of the company or of the individual visiting your website. This is enough to score an account. We’ll be scoring with MadKudu, but you can also do it with your homegrown Lamb or Duck lead scoring model. We’ll send MadKudu the domain name provided by Clearbit, which will return a few important data points for this play:

  • Customer Fit segment: very good, good, medium, low
  • Predicted Spend: custom, based on your specific pricing plans, predicting which plan a lead is likely to buy
  • Topical Segmentation: custom, based on your target segments (e.g. for Algolia: ecommerce, media, SaaS)

With this data we’re able to feed AdRoll a custom audience of qualified traffic to target. This can be a bit tricky since AdRoll requires a static audience, but a quick script to update a static audience on a daily basis will get us around that hiccough.
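
That refresh script can be as small as a daily cron job. AdRoll’s real audience API is not shown here; the endpoint and payload below are placeholders for whatever static-audience update call your account exposes.

```python
import requests

ADROLL_TOKEN = "..."
AUDIENCE_ID = "..."

def refresh_retargeting_audience(qualified_domains: list[str]) -> None:
    """Daily job that overwrites a static audience with currently-qualified traffic.

    The endpoint below is a hypothetical placeholder, not AdRoll's actual API.
    """
    requests.put(
        f"https://api.example-adroll.com/v1/audiences/{AUDIENCE_ID}/members",  # hypothetical
        headers={"Authorization": f"Bearer {ADROLL_TOKEN}"},
        json={"domains": qualified_domains},
        timeout=30,
    ).raise_for_status()
```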

Based on predicted spend, we can even build separate audiences for our various plans, each with different budgets. If we add in Topical Segmentation, we can run targeted messaging to our various ICPs based on their needs at various price points. If we know the predicted value of the qualified traffic, we can calculate our maximum budget as a function of our acceptable CAC.

The Impact: +300% click-through rate

When Chris Rodriguez at Gliffy first began building this play, he was looking to get the click-through rate for his retargeting ads under control. When he saw it jump from the .7% industry average to 2-3% for qualified traffic, it became pretty clear that qualified traffic was worth the focus.

Bidding higher on a qualified audience is a no-brainer: we see it on ad networks that boast a qualified audience or a qualified system of manual segmentation. It only makes sense that we would apply the same logic to how we retarget our own audience: we want to spend more on the audience that matters, the ones that got away.