Use predictive analytics to reduce churn by 20% in 2 days – with 3rd-grade math

Most SaaS companies have 3 misconceptions about churn:

  1. They don’t realize how much churn is costing them.
  2. They think they know why customers churn.
  3. They think predicting churn with data is too hard.

If you’re not using predictive analytics to prevent churn, this hack will help reduce your churn by about 20%. It takes about 2 days of work spread over a few weeks, and you can do it in Microsoft Excel.

We used similar techniques to help Codeship retain 72% of their at-risk users.


Download the spreadsheet to follow the example below.

You need to predict churn with data

Your customers cancel for lots of different reasons. Projects get scrapped. Users get stuck and bail. The key user takes a sabbatical to breed champion goldfish.

Quite often you can intervene before this happens and prevent it – but the primary predictors of churn are not always obvious.

For instance, many SaaS marketers assume last_login_at > 30 days ago predicts churn. We almost always identify better predictors, such as changing patterns in user behavior.

Let me re-phrase this point a little more strongly:

If you’re not looking at data to predict churn, you are almost certainly missing the fastest, easiest way to increase your MRR.

Why this hack is effective

You don’t need a data scientist. Or developer time.

As long as you have access to metrics in Mixpanel, Intercom, or a similar tool, even junior members of your marketing team can do it.

Credit card companies invest massively in predicting churn because slight improvements generate millions of dollars. You’re not Capital One – you’re a SaaS company. You don’t need to know what “entropy” is to start predicting churn.

You don’t need statistics

Can you add? That’s the only math skill you need. There is one equation, but we’ve already put it into the spreadsheet for you.

If addition is too complex consider outsourcing to a 3rd-grader. They’ll work for peanuts (or at least cookies).

The results are immediately actionable

We’re going to start with the data you already have in your analytics or marketing automation platform – so you can use the results to send churn-prevention emails or generate alerts for your sales team.

Step-by-Step: find the best predictors of customer churn

Download the spreadsheet

Click here to download.

The examples are easier to understand if you spend a few minutes looking at the spreadsheet. I break down each step below.

PR Power! – our example company

I’m going to walk you through each step using examples from a fictitious SaaS startup called PR Power! that we introduced in a previous post.

PR Power! helps media managers in mid-sized businesses do better PR by generating targeted media lists. Customers pay $50-$5,000/month after a free trial. Marketing Mark, the CMO, is charged with reducing monthly churn from 5% to 4%.

Step 1 – Identify predictors of churn

Try to identify predictable reasons why customers cancel.

Mark’s team spent a few hours looking at the last 20 customers who canceled and identified a few predictors. He also interviewed the sales and customer success teams about these customers.

They came up with the following events that are likely to predict why a customer cancels an account with PR Power!:

Champion departs – Usually PR manager leaves the customer’s company.

Project canceled – Customer signed up for a specific PR campaign and then decides not to run the campaign.

No journalists – Customer can’t find a good journalist in PR Power! to cover a story.

Support fails – Customer contacts support a few times and the problem isn’t solved – usually indicated by support tickets open a long time.

Stale list – Customer’s media list is less useful because journalists no longer available or active.

Step 2 – Translate the churn predictors to data rules – or eliminate them

Mark’s team took these qualitative events and tried to identify existing data in Mixpanel that might predict them. Three were straightforward; two took a bit of investigating.

No journalists required identifying customers who had searched for journalists but didn’t add them to the media list.

Support fails was simply too hard – the support desk data on tickets isn’t in Mixpanel, so they decided to skip it.

Step 3 – Count the occurrences of each predictor

Mark put the predictors at the top of his spreadsheet and identified every customer who matched a data rule yesterday.

For instance, User 80374’s last_login_at > 30 days ago is TRUE, so he entered a 1 for Project canceled.

Step 4 – Track every customer who churns until you hit 100

Mark adds a “Canceled?” column to the spreadsheet. Each day he identifies every customer who cancels until 100 customers cancel. This takes 2 ½ weeks.

Step 5 – Count the matching events for each predictor

Now for the 3rd-grade math …

For each predictor, count every customer where the churn predictor is TRUE and the customer canceled.


Mark starts with the Project canceled rule and counts the following:

The number of times last_login_at > 30 days ago is TRUE and YES, the customer canceled.

For instance, customers 80374 and 89766 fit these criteria. He counts 22 instances.
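The counting in this step is a single pass over the spreadsheet rows. The rows below are made-up stand-ins for the real data – only the two customer IDs come from the example above:

```python
# Each row: (is the churn predictor TRUE for this customer?, did they cancel?)
# These rows are hypothetical stand-ins for the spreadsheet's data.
rows = [
    (True, True),    # e.g. customer 80374
    (True, True),    # e.g. customer 89766
    (True, False),
    (False, True),
    (False, False),
]

# Count every customer where the predictor is TRUE AND the customer canceled.
matches = sum(1 for predictor_true, canceled in rows if predictor_true and canceled)
# matches is 2 for this toy data; Mark counts 22 in the real sheet.
```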

Step 6 – Enter the results into the spreadsheet

Enter the total in the appropriate cell of the 3×3 matrix to calculate the Prediction Score (this is an implementation of the Phi coefficient).

Mark enters 22 and calculates a Prediction Score for Project canceled of 0.009.
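If you’d rather see the spreadsheet’s equation spelled out: the Prediction Score is the Phi coefficient of a 2×2 contingency table (predictor TRUE/FALSE × canceled YES/NO) – the extra row and column in the 3×3 block are presumably the marginal totals. A minimal sketch, using the 22 from the example and three made-up cells:

```python
from math import sqrt

def prediction_score(true_canceled, true_retained, false_canceled, false_retained):
    """Prediction Score = Phi coefficient of the 2x2 table
    (predictor TRUE/FALSE) x (canceled YES/NO)."""
    a, b, c, d = true_canceled, true_retained, false_canceled, false_retained
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# The 22 TRUE-and-canceled customers come from the post; the other
# three cells below are hypothetical placeholders.
score = prediction_score(22, 350, 78, 1550)
```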

Step 7 – Identify the biggest predictors of churn

Rules with a higher Prediction Score are better predictors of churn.

Mark compares the Prediction Score for each rule and sees an obvious pattern.


Two observations immediately jump out at Mark:

First, last_login_at > 30 days ago doesn’t tell him much about Project canceled. Since PR Power! has long-term customers who use the product periodically, this isn’t surprising.

Second, No journalists is the clear winner. In hindsight, this makes sense – customers who try to find a journalist and can’t are getting no value from the product.

Step 8 – Take steps to prevent churn

Mark creates 2 rules in Mixpanel for the No journalists predictor.

Small accounts

When a customer has total_searches > 5 within the last 30 days AND media_list_updated_at > 30 days ago, Mark creates an auto-message inviting the customer to watch a webinar on “How to search for a journalist”.

Large accounts

When a customer has total_searches > 5 within the last 30 days AND media_list_updated_at > 30 days ago, Mark creates an alert for the sales team to notify them about a customer at risk of churning.
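Both rules share the same trigger; only the action differs by account size. A hedged sketch – the field names come from the post, but the $500 small/large cutoff is a made-up example, not PR Power!’s actual threshold:

```python
from datetime import datetime, timedelta

def no_journalists_at_risk(total_searches_last_30d, media_list_updated_at, now=None):
    """The shared trigger: lots of recent searches but a media list
    that hasn't been touched in over 30 days."""
    now = now or datetime.now()
    return (total_searches_last_30d > 5
            and media_list_updated_at < now - timedelta(days=30))

def intervention(account_mrr, small_account_cutoff=500):
    """Route small accounts to the webinar auto-message and large accounts
    to a sales alert. The $500 cutoff is an illustrative assumption."""
    return "webinar_invite" if account_mrr < small_account_cutoff else "sales_alert"
```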

An easier way – ask us to do this for you

You don’t even need 3rd-grade math.

Just take a free trial of MadKudu and let us run these calculations for you.

Cancel anytime if you don’t like it – keep whatever you learn and all the money you make from reducing your churn.


Want to learn more? Sign up for our new course.


Photo credit: Rodger Evans

How I teach SaaS marketers to accelerate deals

Forbes just released a study confirming what we’re hearing from SaaS CMOs:

78% [of B2B marketers] see B2B marketing’s role expanding from demand generation to deal acceleration.

In SaaS companies “deal acceleration” means arming the inside sales teams with better information about customers:

  • Improving Marketing Qualified Lead (MQL) quality
  • Predicting when customers are about to churn
  • Providing sales with real-time information about what customers are doing in the product

I’m covering all of these topics in our new course. In this post I’ll tackle MQLs.

Is your SaaS marketing team ready for this shift?

Do you measure the quality of Marketing Qualified Leads (MQLs)?

Don’t worry, you’re not alone.

Most SaaS CMOs don’t measure and track the effectiveness of their MQLs. In this post we’ll show you how to use a single metric – the MQL Performance Score – to track MQL quality and grow your SaaS revenue.

Why you should care about MQL “quality”

When we interview our SaaS customers about their marketing and sales workflow we usually find sophisticated marketing automation systems and very basic MQL generation systems.

For instance, a SaaS marketing team may “just tag every lead in Salesforce as ‘marketing qualified’ if the trial customer finishes signing up”. We usually discover the following problems:

CMOs have no visibility into how sales uses MQLs

The CMOs don’t know if sales treats MQLs differently or even uses them at all. Some sales reps don’t even know what “marketing qualified” means – much less what to do about it.

Sales believes marketing leads “don’t convert”

Sales may use MQLs in ways marketing never expected.

For instance, a rep may tag every MQL as a “Sales Accepted Lead” under an incorrect assumption that someone in marketing already reviewed them. The rep engages many leads who never buy and concludes MQLs “don’t convert”.

CMOs have no feedback loop for improving sales support

Should marketing send sales more MQLs? Fewer? Should marketing supplement Salesforce with key actions the customer took in the product? Did our latest update to the MQL scoring rules improve or reduce MQL quality?

We suggest using a single metric – the MQL Performance Score – to track MQL quality.

Your MQL Performance Score

Every day you run a set of business rules that identifies “Marketing Qualified” leads in your CRM (e.g. Nutshell, Salesforce, or Pipedrive). Your sales team identifies those most likely to buy and closes them.

Your CRM also contains many other leads – what we call “non-MQL” leads – from trial customers, third-party sources, webinars, “contact” forms, etc.

A percentage of MQLs converts to paying customers, and a percentage of non-MQLs converts as well.

In high-volume SaaS companies we expect (hope?) that MQLs convert at a higher percentage – if not, something is probably wrong.

The easiest way to measure MQL performance is to calculate your MQL Performance Score:

MQL Performance Score = (MQL conversion rate) ÷ (non-MQL conversion rate)

Here’s how you do it.

Step-by-Step: How to calculate your MQL Performance Score

If you can use Excel and know 5th-grade math, you have all of the tools you need. The practical challenge is getting and cleaning up the data – especially since the data lives in your CRM rather than in your marketing stack.

Download a copy of the spreadsheet used in this post.

Step 1 – Break your leads into cohorts

Breaking your data into cohorts helps identify trends and reduces the impact of data anomalies. We suggest starting with monthly cohorts – that is, collect all leads who signed up in a given month and track their progress through the sales funnel over the next several months.

For each month gather the total number of MQL and non-MQL leads. Set up your spreadsheet as follows:


In October, 17,000 new leads were added to Salesforce. We broke them into 2,000 MQL leads and 15,000 non-MQL leads, which we entered into Column C.

Step 2 – Count the leads in each sales workflow step

Create a column for each step in your sales workflow and plug in the number of leads.

Step 2

(click the image above to see a bigger one or download a copy)

Since your workflow is probably different, I’ll walk through each column for October 2015 for the MQLs.

In October 2015, 2,000 MQLs were added to the CRM. Sales accepted 440 of these leads as SALs (Column E). Sales contacted 396 of these leads (Column H) and 71 of them responded (Column K). Sales qualified 66 as likely buyers (SQLs, Column N) and 46 bought the product (Column Q).

Step 3 – Calculate the percentage that converts in each step

Calculate the conversion rates for each column you created in Step 2.

Step 3

In October, 22% (Column F) of MQLs were accepted by Sales. We calculated this by dividing the SAL count (Column E) by new MQLs (Column C).

Calculate this conversion percentage for Columns I, L, and O.
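Using October’s MQL numbers from Step 2, the per-step rates can be sketched like this. Every rate divides by the cohort’s new leads, the same way Column F is calculated:

```python
# October's MQL cohort, read off the spreadsheet in Step 2.
october_mqls = {
    "new": 2000,
    "sales_accepted": 440,
    "contacted": 396,
    "responded": 71,
    "sales_qualified": 66,
    "won": 46,
}

def step_conversion_rates(cohort):
    """Each step's rate is its count divided by the cohort's new leads,
    mirroring Column F = Column E / Column C."""
    new = cohort["new"]
    return {step: count / new for step, count in cohort.items() if step != "new"}

rates = step_conversion_rates(october_mqls)
# rates["sales_accepted"] is 0.22 -- the 22% in Column F.
```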

Step 4 – Calculate the MQL and non-MQL conversion percentage

Calculate the percentage of MQLs and non-MQLs that convert into paying customers.

Step 4

(Columns E-P are hidden)

In October 2.3% (Column R) of MQLs converted to paying customers (Column Q/Column C).

Step 5 – Calculate the MQL Performance Score for each cohort

Now calculate how much better MQLs performed relative to non-MQLs for each cohort.

Step 5

In October an MQL was 3.8 times (Column T) as likely to convert as a non-MQL (2.3% / 0.6%).
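The whole score boils down to one division. In the sketch below, the 46 MQL wins and 2,000 MQLs come from the example; the 90 non-MQL wins are inferred from the stated 0.6% rate on 15,000 non-MQLs:

```python
def mql_performance_score(mql_won, mql_new, non_mql_won, non_mql_new):
    """How many times as likely an MQL is to convert as a non-MQL."""
    return (mql_won / mql_new) / (non_mql_won / non_mql_new)

# October: 46 of 2,000 MQLs converted (2.3%). The 90 non-MQL wins are
# an inference from the post's 0.6% rate on 15,000 non-MQLs.
score = mql_performance_score(46, 2000, 90, 15000)
# score rounds to 3.8, matching Column T.
```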

How to use your MQL Performance Score

Getting insight into how sales uses MQLs

Looking at our complete spreadsheet above already raises some questions.


What happened in December? Did the sales and marketing teams drink too much eggnog at the holiday party? Marketing only generated 400 MQLs and sales only accepted 300 non-MQLs. This looks suspiciously like a data problem.

Did November provide an example of how we can grow faster? It looks like the sales team paid more attention to MQLs in November. A higher percentage were accepted, contacted, and converted. Did we run a unique campaign? Did a particular sales rep choose to focus on MQLs? Further investigation is needed.

Measuring the impact of changes

Tracking MQL Performance Score allows you to systematically test and measure changes to your campaigns, products, and scoring rules.

Benchmarking your SaaS marketing team against competitors

Unfortunately we don’t yet have enough data to give you a good benchmark – obviously there are tons of variables. An expensive, enterprise SaaS product will have a lower MQL Performance Score than one that sells for $10/month.

For our high-volume SaaS customers we are seeing MQL Performance Scores of 3-6.

And … last but definitely not least … evaluating how much more $$$$$ MadKudu is making for you

Seriously – just sign up for a free trial of MadKudu – we’ll calculate your MQL Performance Score and show you how to improve it.

You have absolutely nothing to lose. You won’t have to pay us a dime until we prove how much more we can grow your SaaS revenue.


Want to learn more? Sign up for our new course.

How SaaS CMOs use customer personas to generate better sales leads

Here’s a quiz.

The top challenge facing SaaS CMOs is …

A. Improving marketing automation.
B. Finding more leads.
C. Generating more consistent Marketing Qualified Leads (MQLs) for inside sales.

If you read marketing blogs you probably think ‘A‘ is correct. But if you work with SaaS marketing teams you’ll quickly discover that for most of them ‘C‘ is the biggest challenge. The real work starts, rather than ends, once you’ve generated leads.

Why? Just follow the money. Most SaaS companies are trying to close bigger accounts. Marketing automation is great for incrementally improving revenue, but winning major accounts still takes sales. Under pressure to close bigger deals, the sales teams are demanding more consistent, higher-quality MQLs from the CMO.

So how can CMOs generate better MQLs? Well, we answer that exhaustively in our new course.

In this post we cover one part of the answer – how to improve the feedback and communication between sales and marketing teams using Customer Personas. We’ll walk you through a step-by-step example using a fictitious company called PR Power!

Meet PR Power!

PR Power! helps media managers in mid-sized businesses do better PR by generating targeted media lists. Customers pay $50-$5,000/month after a free trial.

CMO Marketing Mark has been building the company’s marketing funnel and automation for 2 years. VP of Sales Selling Sandra just started building the inside sales team and asked Marketing Mark to post qualified leads into Salesforce.

Marketing begins qualifying leads for sales

Marketing Mark and Selling Sandra came up with a workflow which can be simplified as:

Marketing Mark agreed to identify the most promising trial customers (MQLs) and to pass them along to Selling Sandra’s inside sales team. Sales agreed to review the leads and accept (SALs) those most likely to buy.

Marketing Mark’s team spent months developing the business logic to support this process. They added simple scoring rules such as “disqualify any students who sign up with a .edu email address”. After a lot of late nights they got the MQL generation process going.

It was a wonderful plan … until …

For the first few months everything worked as planned. Selling Sandra’s team started engaging the leads and paid conversions grew by 30%. Yipeeeee!

Then reality hit…

…the CEO decided to focus the company on bigger enterprise deals…
……the product changed to support larger customers…
………Marketing Mark’s team struggled to keep scoring rules updated…
…………and Selling Sandra (under pressure) started generating her own leads.

One day Marketing Mark realizes he’s investing a ton of resources generating MQLs that sales teams don’t even use. He doesn’t know why.

The Meeting: “Why isn’t sales using the leads from marketing??”

Marketing Mark calls a meeting with sales to discuss.

Marketing Mark: “I know inside sales is under a lot of pressure to grow revenue. We want to do our part. Last month we sent you 300 MQLs and you only accepted 3 as SALs. Why?”

Selling Sandra: “Wow, 3? That’s 3 more than I expected. Your leads suck and I don’t want my team to waste time calling them.”

Marketing Mark: “Ok … I need a little bit more feedback than ‘sucks’. Believe it or not, we don’t have a ‘suckiness’ customer attribute in our database.”

Selling Sandra: “Last month I called one of the higher scored MQLs you sent me. I spent 3 hours playing phone tag with some guy who turned out to be a student doing a class project. That’s what I mean by ‘sucks’.”

Marketing Mark thanks everyone for their time and promises to explore the issue further.

If only we had a “suckiness” customer attribute…

Marketing retraced Selling Sandra’s ‘student’ lead and discovered the trial customer wasn’t using a .edu email address – so the lead wasn’t scored as a student and became an MQL.

Unfortunately the marketing team feels like it’s being blamed. As a joke, someone writes on the whiteboard:


Since there is no “suckiness” attribute the team has to figure out ways to identify the attributes and behaviors of poor leads.

This is really hard without good collaboration & feedback from sales. This situation is so common because marketing and sales people think about customers differently.

Marketing thinks data. Sales thinks people.

SaaS marketers think in terms of events, attributes, cohorts. Sales teams think in terms of people. They think about customers differently and use different language.

Unfortunately this difference can cause problems like those at PR Power!: sales has no tools for providing feedback on MQL quality in a way that marketing can translate into data and business logic.

Customer personas are one tool for solving this problem.

Customer personas create a common language for sales and marketing

Personas describe a customer in a way that can be mapped to attributes and behaviors. They are simple to make, easy to understand, and easily changed.

Here is the Customer Persona template we created for use in SaaS sales funnels:


Step-by-Step Example of using personas to generate better MQLs

1. Identify leads sales doesn’t accept and group them into common archetypes

Why is Selling Sandra so unhappy with the MQLs? Because top-performing sales people are busy and want fewer, high-quality leads. The fastest way to improve MQL consistency is to eliminate poor leads.

Marketing Mark interviewed sales reps and learned that sales didn’t want to waste time talking to students, startups, or freelancers, since these customers – although very active – were unlikely to become larger accounts.

2. Generate personas for each group of leads.

Mark created customer personas for students, startups, and freelancers based on customer attributes and behaviors. Using our template above, here is the “Student Sammy” customer persona:



  • Keep them simple.
  • Silly, descriptive names are easier to remember. e.g. “Student Sammy”, “Startup Steve”, “Freelancer Freddie”.
  • Don’t go for perfect.

3. Update MQL generation rules based on the personas

Instead of using a simplistic business rule like “disqualify leads with .edu addresses”, Mark refines his business logic based on all of Student Sammy’s behaviors and attributes.
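As an illustration of what refining beyond a single rule might look like, a persona filter can require several signals to agree. The field names and thresholds below are hypothetical, not PR Power!’s actual logic:

```python
def looks_like_student_sammy(lead):
    """Illustrative multi-signal 'Student Sammy' filter. Field names
    and thresholds are hypothetical, not PR Power!'s real rules."""
    signals = (
        lead.get("email", "").endswith(".edu"),
        lead.get("company_size", 9999) <= 1,   # solo "company" or none
        lead.get("job_title", "").lower() in {"student", "intern"},
    )
    # Require two or more signals to agree, so a student on a personal
    # email address (the lead that burned Selling Sandra) is still caught.
    return sum(signals) >= 2
```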

4. Ask sales reps for immediate feedback using the personas

Marketing Mark asks all of the reps to “let me know if we send you any Student Sammys”. Now Mark is in a position to get contextual feedback.


  • Inside sales reps often have to be prompted for feedback.
  • Post the customer personas on a wall where sales reps can see them. Funny pictures help.

5. Refine and update the personas over time

Fast-growing SaaS teams update their personas every 6-10 weeks. Often they are too general and need to be sub-divided.

When sales reps wanted to contact graduate students, Mark split Student Sammy into “Undergrad Ulf” and “MBA Mickey”.

Want to learn more? Let’s talk!

Personas are just the 1st step in creating high-quality MQLs. The CMO’s marketing team will need to analyze data and develop predictive models for attributes and behaviors that map onto the customer personas.

Lucky for you … we’re here to help. Sign up here and you’ll be on your way to turning the VP of Sales into your best friend.


Photo credit: Gabriel Cabral

Who owns SaaS trial conversions?


Letting customers try your product before buying is becoming a standard practice.

Free trials are now more and more common. For example, we analyzed a sample of 41 Techstars SaaS companies and found that 77% of those companies offered a free trial.

Well known B2B SaaS companies like Salesforce, Zendesk, LinkedIn, and HubSpot work with this model and are defining customer expectations in the B2B world.

Free trials are popular for a reason. They are a great sales tool. They allow you to “soft sell”. They make the ask smaller. They reduce the perceived risk in the purchase decision. They are similar to the free return policy now offered by almost every retail store.

Many companies do quite poorly at optimizing free trial conversions.

We’ve always been surprised by the amount of effort put into adding more leads to the top of the funnel in comparison to how much is done to convert those leads.

Many of the companies we work with have trial conversion rates ranging from 1 to 15%. In other words, 85 to 99% of acquired leads go down the drain. Even if you assume that 70% of those signups are not potential customers, that still leaves lots of room for improvement.

Increasing a trial conversion rate from 3% to 4% means reducing customer acquisition cost by 25%!
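The arithmetic behind that claim: with a fixed cost per lead, acquisition cost per customer scales with the inverse of the conversion rate, so going from 3% to 4% cuts CAC by a quarter:

```python
def cac_reduction(conversion_before, conversion_after):
    """With cost per lead fixed, CAC per customer is proportional to
    1 / conversion rate, so the reduction is 1 - before/after."""
    return 1 - conversion_before / conversion_after

# Going from a 3% to a 4% trial conversion rate cuts CAC by 25%.
reduction = cac_reduction(0.03, 0.04)
```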

The free trial stage is the most complex of the customer journey.

Most stages of the customer journey have one clear owner. For example, marketing is responsible for bringing traffic to the website and converting this traffic into leads.

The free trial stage is more complex. It involves almost every department:

  • Marketing: set up email drip campaigns to guide and convert trial users, experiment with discounts and pricing.
  • Customer Success: onboard customers and coach larger accounts to become successful.
  • Sales: explain the value prop, give demos, help customers pick the right plan, negotiate contracts.
  • Product: identify friction in the product, improve product user experience, add missing features.

The lack of a dedicated owner results in sub-optimal trial conversion rate

Are you familiar with the business fable “the chicken and the pig“?

The trial stage often has lots of chickens but it rarely has an assigned pig.

Everyone has a critical role to play there but the contributions are usually tactical and of limited impact.

The best companies I have worked with assign a strong owner dedicated to optimizing this stage the same way, let’s say, a website is optimized: gather data, make hypotheses, test, learn, implement, iterate.

“Okay, okay… who should own trial conversions then?”

I have seen different configurations. Most of them depend on the type of SaaS business: high-volume versus high-touch.

In high volume SaaS companies, the CMO usually owns trial conversions. Or more accurately, marketing owns the conversion of self-service leads (usually defined as signups from companies likely to buy a small plan) while sales owns the conversion of enterprise leads (signups from large companies). Marketing works closely with product to test different discounts, pricing, and plans. And they work with customer success to implement effective email drip campaigns.

In high-touch SaaS companies, the sales team assisted by the customer success team tends to own trial conversions.

Having a CRO (Chief Revenue Officer) is a new and growing trend, particularly in Silicon Valley. The CRO oversees and “optimizes the entire customer experience with the aim of increasing revenue”.

Let me ask you the question now. In your SaaS organization, who owns trial conversions? How is it working? Share your feedback and thoughts on Twitter or by email!