Account-Based Engagement and the Fallacy of Job Titles

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations, such as Account-Based Engagement. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

As we saw recently with the Sales SLA, the path to alignment often starts & ends with clear definitions of metrics. The leads marketing hands to sales need to have the same definition & measurement for success, which is where actionable lead scoring plays a key role in establishing lasting alignment.

If we step back from Sales & Marketing and look at aligning each department to business objectives, we can see that metric disjunction can result in each individual team being successful while ultimately failing to create a relevant customer journey at scale.

The fallacy of job titles

One area where we often observe this is when we run funnel analysis by customer fit and look at job titles as predictors of activation and conversion. On self-serve tools such as API-based products, we often see that someone with a developer title is more likely to activate but very unlikely to convert (that is, to hand over the credit card), whereas someone with a CEO/owner title is more likely to convert but less likely to activate.

One analysis we recently ran for a customer demonstrated this perfectly:

How job title affects conversion | Account-Based Engagement

  • Developers convert 60% less than the average user
  • Founders, CEOs & marketing convert 70-80% more than the average user.

When we look at conversion & activation side-by-side for this same customer, the numbers speak for themselves:

Conversion vs. Activation | Account-Based Engagement

  • Founders/CEOs don’t use the software that much but end up converting at a high rate
  • Product & Project managers have a higher activation but lower conversion rate
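
Here’s a minimal sketch of how this kind of cut can be run, assuming a flat leads export with hypothetical columns "job_title_group", "activated" and "converted":

```python
# A minimal sketch of the funnel analysis above; the CSV and column names are illustrative.
import pandas as pd

leads = pd.read_csv("leads.csv")  # hypothetical export from your CRM / analytics tool

overall_conversion = leads["converted"].mean()
by_title = leads.groupby("job_title_group").agg(
    activation_rate=("activated", "mean"),
    conversion_rate=("converted", "mean"),
    leads=("converted", "size"),
)
# Lift vs. the average user, e.g. -0.6 reads as "converts 60% less than the average user"
by_title["conversion_lift"] = by_title["conversion_rate"] / overall_conversion - 1
print(by_title.sort_values("conversion_lift", ascending=False))
```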

Product teams are historically motivated by increasing activation by building an increasingly engaging product; however, a developer is unlikely to respond to marketing’s nurturing emails or jump on a first sales call no matter how active they are on the product.

Likewise, with more sales-driven products like enterprise software, SDRs are often singularly focused on the number of meetings they can generate for their AEs; however, low-level team members are significantly more likely to jump on a phone call and significantly less likely to convert than their director counterparts.

In both of these instances, we see that product & sales development are able to optimize for their metric without accomplishing the core business objective of creating a great customer journey.

How Account-Based Engagement changes the rules

What this comes back to is account-based engagement, a nascent term in the marketing space stemming from the principle of account-based marketing but extending it across the entire customer journey and to all customer-facing teams. Where account-based marketing encourages running campaigns to generate interest not at the individual lead level but at the account level – especially important when you have multiple stakeholders in the decision-making process – account-based engagement extends that to all teams, meaning that:

  • Product teams should seek not only to make as many active users as possible, but to create active accounts: building features that encourage getting other stakeholders involved or making it easy for your hero to evangelize your product value to other stakeholders.
  • Marketing teams should not seek to generate marketing qualified leads but marketing qualified accounts, including nurturing existing accounts in order to get other stakeholders involved so as to set sales up for success
  • SDRs should seek to generate meetings at the account level, not at the lead level, and shouldn’t be working on accounts where the necessary stakeholders are not already involved.

Account-Based Engagement | Identifying hidden opportunities

We’ve been recently working with two of our bigger customers who have a prosumer user base to identify marketing-qualified accounts that aren’t getting attention. We do this by looking not only at customer-fit at the account level – does the account look like the type of accounts that typically convert when sales engages – but also at behavioral-fit: are they engaging with the product the way paying customers typically do?

Sales reps who are qualifying leads as soon as the account is created aren’t going to be able to sift through the hundreds of warm accounts to identify which accounts have engaged properly (and been properly engaged) to be sales-ready; however, this is core to Account-Based Engagement. Just as our Sales SLA gives a common metric for marketing & sales to work towards, so Product, Customer Success, Sales & Marketing all need to share common qualification criteria for an account in order to be aligned on how best to achieve business goals.

Remember: In B2B, you’re not selling to users, you’re selling to Accounts

The goal is not to reduce all teams to a single metric like revenue-generated, but rather to help reduce the natural tendency to game a metric by linking a common thread between the metrics that we use to measure success. That thread is Accounts.

It is all too easy to lose track of the fact that selling B2B software means that a company is going to buy your software, not a person. There are users, decision-makers, stakeholders and other advisors in the buying process, but at the end of the day a company is going to make a decision about whether to pay another company for their solutions. In this respect, every team should be focused on how to acquire, activate, convert & retain accounts, because at the end of the day it is not a user that will churn but an account.

 

 

Sales SLA: how accountability fosters sales & marketing alignment

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

I first heard about a Sales SLA in my first week after joining MadKudu. I was familiar with a Service Level Agreement (SLA) – a commitment from the engineering team around reliability with varying repercussions if we violated the SLA – but I had never interacted with a Sales SLA, despite being in marketing.

When one of our customers is having trouble with hitting revenue goals, the Sales SLA is almost always where we start, so let’s start there.

Sales SLA: A contract between Sales & Marketing

A Sales SLA is an agreement between marketing & sales whereby:

Marketing commits to generate N Very Qualified Leads per quarter, and

Sales commits to reach out to 99% of those leads within H hours, and to contact them at least T times in the first D days

Most marketing teams have a quarterly lead generation goal. A Sales SLA doesn’t measure MQLs or SQLs – it measures Very Qualified Leads: MQLs with the potential to become customers. Marketing agrees to create enough expected revenue, and Sales agrees to convert it into the revenue target.

“The only people who create value out of nothing is Marketing. The role of sales is to keep the value of those leads constant until they close.”

Marketing not only needs to generate increasing amounts of value but to be able to measure its potential to become revenue.

Sales needs to reach out quickly and to continue to connect with that lead enough to feel like everything possible was tried. A typical adage is “8 times in 15 days”, but again, this varies for each customer journey.

Each variable of a Sales SLA comes with its own questions: what makes a lead very qualified? How many touch points and how quickly should a lead be reached out to? Should it vary based on lead source?

“Do we need a Lead Score?”

The Sales SLA requires scoring each lead as they sign up. Many early-stage SaaS companies wonder how they are supposed to have a Sales SLA from day one without having a lead score.

Let’s put it out there: everyone has a lead score.

Filtering spam at signup is scoring. Escalating Fortune 100 companies at signup is scoring. While simple, it allows you to begin defining lead quality by answering “who do you want to ignore and who do you want to talk to?”

Since everyone has a lead score, everyone therefore should have a Sales SLA. The earliest iteration can be simple: “If someone signs up through a demo form, you need to follow up faster than if they sign up for a trial.” Putting something simple in place is better than nothing at all.

Implementing a Sales SLA

The tactical owner of a Sales SLA will almost always be Sales Operations, because they are ultimately the ones managing SDR workflows today. Marketing tends to ask for a Sales SLA. It ends the cycle of sales bemoaning lead quality and marketing bemoaning sales conversion rates. The Sales SLA will move that existential, emotional debate to a practical, data-driven report.

In order to maintain a Sales SLA properly, you’ll need to be able to track all outbound communication inside your CRM. If you’re using third-party emailing tools, every email you send out needs to be tied to a lead as an activity. Otherwise you’ll get false positives or end up adjusting your Sales SLA based on inaccurate activity metrics.
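
As an illustration, here’s a minimal sketch of an SLA compliance report built from that CRM activity data; the file names, column names and SLA terms below are assumptions, not a prescribed setup:

```python
# A sketch of an SLA compliance check, assuming every outbound touch is logged in the CRM
# and exported with hypothetical columns: lead_id, created_at (lead), activity_at (one row per touch).
import pandas as pd

H_HOURS, MIN_TOUCHES, D_DAYS = 4, 8, 15  # example SLA terms: first touch < 4h, 8 touches in 15 days

leads = pd.read_csv("leads.csv", parse_dates=["created_at"])
activities = pd.read_csv("activities.csv", parse_dates=["activity_at"])

merged = activities.merge(leads, on="lead_id")
merged["hours_to_touch"] = (merged["activity_at"] - merged["created_at"]).dt.total_seconds() / 3600

per_lead = merged.groupby("lead_id").agg(
    first_touch_hours=("hours_to_touch", "min"),
    touches_in_window=("hours_to_touch", lambda h: (h <= D_DAYS * 24).sum()),
)
per_lead["sla_met"] = (per_lead["first_touch_hours"] <= H_HOURS) & (per_lead["touches_in_window"] >= MIN_TOUCHES)

# Leads with no logged activity at all are breaches too
untouched = leads[~leads["lead_id"].isin(per_lead.index)]
print(f"SLA met for {per_lead['sla_met'].mean():.0%} of touched leads; {len(untouched)} leads never touched")
```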

Contract & Education

A Sales SLA doesn’t have to be written; however, in practice, a written agreement can be useful for onboarding new SDRs. Every new SDR should know what their team expects of them from day one. And every SDR should know what happens if they don’t respect it.

When the Sales SLA is broken, some organizations choose to put leads back into round robin. Others send it to a marketing nurturing funnel, or escalate it to a manager. How you implement the Sales SLA is up to you, as long as you’re tracking the metrics necessary to uphold it.

Once your Sales SLA is in place, much like with an infrastructure monitoring tool, you should be able to detect outlier scenarios more quickly. SDRs may be on vacation or no longer with the company and still get leads routed to them. Certain campaign leads may get bulk-routed to an old admin account. Or new team members may get routed leads before they’ve learned about the Sales SLA. None of these problems are anyone’s “fault,” but they need to be noticed & dealt with quickly.

Procrastination in Hyper-Growth

Sales SLAs can look daunting on paper, especially if you’re still in the early days of building your sales organization. At its core, a Sales SLA defines the handoff between marketing & sales. At MadKudu, for example, the handoff happens at signup today. Sales handles everything after lead generation, because we don’t yet have a need to automate that part of the funnel. We have a number of indicators (company size, technologies used, etc.) that we know correspond closely to someone needing MadKudu. This allows us to be pretty explicit about what makes a lead Very Qualified.

“People don’t put SLAs in place because they want to avoid having tough conversations”

Creating a Sales SLA is going to shine a spotlight on all the cracks in your sales funnel, especially when you’ve been dealing with hyper-growth recruiting. If leads aren’t getting followed up on, you’re going to have to look at what the cause is. Are you understaffed? Are you not scoring/routing/prioritizing properly? Or are your sales reps not reacting quickly enough?

When a Sales SLA is breached, it’s a symptom of a bigger problem, and usually no single person is at fault. Without a Sales SLA, it’s easy to overlook one of your sales reps not following up, or low-quality leads getting faster follow-up than high-quality leads. 

Start the discussion around Sales SLAs early and you’ll address problems that won’t go away unless you shed light on them.

How MadKudu makes Salesforce Einstein better

…Or why Salesforce Einstein won’t be the next IBM Watson.

Is the AI hype starting to wither? I believe so, yes.
The reality of the operational world is slowly but steadily catching up with the idyllic marketing fantasy. The report Jefferies put together challenging IBM Watson shows the alarm bells are ringing. The debacle of the MD Anderson implementation goes to show how unrealistic marketing promises can be, and how dreadful the downfall is when they go unmet.

With that said, not all is lost as we keep learning from our past mistakes. Being part of the Salesforce Einstein focused incubator, we are witnessing first-hand how the CRM giant is looking to succeed where Watson and others are struggling. Hopefully these insights can help others rethink their go-to-market strategy, in an era of unkept commitments.

Salesforce, a quick refresher

A few weeks ago, I was being interviewed for an internal Salesforce video. The question was “how has the Salesforce ecosystem helped your startup?”. To contextualize my thoughts, it’s important to know that while Salesforce is one of our main integrations, we consider it an execution platform among others (Segment, Marketo, Intercom, Eloqua…). I’ve always admired Salesforce for its “platform” business model. Being part of the Salesforce ecosystem facilitated our GTM. It gave MadKudu access to a large pool of educated prospects.

However I believe the major value add for startups is the focus induced by working with Salesforce customers. Since Salesforce is a great execution platform there are a plethora of applications available addressing specific needs. This means, as a startup, you can focus on a clearly defined and well delimited value proposition. You can rely on other solutions to solve for peripheral needs. As David Cohen reminded us during our first week at Techstars, “startups don’t starve, they drown”. Salesforce has helped us stay afloat and navigate its large customer base.

What is Salesforce Einstein?

I’m personally very excited about Salesforce Einstein. For the past 5 years, I’ve seen Machine Learning be further commoditized by products such as Microsoft Azure, Prediction.io… We’ve had many investors ask us what our moat was given this rapid democratization of ML capabilities and our answer has been the same all along. In B2B Sales/Marketing software, pure Machine Learning should not be considered a competitive advantage mainly because there are too few data sets available that require non-generic algorithms. The true moat doesn’t reside in the algorithms but rather in all the aspects surrounding them: feature generation, technical operationalization, prediction serving, latency optimization, business operationalization… The last one being the hardest yet the most valuable (hence the one we are tackling at MadKudu…).
Salesforce Einstein is the embodiment of this shift: now that anyone can run ML models from within their CRM, innovation will happen in those surrounding areas.

We’ve been here before

Just a reminder, this is not a new thing. We’ve been through this not so long ago.
Remember the days when “Big Data” was still making most of the headlines on Techcrunch? Oh how those were simpler times…


Big Data vs Artificial Intelligence search trends over the past 5 years

There were some major misconceptions as to what truly defined Big Data, especially within the context of the Enterprise. The media primarily focused on our favorite behemoths – Google, Facebook, Twitter – and their scaling troubles. Big Data became synonymous with petabytes and, more generally, unfathomably large volumes of data. However, scholars defined a classification that qualified data as “big” for 3 reasons:
– volume: massive amounts of data that required distributed systems from storage to processing
– velocity: quickly changing data sets such as product browsing. This meant offline/batch processing needed an alternative
– variety: data originating from disparate sources meant complex ERDs had to be maintained

In the Enterprise, volume was rarely the primary struggle. Velocity posed a few issues to large retailers and companies like RichRelevance nailed the execution of their solution. But the main and most challenging data issue faced was with the variety of data.

What will make Salesforce Einstein succeed

Einstein will enable startups to provide value to the Enterprise by focusing on the challenges of:
– feeding the right data to the platform
– defining a business playbook of ways to generate $$ out of model predictions

We’ll keep the second point for a later blog post, but to illustrate the first point with DATA, I put together an experiment. I took the leads-and-opportunities dataset of one of our customers.
The goal was to evaluate different ways of building a lead scoring model. The objective was to identify patterns within the leads that indicated a high likelihood of converting to an opportunity. This is a B2B SaaS company selling to other B2B companies with a $30k ACV.
I ran an out-of-the-box logistic regression on top of the usual suspects: company size, industry, geography and Alexa rank. For good measure we had a fancy tech count feature which looked at the number of technologies that could be found on the lead’s website. With about 500 opportunities to work with, there was a clear worry about overfitting with more features. This is especially true since we had to dummy-encode the categorical variables.
Here’s how the regression performed on the training data (70% of the dataset) vs. the test dataset (the remaining 30%, split so that a company used for training never appears in the test set – see, we did not fool around with this test):
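
For reference, here’s roughly how such a baseline can be set up; the file, column and group names are hypothetical, and the company-level split uses scikit-learn’s GroupShuffleSplit:

```python
# A sketch of the baseline experiment: logistic regression on dummy-encoded firmographics.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("leads.csv")  # hypothetical export: one row per lead, converted_to_opp as the label
X = pd.get_dummies(
    df[["company_size", "industry", "geography", "alexa_rank", "tech_count"]],
    columns=["company_size", "industry", "geography"],  # dummy-encode the categorical variables
)
y = df["converted_to_opp"]

# 70/30 split grouped by company so no company appears in both train and test
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=df["company_domain"]))

model = LogisticRegression(max_iter=1000).fit(X.iloc[train_idx], y.iloc[train_idx])
print("train AUC:", roc_auc_score(y.iloc[train_idx], model.predict_proba(X.iloc[train_idx])[:, 1]))
print("test AUC: ", roc_auc_score(y.iloc[test_idx], model.predict_proba(X.iloc[test_idx])[:, 1]))
```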

Regression model using firmographic and technographic features | model performance on the test dataset using available data points

Not bad, right?! There is a clear overfitting issue, but the performance is not dreadful apart from a blob in the center.

Now we ran the same logistic regression against 2 features: predicted number of tracked users (which we know to be highly tied to the value of the product) and predicted revenue. These features are the result of predictive models that we run against a much larger data set and take into account firmographics (Alexa rank, business model, company size, market segment, industry…) along with technographics (types of technologies used, number of enterprise technologies…) and custom data points. Here’s how the regression performed:

Regression with MadKudu features | model performance on the test dataset using 2 MadKudu features

Quite impressive to see how much better the model performs with fewer features. At the same time, we run less risk of overfitting, as you can see.

The TL;DR is that no amount of algorithmic brute force applied to these B2B data sets will ever make up for appropriate data preparation.

In essence, Salesforce is outsourcing the data science part of building AI driven sales models to startups who will specialize in verticals and/or use-cases. MadKudu is a perfect illustration of this trend. The expert knowledge we’ve accumulated by working with hundreds of B2B SaaS companies is what has enabled us to define these smart features that make lead scoring implementations successful.

So there you have it, MadKudu needs Salesforce to focus on its core value and Salesforce needs MadKudu to make its customers and therefore Einstein successful. That’s the beauty of a platform business model.
I also strongly believe that in the near future there will be a strong need for a “training dataset” marketplace. As more platforms make ML/AI functionality available, being able to train them out-of-the-box will become an important problem to solve. These “training datasets” will contain a lot of expert knowledge and be the result of heavy data lifting.

Feel free to reach out to learn more

Images:
www.salesforce.com
Google trends
MadKudu demo Jam 3

PS: To be perfectly clear, we are not dissing IBM’s technology, which is state of the art. We are arguing that out-of-the-box AI has been overhyped in the Enterprise and that project implementation costs have been underestimated due to a lack of transparency about the complexity of configuring such platforms.

Are Automation and AI BS?

A couple of weeks ago, I ended up taking Steli’s click bait and read his thoughts on sales automation and AI. There isn’t much novelty in the comments nor in the objections presented. However, I felt compelled to write an answer. Part of the reason is that MadKudu is currently being incubated by Salesforce as part of the Einstein batch. Needless to say, the word AI is uttered every day to the point of exhaustion.

The mythical AI (aka what AI is not today)

The main concern I have around AI is that people are being confused by all the PR and marketing thrown around major projects like Salesforce’s Einstein, IBM’s Watson and others – think Infosys Nia, Tata Ignio, Maana.io, the list goes on.

Two months ago, at the start of the incubator, we were given a truly inspiring demo of Salesforce’s new platform. The use-case presented was to help a solar panel vendor identify the right B2C leads to reach out to – a fairly vanilla lead scoring exercise. We watched in awe as the CRM was fed Google Street View images of houses based on the leads’ addresses before being processed through a “sophisticated” neural network to determine if the roof was slanted or not. Knowing if the roof was slanted was a key predictor of the amount of energy the panels could deliver. #DeepLearning

This reminded me of a use-case we discussed with Segment’s Guillaume Cabane. The growth hack was to send addresses of VIP customers through Amazon’s Mechanical Turk to determine which houses had a pool, in order to send a targeted catalogue about pool furniture. Brilliant! And now this can all be orchestrated within the comfort of our CRM. Holy Moly! as my cofounder Sam would say.

To infinity and beyond, right?

Well, not really. The cold truth is that this could also have been implemented in Excel. Jonathan Serfaty, a former colleague of mine, for example wrote a play-by-play NFL prediction algorithm entirely in VBA. The hard part is not running a supervised model, it’s the numerous iterations to explore the unknowns of the problem and determine which data set to present to the model.

The pragmatic AI (aka how to get value from AI)

Aside from the complexity of knowing how to configure your supervised model, there is a more fundamental question to always answer when considering AI. This foundational question is the purpose of the endeavor. What are you trying to accomplish with AI and/or automation? Amongst all of the imperfections in your business processes which one is the best candidate to address?

Looking through history for patterns, it appears that the obvious candidates for automation/AI are high-cost, low-leverage tasks. This is a point Steli and I agree on: “AI should not be used to increase efficiency”. Much ink has been spilled over the search for efficiency. Henry Ward’s eShares 101 is an overall amazing read and highly relevant. One of the topics that strongly resonated with me was the illustrated difference between optimizing for efficiency vs leverage.

With that in mind, here are some examples of tasks that are perfect fits for AI in Sales:

  • Researching and qualifying
  • Email response classification (interested, not interested, not now…)
  • Email sentiment classification
  • Email follow up (to an email that had some valuable content in the first place)
  • Intent prediction
  • Forecasting
  • Demo customization to the prospect
  • Sales call reviews

So Steli is right: No, a bot will not close a deal for you but it can tell you who to reach out to, how, why and when. This way you can use your time on tasks where you have the highest leverage: interacting with valuable prospects and helping them throughout the purchase cycle. While the recent advent of sales automation has led to an outcry against the weak/gimmicky personalization I strongly believe we are witnessing the early signs of AI being used to bring back the human aspect of selling.

Closing thoughts

AI, Big Data, Data Science, Machine Learning… have become ubiquitous in B2B. It is therefore our duty as professionals to educate ourselves as to what is really going on. These domains are nascent and highly technical, but we need to maintain an uncompromising focus on the business value any implementation could yield.

Want to learn more or discuss how AI can actually help your business? Feel free to contact us

3 steps to determine the key activation event

Most people by now have heard of the product “key activation event”. More generally, Facebook’s 7 friends in the first 10 days, Twitter’s 30 followers… get lots of mentions in the Product and Growth communities. These examples have helped cement the idea of statistically determining goals for the onboarding of new users. A few weeks ago, somebody from the Reforge network asked how to actually define this goal and I felt compelled to dive deeper into the matter.

I love this topic, and while there have already been some solid answers on Quora from the likes of Uber’s Andrew Chen and AppCues’ Ty Magnin, and while I have already written about this overarching concept a couple of weeks ago (here), I wanted to address a few additional, tactical details.

Below are the three steps to identify your product’s “key activation event”.

Step 1: Map your events against the Activation/Engagement/Delight framework

This is done by plotting the impact on conversion of performing and not performing an event in the first 30 days. This is the core of the content we addressed in our previous post.

To simplify, I will call “conversion” the ultimate event you are trying to optimize for. Agreeing on this metric in the first place can be a challenge in itself…

Step 2: Find the “optimal” number of occurrences for each event

For each event, you’ll want to understand the required occurrence threshold (aka how many occurrences maximize my chances of success without hitting diminishing returns). This is NOT done with a typical logistic regression, even though many people try it and believe it is. I’ll share a concrete example to show why.

Let’s look at the typical impact on conversion of performing an event Y times (or not) within the first X days:

There are 2 learnings we can extract from this analysis:
– the more the event is performed, the more likely users are to convert (Eureka, right?!)
– the higher the occurrence threshold, the closer the conversion rate of people who didn’t reach it gets to the average conversion rate (this is the important part)

We therefore need a better way to correlate occurrences and conversion. This is where the Phi coefficient comes in and shines!

Below is a quick set of Venn diagrams to illustrate what the Phi coefficient represents:

Using the Phi coefficient, we can find the number of occurrences that maximizes the difference in outcome thus maximizing the correlation strength:

Step 3: Find the event for which “optimal” number of occurrences has the highest correlation strength

Now that we have our ideal number of occurrences within a time frame for each event, we can rank events by their correlation strength. This gives us, for each time frame considered, the “key activation event”.
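
Here’s a minimal sketch of steps 2 and 3, assuming a user-level table with one occurrence-count column per event plus a “converted” flag (all names are hypothetical):

```python
# For each event, scan occurrence thresholds, keep the one with the highest Phi coefficient,
# then rank events by that Phi to find the "key activation event".
import math
import pandas as pd

def phi_coefficient(a: pd.Series, b: pd.Series) -> float:
    """Phi coefficient between two boolean series (Pearson correlation of 0/1 variables)."""
    n11 = (a & b).sum()
    n10 = (a & ~b).sum()
    n01 = (~a & b).sum()
    n00 = (~a & ~b).sum()
    denom = math.sqrt(float(n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

df = pd.read_csv("user_events_first_30_days.csv")  # hypothetical: occurrence counts + "converted"
converted = df["converted"].astype(bool)

results = []
for event in (c for c in df.columns if c != "converted"):
    best_k, best_phi = max(
        ((k, phi_coefficient(df[event] >= k, converted)) for k in range(1, int(df[event].max()) + 1)),
        key=lambda kv: kv[1],
        default=(0, 0.0),
    )
    results.append({"event": event, "optimal_occurrences": best_k, "phi": best_phi})

print(pd.DataFrame(results).sort_values("phi", ascending=False).head())
```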

Closing Notes:

Because Data Science and Machine Learning are so sexy today, everyone wants to run regression modeling. Regression analyses are simple, interesting and fun. However they lead to suboptimal results as they maximize for likelihood of the outcome rather than correlation strength.

Unfortunately, this is not necessarily a native capability of most analytics solutions, but you can easily dump all of your data into Redshift and run an analysis to mimic this approach. Alternatively, you can create funnels in Amplitude and feed the data into a spreadsheet to run the required cross-funnel calculations. Finally, you can always reach out to us.

Don’t be dogmatic! The results of these analyses are guidelines, and it is more important to pick one metric to move; otherwise you might spiral down into an analysis-paralysis state.

Analysis << Action
Remember, an analysis only exists to drive action. Ensure that the events you push through the analysis are actionable (don’t run this with “email opened”-type of events). You should always spend at least 10x more time on setting up the execution part of this “key activation event” than on the analysis itself. As a reminder, here are a couple “campaigns” you can derive from your analysis:

  • Create a behavioral onboarding drip (case study)
  • Close more delighted users by promoting your premium features
  • Close more delighted users by sending them winback campaigns after their trial (50% of SaaS conversions happen after the end of the trial)
  • Adapt your sales messaging to properly align with the user’s stage in the lifecycle and truly be helpful

Images:
– MadKudu Grader (2015)
– MadKudu “Happy Path” Analysis Demo Sample

The “Lean Startup” is killing growth experiments

Over the past few years, I’ve seen the “Lean Startup” grow to biblical proportions in Silicon Valley. It has introduced a lot of clever concepts that challenged the old way of doing business. Even enterprises such as GE, Intuit and Samsung are adopting the “minimum viable product” and “pivoting” methodologies to operate like high-growth startups. However, just like any dogma, the “Lean Startup” when followed with blind faith leads to a form of obscurantism that can wreak havoc.

Understanding “activation energy”

A few weeks ago, I was discussing implementing a growth experiment with Guillaume Cabane, Segment’s VP of Growth. He wanted to be able to pro-actively start a chat with Segment’s website visitors. We were discussing what the MVP for the scope of the experiment should be.

I like to think of growth experiments as chemical reactions, in particular when it comes to the activation energy. The activation energy is commonly used to describe the minimum energy required to start a chemical reaction.

The height of the “potential barrier” is the minimum amount of energy required to get the reaction to its next stable state.

In Growth, the MVP should always be defined to ensure the reactants can hit their next state. This requires some planning which at this stage sounds like the exact opposite of the Lean Startup’s preaching: “ship it, fix it”.

The ol’ and the new way of doing

Before Eric Ries’s best seller, the decades-old formula was to write a business plan, pitch it to investors/stakeholders, allocate resources, build a product, and try as hard as humanly possible to have it work. His new methodology prioritized experimentation over elaborate planning, customer exposure/feedback over intuition, and iterations over traditional “big design up front” development. The benefits of the framework are obvious:
– products are not built in a vacuum but rather exposed to customer feedback early in the development cycle
– time to shipping is low and the business model canvas provides a quick way to summarize hypotheses to be tested

However the fallacy that runs rampant nowadays is that under the pretense of swiftly shipping MVPs, we reduce the scope of experiments to the point where they can no longer reach the “potential barrier”. Experiments fail and growth teams get slowly stripped of resources (this will be the subject for another post).

Segment’s pro-active chat experiment

Guillaume is blessed with working alongside partners who are willing to commit the resources to ensure his growth experiments can surpass their potential barrier.

The setup for the pro-active chat is a perfect example of the amount of planning and thinking required before jumping into implementation. At the highest level, the idea was to:
1- enrich the visitor’s IP with firmographic data through Clearbit
2- score the visitor with MadKudu
3- based on the score decide if a pro-active sales chat should be prompted

Seems pretty straightforward, right? As the adage goes “the devil is in the details” and below are a few aspects of the setup that were required to ensure the experiment could be a success:

  • Identify existing customers: the user experience would be terrible if Sales were pro-actively engaging with customers on the website as if they were leads
  • Identify active opportunities: similarly, companies that are actively in touch with Sales should not be candidates for the chat
  • Personalize the chat and make the message relevant enough that responding is truly appealing. This requires some dynamic elements to be passed to the chat
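
To make the flow concrete, here’s a minimal sketch of the orchestration with the enrichment, scoring and CRM lookups stubbed out. The function names, threshold and toy logic are illustrative assumptions, not Segment’s or MadKudu’s actual implementation:

```python
# Decide whether Sales should pro-actively open a chat with a website visitor.
from typing import Optional

QUALIFIED_THRESHOLD = 80  # assumed customer-fit score cut-off for prompting a chat

def enrich_ip(ip: str) -> Optional[dict]:
    """Step 1: reverse-IP firmographic enrichment (e.g. via Clearbit). Stubbed here."""
    return {"domain": "example.com", "employees": 250, "industry": "Software"}

def score_visitor(company: dict) -> int:
    """Step 2: customer-fit score (e.g. via MadKudu). Stubbed with a toy heuristic."""
    return 90 if company.get("employees", 0) > 100 else 30

def is_existing_customer(domain: str) -> bool:
    return False  # would query the CRM / billing system

def has_open_opportunity(domain: str) -> bool:
    return False  # would query open opportunities in the CRM

def should_open_proactive_chat(visitor_ip: str) -> bool:
    """Step 3: only prompt the chat for qualified, non-customer, non-opportunity visitors."""
    company = enrich_ip(visitor_ip)
    if company is None:
        return False  # unknown traffic: no chat
    if is_existing_customer(company["domain"]):
        return False  # don't treat customers like leads
    if has_open_opportunity(company["domain"]):
        return False  # already in touch with Sales
    return score_visitor(company) >= QUALIFIED_THRESHOLD

print(should_open_proactive_chat("203.0.113.42"))  # True with the toy stubs above
```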

Because of my scientific background I like being convinced rather than persuaded of the value of each piece of the stack. In that spirit, Guillaume and I decided to run a test for a day of shutting down the MadKudu scoring. During that time, any visitor that Clearbit could find information for would be contacted through Drift’s chat.

The result was an utter disaster. The Sales team ran away from the chat as quickly as possible, and for good cause: about 90% of Segment’s traffic is not qualified for Sales, which means the team was flooded with unqualified chat messages…

This was particularly satisfying since it proved both assumptions that:
1- our scoring was a core component of the activation energy and that an MVP couldn’t fly without it
2- shipping too early – without all the components – would have killed the experiment

This experiment is now one of the top sources of qualified sales opportunities for Segment.

So what’s the alternative?

Moderation is the answer! Leverage the frameworks from the “Lean Startup” model with parsimony. Focus on predicting the activation energy required for your customers to get value from the experiment. Define your MVP based on that activation energy.

Going further, you can work on identifying “catalysts” that reduce the potential barrier for your experiment.

If you have any growth experiment you are thinking of running, please let us know. We’d love to help and share ideas!

Recommended resources:
https://hbr.org/2013/05/why-the-lean-start-up-changes-everything
https://hbr.org/2016/03/the-limits-of-the-lean-startup-method
https://venturebeat.com/2013/10/16/lean-startups-boo/
http://devguild.heavybit.com/demand-generation/?#personalization-at-scale

Images:
http://fakegrimlock.com/2014/04/secret-laws-of-startups-part-1-build-right-thing/
https://www.britannica.com/science/activation-energy
https://www.infoq.com/articles/lean-startup-killed
https://en.wikipedia.org/wiki/Activation_energy

Improve your behavioral lead scoring model with nuclear physics

According to various sources (SiriusDecisions, Spear Marketing), about 66% of B2B marketers leverage behavioral lead scoring. Nowadays we rarely encounter a marketing platform that doesn’t offer at least point-based scoring capabilities out of the box.

However, this report by Spear Marketing reveals that only 50% of those scores include an expiration scheme. A dire consequence is that once a lead has reached a certain engagement threshold, the score will not degrade. As the report puts it, “without some kind of score degradation method in place, lead scores can rise indefinitely, eventually rendering their value meaningless.” We’ve seen this at countless companies we’ve worked with. It is often a source of contention between Sales and Marketing.

So how do you go about improving your lead scores to ensure your MQLs get accepted and converted by Sales at a higher rate?

Phase 1: Standard Lead scoring

In the words of James Baldwin, “If you know whence you came, there are absolutely no limitations to where you can go”. So let’s take a quick look at how lead scoring has evolved over the past couple of years.

Almost a decade ago, Marketo revolutionized the marketing stack by giving marketers the option to build heuristic engagement models without writing a single line of code. Amazing! A marketer, no coding skills required, could configure and iterate over a function that scored an entire database of millions of leads based on specific events they performed.

Since the introduction of these scoring models, many execution platforms have risen, and according to Forrester, scoring has long since become standard functionality to look for when shopping for marketing platforms.

This was certainly a good start. The scoring mechanism had however 2 major drawbacks over which much ink has been spilt:

  • The scores don’t automatically decrease over time
  • The scores are based on coefficients that were not determined statistically and thus cannot be considered predictive

Phase 2: Regression Modeling

The recent advent of the Enterprise Data Scientist, formerly known as the less hyped Business Analyst, started a proliferation of lead scoring solutions. These products leverage machine learning techniques and AI to compensate for the previous models’ inaccuracies. The general idea is to solve for:

Y = ∑ 𝞫·X + 𝞮

Where:

Y is the representation of conversion
X are the occurrences of events
𝞫 are the predictive coefficients
𝞮 is the error term

 

So really the goal of lead scoring becomes finding the optimal 𝞫. There are many more or less sophisticated implementations of regression algorithms to solve for this, from linear regression to trees, to random forests to the infamous neural networks.

Mainstream marketing platforms like Hubspot are adding to their manual lead scoring some predictive capabilities.

The goal here has become helping marketers configure their scoring models programmatically. Don’t we all prefer to blame a predictive model rather than a human who hand-picked coefficients?!

While this approach is greatly superior, there is still a major challenge that needs to be addressed:

  • Defining the impact of time on the scores

After how long does having “filled a form” become irrelevant for a lead? What is the “thermal inertia” of a lead, aka how quickly does a hot lead become cold?

Phase 3: Nuclear physics inspired time decay functions

I was on my way home some time ago when it struck me that there was a valid analogy between leads and nuclear physics – a subject in which my co-founder Paul holds a master’s degree from Berkeley (true story). The analogy goes as follows:
Before a lead starts engaging (or being engaged by) the company, it is a stable atom. Each action performed by the lead (clicking on a CTA, filling a form, visiting a specific page) results in the lead gaining energy, moving it further from its stable state. The nucleus of an unstable atom will start emitting radiation to lose the gained energy. This process is called nuclear decay and is quite well understood: the time taken to release the energy is defined through the half-life (λ) of the atom. We can now compute, for each individual action, its impact on the lead over time and how long the effects last.

Putting all the pieces together we are now solving for:

Y = ∑ 𝞫·f(X)·e^(−t(X)/λ) + 𝞮

Where:

Y is still the representation of conversion
X are the events
f are the feature functions extracted from X
t(X) is the number of days since the last occurrence of X
𝞫 are the predictive coefficients
λ are the “half-lives” of the events in days
𝞮 is the error term
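
Here’s a sketch of how the decayed features can be built before fitting the coefficients; the events table, label file and half-life values below are illustrative assumptions:

```python
# Build e^(-t(X)/λ)-decayed features per lead, then fit the coefficients with a logistic regression.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

HALF_LIFE_DAYS = {"filled_form": 7, "visited_pricing": 3, "opened_email": 1}  # illustrative λ values

# Hypothetical table: one row per (lead_id, event) with occurrences and days since last occurrence
events = pd.read_csv("lead_events.csv")
events["decayed"] = events["occurrences"] * np.exp(
    -events["days_since_last_occurrence"] / events["event"].map(HALF_LIFE_DAYS).fillna(7)
)

# One decayed feature per event type, one row per lead
X = events.pivot_table(index="lead_id", columns="event", values="decayed", aggfunc="sum", fill_value=0)
y = pd.read_csv("lead_labels.csv").set_index("lead_id").loc[X.index, "converted"]

model = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(X.columns, model.coef_[0].round(3))))  # the fitted 𝞫 per decayed feature
```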

 

This approach yields better results (~15% increase in recall) and accounts very well for leads being reactivated or going cold over time.

top graph: linear features, bottom graph: features with exponential decay

 

Next time we’ll discuss how unlike Schrödinger’s cat, leads can’t be simultaneously good and bad…

 

Credits:
xkcd Relativistic Baseball: https://what-if.xkcd.com/1/
Marketo behavioral lead score: http://www.needtagger.com
Amplitude correlation analysis: http://tecnologia.mediosdemexico.com
HubSpot behavioral lead score: http://www.hubspot.com
MadKudu: lead score training sample results

What we can learn from Ants to improve SaaS conversion rates

SaaS onboarding is the beating heart of your business. In our era of freemium, trials and other piloting processes, ramping up prospects who signed up for your product can make or break your forecasts. Increasing free-to-paid conversion rates can therefore be a daunting task. You may feel overwhelmed by the incredible number of factors you can tamper with. The myriad of solutions out there, while doing a great job at solving specific problems, rarely help identify the main levers for improving SaaS conversion rates.
Today, we’ll discuss an approach to identifying these levers and how to execute against them.

Ant colony optimization

At this point you might be wondering what’s this business about Ant Colonies helping improve SaaS conversion rates.
In the real world, ants have developed a rather intriguing heuristic to optimize their path to food patches. They initially wander in random directions away from the colony, laying a pheromone trail on their path. As they find food and return, they increase the amount of pheromone on the path to the food. The other ants from the group are attracted to the strongest trail, which will be the one closest to a food source. As the pheromones evaporate, the shortest paths become increasingly more attractive until the optimal path is found. This optimization algorithm is called the ant colony algorithm. Its goal is to mimic this behavior with “simulated ants” walking around the graph representing the problem to solve.

At MadKudu, we’ve built such an algorithm and its goal is to mimic this behavior with “simulated ants” (trial users) walking around the graph (performing sequences of events) representing the problem to solve.
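
For the curious, here’s a toy version of that pheromone loop on a small, made-up onboarding graph. It illustrates the mechanism described above, not MadKudu’s production algorithm:

```python
# Tiny ant colony optimization sketch: ants walk a weighted DAG of onboarding states,
# reinforce short paths with pheromone, and evaporation lets the best trail emerge.
import random

graph = {  # edge weights = "distance" between onboarding states (illustrative)
    "signup": {"invite": 2, "import": 4},
    "invite": {"report": 4, "import": 1},
    "import": {"report": 2},
    "report": {"convert": 1},
    "convert": {},
}
pheromone = {(a, b): 1.0 for a in graph for b in graph[a]}

def walk(start="signup", goal="convert"):
    """One 'ant' walks the graph, picking edges proportionally to pheromone / distance."""
    path, node = [start], start
    while node != goal and graph[node]:
        node = random.choices(
            list(graph[node]),
            weights=[pheromone[(node, n)] / graph[node][n] for n in graph[node]],
        )[0]
        path.append(node)
    return path if node == goal else None

for _ in range(500):  # simulate 500 ants
    path = walk()
    if path is None:
        continue
    length = sum(graph[a][b] for a, b in zip(path, path[1:]))
    for edge in zip(path, path[1:]):
        pheromone[edge] += 1.0 / length  # shorter paths get stronger reinforcement
    for edge in pheromone:
        pheromone[edge] *= 0.95  # evaporation

print(max(pheromone, key=pheromone.get))  # the strongest trail sits on the optimal path
```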

Identify milestone events

You’ve probably heard about Facebook’s famous “7 friends in 10 days“. The key drivers of conversion, or “key conversion activities”, are the user activities that are most associated with conversion. Identifying those key activities allows you to focus your engagement efforts on things that truly move the dial. For example, you can write content that most effectively helps users get value from the product, and convert them.

At MadKudu, we use a standard decomposition of onboarding events into 3 groups. Using advanced analytics, we identify and distinguish between those 3 types of activities:

Activation Activities

These are activities that users absolutely need to do to convert, even though doing them does not indicate they will convert. In other words, they are required but not sufficient.
These activities are typically things like “setting up an account” or “finishing the onboarding steps” or “turning on a key integration”.

Engagement Activities

These are the core activities of your product. This is where users get recurring value from your product. Users who perform these activities often will convert. Those who don’t will most likely not.
The key is to find which activities truly matter and how many occurrences are necessary until the point of diminishing returns is reached.

Delight Activities

These are activities that are done by few users, your most advanced users. Users who don’t do those activities are not less likely to convert. But those who do are very likely to convert.
Make sure to identify what these activities are and promote them to advanced users when the time is right.

DIY

In order to map out your onboarding events, you can calculate for each event:
– the conversion rate of users who performed the event: P(X)
– the conversion rate of those who did not: P(¬X)

You can then determine the impact of performing the event (P(X) – average conversion) and the impact of not performing the event (P(¬X) – average conversion).
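
Here’s a minimal sketch of that calculation, assuming a user-level table with one boolean column per onboarding event plus a “converted” flag (hypothetical names):

```python
# Compute, for each event, the impact of performing it and of not performing it,
# i.e. the two axes of the onboarding-event plot described below.
import pandas as pd

df = pd.read_csv("onboarding_events.csv")
avg = df["converted"].mean()

rows = []
for event in (c for c in df.columns if c != "converted"):
    did, did_not = df[df[event]], df[~df[event]]
    rows.append({
        "event": event,
        "impact_of_performing": did["converted"].mean() - avg,          # y-axis
        "impact_of_not_performing": did_not["converted"].mean() - avg,  # x-axis
    })

print(pd.DataFrame(rows).sort_values("impact_of_performing", ascending=False))
```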

Finally you can graphically represent your onboarding events as such:

Anything on the left is a requirement to have a chance to convert. Anything at the top is strongly correlated to converting.

Or you can contact us ;-)

From analytics to results

There are many ways to make this actionable, here are just a few:

  • Create a behavioral onboarding drip (case study)
  • Close more delighted users by promoting your premium features
  • Close more delighted users by sending them winback campaigns after their trial (50% of SaaS conversions happen after the end of the trial)
  • Adapt your sales messaging to properly align with the user’s stage in the lifecycle and truly be helpful

If you’d like to dive deeper into your onboarding funnel or discuss implementing some of the tactics above, you can signup for MadKudu or reach out to us.

Photo: www.cusuitemusings.com/
Image: Multiobjective Optimization of an Operational Amplifier by the Ant Colony Optimisation Algorithm (http://article.sapub.org/)
Plot: MadKudu “Happy Path” Analysis Demo Sample

3 reasons why B2B SaaS companies should segment trial users

99% of the B2B SaaS companies I talk to don’t segment their free trial users.

This is a shame because we all know our trial users can be very different from one another.

For example, have you heard of accidental users? Those users signed up thinking your product did something else and left soon after realizing their mistake (much more common than you might think!).

Or what about tire-kickers? Yes, a surprisingly large number of people like to try products with no intention of buying ever (more about it in this great post from Matt Pope).

There are also self-service users. They are actively evaluating your product but don’t want to talk to a human being, especially a sales person.

The enterprise buyer is an interesting profile. She will likely buy an expensive plan and will appreciate getting help from an account executive.

 

“Sure thing… why should I care now?”

Fair question. Here is what happens when little is done to identify the different types of trials.

1. The overall conversion funnel has little meaning

A SaaS company we work with was worried because their trial-to-paid conversion rate had decreased 30%. Is this because of the new product feature they just released? Or maybe there is an issue with the email drip campaign? The explanation was simpler: A large number of tire-kickers coming from ProductHunt suddenly signed up. Their very low conversion rate crashed the overall conversion rate.

Looking at the trial-to-paid funnels by customer segment is the best way to understand how your product and sales activities affect conversions, regardless of variations in customer signups.

2. You are selling and building the wrong product features

Understanding how your product is used is essential to effectively sell and improve your product.

But looking at overall product usage metrics is misleading. The accidental users and tire-kickers usually make up a large chunk of your customers. Looking at overall usage metrics means that you may well be designing your sales and product strategy to fit your worst customer segments!

When looking at product usage, make sure to focus on your core user segment. The features they care about are the features to sell and improve.

3. You are spending your time and money on the wrong trial users

There are lots of ways in which a lack of segmentation hurts your sales and customer success efforts:

  • Tire-kickers take away precious time from sales and customer success. This time could be spent on selling and helping core users.
  • Customers with high potential value don’t get extra love. Many sales teams spend huge amounts of time on tiny customers while underserving larger customers.
  • Trying to get buyers to use your product and trying to get users to buy is a waste of everybody’s time. In B2B, the buyer is often not a heavy user. For example, a CTO will pull the credit card and pay for an app monitoring software, but he or she will use the software only occasionally. Educating the CTO on the nuances of the alert analysis feature doesn’t help anyone!
  • Sales trying to engage self-service users hurts conversions. Some users appreciate having an account representative help them evaluate a product while others want to do their evaluation on their own. Knowing who’s who is critical for both customers and sales teams.

 

How to get started?

One way, of course, is to use MadKudu (passionate, self-interested plug). Otherwise the key is to start simple. Talk to your best customers to get a qualitative feel of who they are, and look at your customer data to find out what similar characteristics are shared by your best customers. Then put together a simple heuristic to segment your customers and implement this logic in your CRM and analytics solution.
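
A heuristic like this can literally be a handful of explicit rules. Here’s an illustrative sketch; the thresholds and field names are made up:

```python
# Toy trial-user segmentation heuristic; adapt the rules to what your best customers share.
def segment_trial_user(user: dict) -> str:
    if user.get("email_domain") in {"gmail.com", "yahoo.com"} and user.get("company_size", 0) == 0:
        return "tire_kicker_or_accidental"
    if user.get("company_size", 0) >= 500 or user.get("requested_demo"):
        return "enterprise_buyer"
    if user.get("sessions_last_7_days", 0) >= 3:
        return "self_service"
    return "core_smb"

print(segment_trial_user({"email_domain": "acme.com", "company_size": 800}))  # enterprise_buyer
```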

This effort will go a long way to increase your trial-to-paid conversion rates.

Now back to you. Do you have different segments for your trial users? If no, why not? If yes, what are those segments? Who is using them? Continue the conversation on twitter (@madkudu) or email us hello@madkudu.com!

Achieving personalization at scale in B2B sales

I was trying to write a title as pompous and buzzword-laden as possible, and I do believe I’m close. Who knows, we might even get featured on TechCrunch with these ramblings on how “big data” is enabling the ultimate phase of the B2B sales & marketing revolution…

Over the past few weeks at MadKudu, we’ve run a thorough retrospective on 2016 to flesh out what we’ve learnt, which hypotheses were validated, which were proven wrong.
The exciting learning is that we’re onto something big, something HUGE!
We’ve validated the fact that lead prioritization enablement was commonly sought. But more importantly we’ve realized that lead scoring solutions as they exist today are only duct-tape on a broken process. Since companies aren’t able to handle personalized onboarding at scale, they reduce the scale by focusing on a subset of leads to manually personalize the experience for. Welcome to the world of the inbound SDR. MadKudu is set to change this and bring us one step closer to completing the marketing & sales revolution by operationalizing personalization (channel, message…) at scale.
In essence the main actionable learning is that operationalization is 10x more valuable than enablement. It’s actually a completely different sport.

The Sales & Marketing Revolution

The term revolution is mainly used to describe the overthrow of an order in favor of a new one. But the root of the word ties back to the concept of going full circle. So when we talk about the sales & marketing revolution, we mean we’re getting back to a previous state. While we’ll dedicate a specific post to this topic, a high-level history of marketing would go as such:
– Before the industrial revolution, people bought from local stores and suppliers. This was the era of one-to-one personalization of the product to the customer’s needs.
– The industrial revolution changed everything, the product was now king. Our newly discovered ability to mass produce meant we needed to find ways to ship these products. This started the era of the marketing mix’s 4P (product, price, promotion, placement) in marketing.
– In more recent days, the rise of the internet 2.0 marked the rise of the SDR. With online products being available for billions of people and marketing strategies still focusing on bringing in as many prospects as possible, there was a new need to qualify potential customers.
– The “big data” revolution. Data science has started powering personalization and relevance at scale in eCom marketing for a few years now. Amazon led the charge with its recommendation engine and many companies have since then applied data science to make the B2C sales experience more relevant (at AgilOne, we did a lot of this). The shift from the 4Ps towards the 5Cs is another illustration of this trend of putting back the customer at the center of marketing activities.

What “big data” brings to Sales

There is a common misconception that big data equates to huge quantities of data and is thus more appropriate for marketing than sales, and for B2C rather than B2B. But there are really 3 aspects to big data:
– massive data sets (high volume)
This is what companies like facebook, google deal with. We’re talking trillions of records of data to process. The main challenge here is scalability and is only seen in B2B2C or B2C companies.
– fast data (high velocity)
This is what real time analytics systems deal with. Recommender systems, trading algorithms are great examples of systems dealing with high velocity data.
– complex data sets (high variety)
Here’s the least sexy and least known aspect of the lot. B2B companies generate big data with records coming from sales data, product usage, customer profiles, support tickets… While real-time analytics and scalability are challenges, the hard nut to crack is the identity layer: combining all the information into a comprehensible data set. Machine Learning algorithms will only ever be as good as the input they are fed.

Why is B2B Sales broken

This final aspect has been paralyzing the B2B space and has thus become a great source of innovation. Companies are spending billions of dollars to get their data together (getBirdly, Jitterbit) and stitch it together (LeanData, AgilOne). The hardest part, though, remains making the data actionable. This is where big data can help reach the holy grail of sales and marketing: “personalization to foster relevance, at scale”.
Lead scoring tools so far have been built with this in mind. They leverage the multitude of data points available to automate – to some extent – the qualification historically run by SDRs.

BANT Qualification process:
B => mainly firmographic data to determine if the account would have budget for your top tier pricing
A => mainly demographic to determine how close is this person to having a budget line item for your product
N => mainly firmographic to determine if the account likely to be a successful user of your product or at least have a need for it
T => mainly behavioral to determine if the account’s aggregated behavior is indicative of a strong likelihood to purchase your product in the near future.
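
As an illustration, a BANT-style qualification can be sketched as a simple scoring function; the weights, thresholds and field names below are assumptions, not an actual MadKudu model:

```python
# Toy BANT scorer: each letter maps to the data type described above.
def bant_score(lead: dict) -> int:
    score = 0
    score += 25 if lead.get("company_size", 0) >= 200 else 0                      # Budget: firmographic proxy
    score += 25 if lead.get("seniority") in {"director", "vp", "c_level"} else 0  # Authority: demographic
    score += 25 if lead.get("industry") in {"saas", "ecommerce"} else 0           # Need: firmographic fit
    score += 25 if lead.get("pricing_page_visits", 0) >= 2 else 0                 # Timing: behavioral signal
    return score

print(bant_score({"company_size": 500, "seniority": "vp", "industry": "saas", "pricing_page_visits": 3}))  # 100
```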

And so this is where big data has been helping so far. Lead scoring solutions have been doing a great job at getting SDRs to focus on a small subset of leads that they can then write personal emails to through bulk email solutions like Yesware or Salesloft…

Where this approach falls short is that sending emails manually doesn’t make them personal, let alone relevant. We all receive tens of emails like this every day:
Example: a generic “personalized” prospecting email

From cartography to self-driving cars

A couple of weeks ago, Guillaume Cabane, VP Growth at Segment, made a striking analogy between cartography and B2B sales. Cartography is the representation of the overall landscape of your leads. It is used to determine the routes you need to follow to reach your destination. This is your initial ideal customer profile analysis. The GPS is an automated way of telling you how to get to your destination. This is lead scoring as we know it today. The self-driving car is built upon a GPS and executes the commands reliably and automatically. This is the future of B2B sales, the idea of a “software SDR”.
In essence, the great opportunity to seize in 2017 lies in realizing the era of the GPS as a stand alone tool is over. We are now heading into a world of self-driving cars.
Not only are we convinced about this, the early tests we’ve been running so far are encouraging. Our software SDR has consistently outperformed regular SDRs by at least 66% on the number of qualified demos booked. Not only were we generating more meetings, we also freed up time for the sales team so they could focus on what they do best: adding value to the prospects we’ve already engaged for them.

Here’s to 2017, year of the true sales automation!

Image credit: A future lost in time