The biggest source of friction in the customer journey is you

Ten years ago Amazon introduced same-day delivery, probably the single most important feature in cementing their dominance of the eCommerce industry. They did this after 10 years of innovating on the online shopper experience – recommended purchases, one-click payments, experiments on how website latency affected conversion rates – and they understood that the biggest source of friction in their buyer experience was waiting for your package to arrive.

We all have an idea in our head about what makes a great customer journey, a great buyer experience. When Francis asked me out of the blue, the first thing that came to mind was my experience buying an engagement ring last year, but I could just as easily point to the experience of creating a new Slack team. They are magical experiences. You never see what’s going on behind the curtain, and you never have any downtime to think about it. Is my package already in Paris? How did Amazon know what I was going to order? Doesn’t matter. It’s already arrived before I can begin to comprehend how they possibly do that at scale.

For SaaS companies today, increasing revenue is often about removing friction. The product team designs and improves features so that customers don’t have time to wonder whether the competition is building a better product. Customer Success watches customer health metrics to identify at-risk customers before they even think about churning, and to improve their results.

Marketing & Sales have a plethora of data & engagement tools so that they know everything about whom they’re engaging with, from Clearbit-enhanced Drift bots to segmented Outreach campaigns encouraging prospects to sign up for webinars or jump on a call.

You are the friction.

"You start building this vision of what you want the customer journey to be, but you don't realize how far removed you are from your customer."

So why is it that 90% of SaaS companies take more than five minutes to follow up on a request to schedule a demo? Francis suggests going through your own customer journey – ideally by signing up with a friend’s email account, especially if your friend is a great fit for your product – to get the full experience. If it’s not the ~48 hours of follow-up time that’ll make you feel the friction, it’s the ~5 days between the demo request and the phone call that’ll make you rethink your process.

What makes it take so long?

  • Lead data enhancement
  • Territories/routing rules
  • SDR first-response latency
  • Email back-and-forth to validate interest and find a time to talk.

It’s easy to understand each one of those steps – after all, everything above (except maybe the emails) feels very logical – the only thing missing from the equation is the customer experience. SaaS companies are eager to over-optimize for the sake of being fair, applying rigorous rules to lead assignment, and this often flies in the face of the customer journey.

One of MadKudu’s most popular features, the Fastlane – an enhancement to signup forms that allows highly-qualified leads to skip the form and go straight to a sales rep’s calendar – is often difficult to implement initially because lead routing takes minutes: the customer ends up absorbing your operational friction.

Remove friction. Prioritize customers.

It’s easy to remove friction from the customer journey if you prioritize it. Calendly, for example, offers a great Team Scheduling feature that allows prospects to see an aggregate calendar for every potential representative and then choose a time that works for them, instead of displaying a single representative’s calendar – with fewer available time slots – after the lead has been round-robined. This puts the customer in the priority seat and accepts that reps with less immediate availability in their calendars might get routed fewer leads. In fact, that’s not a bad forcing function for making sure SDRs are prioritizing their time correctly.

How we use Zapier to score Mailchimp subscribers

There’s no better way to get your story out there than to create engaging content with which your target audience identifies. At MadKudu, we love sharing data-driven insights and learnings from our experience working with Marketing Operations professionals, which has allowed us to take the value we strive to bring our customers every day and make it available to the marketing ops community as a whole.

As interest in our content grew, it was only natural to leverage Zapier to quickly understand who was signing up and whether we should take the relationship to the next level.

Zapier is a great way for SaaS companies like us to quickly build automated workflows around the tools we already use, to make sure our customers have a frictionless, relevant journey. We don’t want to push every Mailchimp subscriber to Salesforce: not only would that create a heap of contacts that aren’t sales-ready, but we might end up inadvertently reaching out to contacts who don’t need MadKudu yet, giving a potential customer a negative first impression of us.

Today we can see which newsletter subscribers sales should be paying attention to. Here’s how:

Step 1: Scoring new newsletter subscribers

The first step is to make sure you grab all new subscribers. Zapier makes that super easy with their Mailchimp integration.

Next we want to send those new subscribers to MadKudu to be analyzed. While MadKudu customers have a dedicated MadKudu integration, Zapier users who aren’t MadKudu customers can also leverage Zapier’s native Lead Score app, which is (you guessed it) powered by MadKudu.

Step 2: Filter by Lead Score

We’ve already configured our MadKudu score, so after we feed a new subscriber to MadKudu, we run a quick filter to make sure we only act if the Lead Score is “good” or “very good.”

If you’re worried that the bar will filter out potentially interesting leads, consider this a confidence test of your lead score.

Zapier Filtering by Lead Score Quality
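The filter step is point-and-click in Zapier, but it helps to see how little logic it actually encodes. Here’s a sketch in plain Python, assuming a hypothetical `lead_score_quality` field on the subscriber record (use whatever field your Lead Score step outputs):

```python
# Sketch of the lead-score filter as plain logic.
# "lead_score_quality" is a hypothetical field name.
ACCEPTED_QUALITIES = {"good", "very good"}

def passes_lead_score_filter(lead: dict) -> bool:
    """Only let a subscriber through when the score clears the bar."""
    quality = str(lead.get("lead_score_quality", "")).strip().lower()
    return quality in ACCEPTED_QUALITIES

passes_lead_score_filter({"lead_score_quality": "Very Good"})  # True
passes_lead_score_filter({"lead_score_quality": "low"})        # False
```

Anything that doesn’t pass simply stops the Zap, which is exactly the confidence test described above.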

Step 3: Take Action, Communicate!

For Mailchimp signups that pass our Lead Score filter, we next leverage the Salesforce integration in Zapier to either find the existing contact inside Salesforce (they may already be there) or create a new lead. Salesforce has made this very easy with the “Find or Create Lead” action in Zapier.

Once we’ve synced our Mailchimp lead to Salesforce, we use the Slack integration on Zapier to broadcast everything we’ve created so far to a dedicated #notif-madkudu channel, which collects all the quality leads coming from all of our lead generation channels.

Directly inside Slack, our team can get actionable insights:

  • The MadKudu score, represented as 3 stars (normal stars for Good, twinkling for Very Good)
  • The signals that MadKudu identified in this lead, both positive and negative
  • A link to the lead in Salesforce, for anyone who wants to take action/review

Actionable Lead Scoring applied to your Newsletter

Our goal here isn’t to reach out to newsletter subscribers – we want to build a long-term relationship with them, and we’re happy to keep delivering quality content until they’re ready to talk about actionable lead scoring. What we’re able to do is see, qualitatively & quantitatively, how many of our newsletter subscribers are a good fit for MadKudu today.

This helps marketing & sales stay aligned on the same goal. Marketing is measuring newsletter growth with the same metric it’s using to measure SQL generation.

Segmenting Funnel Analysis by Customer Fit

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations – like applying lead scoring to funnel analysis. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

A lead score is the foundation for your marketing & sales alignment. It creates accountability for both teams and is the foundation of a strong Sales SLA. A foundation is only as useful as what you build on top of it, and that’s why we talk about Actionable Lead Scoring – leveraging your lead score to create a frictionless journey. Today we’re going to focus on how you can leverage your lead score in funnel analysis to see where your best leads are falling off.

Funnel Analysis & Actionable Intelligence

Understanding the customer journey’s inflection points and conversion rates is essential to scaling & maintaining success as a software company; however, the analysis you’re doing is just as important as the data you’re using to generate that analysis.

The goal of funnel analysis is to look at ways to remove friction from the customer journey, to improve activation & conversion, and to make sure that the users who should engage most with your product do. Accomplishing that goal without segmenting by lead score is like turning every lead into an opportunity in Salesforce and then trying to improve your win rate. You need to start with the right metric by answering the right question: what are my best leads doing, and how can I make their journey better?

If you're not applying lead score to funnel analysis, you're making decisions based on flawed data.

Applying Lead Score to Funnel Analysis

Let’s imagine you want to look at the first 15 days of user activity in your self-service product, which corresponds to your 14-day free trial and immediate conversion. Of course, you already know that 50% of conversion on freemium occurs after the trial expires, but you’re looking to identify engagement drop-off before the trial even expires. After all, customers can’t convert if they don’t stay active.

A simple cohort analysis of all users who signed up over a two-week period would show that over 60% are dropping off in the first 24 hours, a smaller chunk 5 days out, and another group at the end of trial. You might conclude that you need to rework your onboarding drip campaign’s first emails in order to combat that big next-day dropoff. That would make sense, except are the people who are dropping off the prospects that matter most? Probably not.

Very good leads have a different funnel than very bad leads

One MadKudu customer came to this exact conclusion, and despite various drip campaign tests, they didn’t see that 60% drop-off move. Then we segmented their funnel analysis, looking at how very good, good, bad & very bad leads behaved, and we found that most of that 60% drop-off was very bad leads: they had made their sign-up process so frictionless that they were getting spam sign-ups who were never going to actually use the product. As it turned out, that small dip after 5 days corresponded to the biggest area of drop-off for very good leads, who were dropping off at the end of their intense drip campaign, which only lasted 5 days.

In this case, not segmenting by customer fit completely masked where their focus should be, and they spent time trying to get spam signups to stay engaged with their product instead of looking at how their highest value prospects were engaging with their product.
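To make the difference concrete, here’s a minimal pandas sketch of the same segmentation, using toy data and illustrative column names (a real export would come from your analytics tool):

```python
import pandas as pd

# Toy stand-in for a signup-cohort export: one row per user, with the
# customer-fit segment and the trial day (0-14) on which the user was
# last active. Column names are illustrative.
events = pd.DataFrame({
    "user_id":         [1, 2, 3, 4, 5, 6],
    "customer_fit":    ["very good", "very bad", "very bad",
                        "good", "very good", "very bad"],
    "last_active_day": [5, 0, 0, 14, 5, 1],
})

# Blended view: where does the cohort as a whole drop off?
blended = (events["last_active_day"]
           .value_counts(normalize=True)
           .sort_index())

# Segmented view: where does each fit segment drop off?
segmented = (events.groupby("customer_fit")["last_active_day"]
                   .value_counts(normalize=True)
                   .rename("share")
                   .reset_index())
print(segmented)
```

In the toy data, the blended view shows a big day-0 drop, but the segmented view reveals it is entirely driven by very bad leads, while the very good leads all drop at day 5 – the same masking effect described above.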

Our recommended Setup

If you’re looking to start segmenting funnel analysis by Customer Fit, our recommended MarTech stack is to feed MadKudu scores into the product analytics solution Amplitude using Segment’s customer data platform.
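If you’re wiring this up yourself, the heart of the setup is a Segment `identify` call that carries the fit segment as a user trait, which Amplitude can then expose as a user property for segmenting funnels. A sketch of the payload shape, with a hypothetical trait name:

```python
# Sketch of a Segment `identify` payload carrying a customer-fit segment.
# The trait name "madkudu_customer_fit" is an assumption - use whatever
# trait your integration actually writes.
def build_identify_payload(user_id: str, fit_segment: str) -> dict:
    """Shape of the identify call that pushes the fit trait downstream."""
    return {
        "type": "identify",
        "userId": user_id,
        "traits": {"madkudu_customer_fit": fit_segment},
    }

payload = build_identify_payload("user_42", "very good")
```

Once that trait flows through Segment, segmenting an Amplitude funnel by it is a dropdown choice rather than a data project.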

Account-Based Engagement and the Fallacy of Job Titles

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations, such as Account-Based Engagement. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

As we saw recently with the Sales SLA, the path to alignment often starts & ends with clear definitions of metrics. The leads marketing hands to sales need to have the same definition & measurement for success, which is where actionable lead scoring plays a key role in establishing lasting alignment.

If we step back from Sales & Marketing and look at aligning each department to business objectives, we can see that metric disjunction can result in each individual team being successful while ultimately failing to create a relevant customer journey at scale.

The fallacy of job titles

One area where we often observe this is when we run funnel analysis by customer fit and look at job titles as predictors of activation and conversion. On self-serve tools such as API-based products, we often see that someone with a developer title is more likely to activate but very unlikely to convert (that is, to hand over the credit card), whereas someone with a CEO/owner title is more likely to hand over the credit card, but less likely to activate.

One analysis we recently ran for a customer demonstrated this perfectly:

How job title affects conversion | Account-Based Engagement

  • Developers convert 60% less than the average user
  • Founders, CEOs & marketing convert 70-80% more than the average user.

When we look at conversion & activation side-by-side for this same customer, the numbers speak for themselves:

Conversion vs. Activation | Account-Based Engagement

  • Founders/CEOs don’t use the software that much but end up converting highly
  • Product & Project managers have a higher activation but lower conversion rate

Product teams are historically motivated by increasing activation by building an increasingly engaging product; however, a developer is unlikely to respond to marketing’s nurturing emails or jump on a first sales call no matter how active they are on the product.

Likewise with more sales-driven products like enterprise software, SDRs are often singularly focused on the number of meetings they can generate for their AEs; however, low-level team members are significantly more likely to jump on a phone call and significantly less likely to convert as compared to their director counterpart.

In both of these instances, we see that product & sales development are able to optimize for their metric without accomplishing the core business objective of creating a great customer journey.

How Account-Based Engagement changes the rules

What this comes back to is account-based engagement, a nascent term in the marketing space stemming from the principle of account-based marketing but extending it across the entire customer journey and to all customer-facing teams. Where account-based marketing encourages running campaigns to generate interest not at the individual lead level but at the account level – especially important when you have multiple stakeholders in the decision-making process – account-based engagement extends that to all teams, meaning that:

  • Product teams should seek not only to make as many active users as possible, but to create active accounts: building features that encourage getting other stakeholders involved or making it easy for your hero to evangelize your product value to other stakeholders.
  • Marketing teams should seek to generate not marketing qualified leads but marketing qualified accounts, including nurturing existing accounts in order to get other stakeholders involved so as to set sales up for success.
  • SDRs should seek to generate meetings at the account level, not at the lead level, and shouldn’t be working accounts where the necessary stakeholders are not already involved.

Account-Based Engagement | Identifying hidden opportunities

We’ve been recently working with two of our bigger customers who have a prosumer user base to identify marketing-qualified accounts that aren’t getting attention. We do this by looking not only at customer-fit at the account level – does the account look like the type of accounts that typically convert when sales engages – but also at behavioral-fit: are they engaging with the product the way paying customers typically do?

Sales reps who are qualifying leads as soon as the account is created aren’t going to be able to sift through the hundreds of warm accounts to identify which accounts have engaged properly (and been properly engaged) to be sales-ready; however, this is core to Account-Based Engagement. Just as our Sales SLA gives a common metric for marketing & sales to work towards, so Product, Customer Success, Sales & Marketing all need to have a common qualification criteria for an account in order to be aligned on how best to achieve business goals.

Remember: In B2B, you’re not selling to users, you’re selling to Accounts

The goal is not to reduce all teams to a single metric like revenue-generated, but rather to help reduce the natural tendency to game a metric by linking a common thread between the metrics that we use to measure success. That thread is Accounts.

It is all too easy to lose track of the fact that selling B2B software means that a company is going to buy your software, not a person. There are users, decision-makers, stakeholders and other advisors in the buying process, but at the end of the day a company is going to make a decision about whether to pay another company for their solutions. In this respect, every team should be focused on how to acquire, activate, convert & retain accounts, because at the end of the day it is not a user that will churn but an account.

Sales SLA: how accountability fosters sales & marketing alignment

Every week during our check-in, MadKudu Co-Founder & CRO Francis Brero & I talk about our current priorities. Our regular call has also become an opportunity for Francis to download some knowledge from his time working with some of the top SaaS Sales & Marketing organizations. What started as an effort to onboard me with recordings & note-taking has turned into a series I call MadOps.

I first heard about a Sales SLA in my first week after joining MadKudu. I was familiar with a Service Level Agreement (SLA) – a commitment from the engineering team around reliability with varying repercussions if we violated the SLA – but I had never interacted with a Sales SLA, despite being in marketing.

When one of our customers is having trouble with hitting revenue goals, the Sales SLA is almost always where we start, so let’s start there.

Sales SLA: A contract between Sales & Marketing

A Sales SLA is an agreement between marketing & sales whereby:

Marketing commits to generate N Very Qualified Leads per quarter, and

Sales commits to reach out to 99% of those leads within H hours, and to contact them at least T times in the first D days

Most marketing teams have a quarterly lead generation goal. A Sales SLA doesn’t measure MQLs or SQLs – it measures Very Qualified Leads: MQLs with the potential to become customers. Marketing agrees to create enough expected revenue, and Sales agrees to convert it into the revenue target.

“The only people who create value out of nothing is Marketing. The role of sales is to keep the value of those leads constant until they close.”

Marketing not only needs to generate increasing amounts of value but to be able to measure its potential to become revenue.

Sales needs to reach out quickly and to continue to connect with that lead enough to feel like everything possible was tried. A typical adage is “8 times in 15 days”, but again, this varies for each customer journey.

Each variable of a Sales SLA comes with its own questions: what makes a lead very qualified? How many touch points and how quickly should a lead be reached out to? Should it vary based on lead source?

“Do we need a Lead Score?”

The Sales SLA requires scoring each lead as they sign up. Many early-stage SaaS companies wonder how they are supposed to have a Sales SLA from day one without having a lead score.

Let’s put it out there: everyone has a lead score.

Filtering spam at signup is scoring. Escalating Fortune 100 companies at sign-up is scoring. While simple, it allows you to begin defining lead quality by answering “who do you want to ignore and who do you want to talk to?”

Since everyone has a lead score, everyone therefore should have a Sales SLA. The earliest iteration can be simple: “If someone signs up through a demo form, you need to follow up faster than if they sign up for a trial.” Putting something simple in place is better than nothing at all.
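A first iteration really can be a handful of explicit rules. A sketch, with illustrative fields and thresholds (not MadKudu’s model):

```python
# A deliberately simple, rules-only first lead score. Field names
# (email, company, fortune_100, source) are illustrative.
FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}

def simple_lead_score(lead: dict) -> str:
    """Answer 'who do we ignore, who do we talk to?' with explicit rules."""
    domain = lead.get("email", "").split("@")[-1].lower()
    if not domain or (domain in FREE_EMAIL_DOMAINS and not lead.get("company")):
        return "ignore"          # likely spam or not sales-ready
    if lead.get("fortune_100"):
        return "very qualified"  # escalate immediately
    if lead.get("source") == "demo_form":
        return "qualified"       # demo requests get the fastest follow-up
    return "nurture"
```

Every branch here is a statement about who deserves faster follow-up, which is exactly what the earliest Sales SLA needs.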

Implementing a Sales SLA

The tactical owner of a Sales SLA will almost always be Sales Operations, because they are ultimately the ones managing SDR workflows today. Marketing tends to ask for a Sales SLA. It ends the cycle of sales bemoaning lead quality and marketing bemoaning sales conversion rates. The Sales SLA will move that existential, emotional debate to a practical, data-driven report.

In order to maintain a Sales SLA properly, you’ll need to track all outbound communication inside your CRM. If you’re using third-party emailing tools, every email you send needs to be tied to a lead as an activity. Otherwise you’ll get false positives, or end up adjusting your Sales SLA based on incomplete activity metrics.
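Once outbound activity is tied to leads in the CRM, checking SLA compliance is a small computation. A sketch over a hypothetical CRM export, with illustrative field names and an assumed 4-hour window:

```python
from datetime import datetime, timedelta

SLA_HOURS = 4  # the agreed H-hour window (assumption for this sketch)

# Hypothetical CRM export: when each lead was created and when the first
# logged outbound activity (call/email) happened.
leads = [
    {"lead": "a@acme.com",   "created": datetime(2018, 5, 1, 9, 0),
     "first_touch": datetime(2018, 5, 1, 10, 30)},
    {"lead": "b@globex.io",  "created": datetime(2018, 5, 1, 9, 0),
     "first_touch": datetime(2018, 5, 2, 9, 0)},   # a day late: breached
    {"lead": "c@initech.co", "created": datetime(2018, 5, 1, 9, 0),
     "first_touch": None},                          # never touched: breached
]

def sla_breaches(leads, sla_hours=SLA_HOURS):
    """Leads whose first outbound touch missed the H-hour window."""
    limit = timedelta(hours=sla_hours)
    return [l["lead"] for l in leads
            if l["first_touch"] is None
            or l["first_touch"] - l["created"] > limit]

print(sla_breaches(leads))  # ['b@globex.io', 'c@initech.co']
```

The same query, run on real CRM data, is what turns the SLA from an emotional debate into a report.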

Contract & Education

A Sales SLA doesn’t have to be written; however, in practice, a written agreement can be useful for onboarding new SDRs. Every new SDR should know what their team expects of them from day one. And every SDR should know what happens if they don’t respect it.

When the Sales SLA is broken, some organizations choose to put leads back into round robin. Others send it to a marketing nurturing funnel, or escalate it to a manager. How you implement the Sales SLA is up to you, as long as you’re tracking the metrics necessary to uphold it.

Once your Sales SLA is in place, much like an infrastructure monitoring tool, you should be able to detect outlier scenarios more quickly. SDRs may be on vacation or no longer with the company and still get leads routed to them. Certain campaign leads may get bulk routed to an old admin account. Or new team members may get routed leads before they’ve learned about the Sales SLA. None of these problems are anyone’s “fault,” but they need to be noticed & dealt with quickly.

Procrastination in Hyper-Growth

Sales SLAs can look daunting on paper, especially if you’re still in the early days of building your sales organization. At its core, a Sales SLA defines the handoff between marketing & sales. At MadKudu, for example, the handoff happens at signup today. Sales handles everything after lead generation, because we don’t yet have a need to automate that part of the funnel. We have a number of indicators (company size, technologies used, etc.) that we know correspond closely to someone needing MadKudu. This allows us to be pretty explicit about what makes a lead Very Qualified.

“People don’t put SLAs in place because they want to avoid having tough conversations”

Creating a Sales SLA is going to shed a spotlight on all the cracks in your sales funnel, especially when you’ve been dealing with hyper-growth recruiting. If leads aren’t getting followed up on, you’re going to have to look at the cause. Are you understaffed? Are you not scoring/routing/prioritizing properly? Or are your sales reps not reacting quickly enough?

When a Sales SLA is breached, it’s a symptom of a bigger problem, and usually no single person is at fault. Without a Sales SLA, it’s easy to overlook one of your sales reps not following up, or low-quality leads getting faster follow-up than high-quality leads. 

Start the discussion around Sales SLAs early and you’ll address problems that won’t go away unless you shed light on them.

How MadKudu makes Salesforce Einstein better

…Or why Salesforce Einstein won’t be the next IBM Watson.

Is the AI hype starting to wither? I believe so, yes.
The reality of the operational world is slowly but steadily catching up with the idyllic marketing fantasy. The report Jefferies put together challenging IBM Watson proves alarm bells are ringing. The debacle of the MD Anderson implementation goes to show how unrealistic marketing promises can be, and how dreadful the downfall is.

With that said, not all is lost, as we keep learning from our past mistakes. As part of the Salesforce Einstein-focused incubator, we are witnessing first-hand how the CRM giant is looking to succeed where Watson and others are struggling. Hopefully these insights can help others rethink their go-to-market strategy in an era of unkept commitments.

Salesforce, a quick refresher

A few weeks ago, I was interviewed for an internal Salesforce video. The question was “how has the Salesforce ecosystem helped your startup?”. To contextualize my thoughts, it’s important to know that while Salesforce is one of our main integrations, we consider it an execution platform among others (Segment, Marketo, Intercom, Eloqua…). I’ve always admired Salesforce for its “platform” business model. Being part of the Salesforce ecosystem facilitated our GTM. It gave MadKudu access to a large pool of educated prospects.

However I believe the major value add for startups is the focus induced by working with Salesforce customers. Since Salesforce is a great execution platform there are a plethora of applications available addressing specific needs. This means, as a startup, you can focus on a clearly defined and well delimited value proposition. You can rely on other solutions to solve for peripheral needs. As David Cohen reminded us during our first week at Techstars, “startups don’t starve, they drown”. Salesforce has helped us stay afloat and navigate its large customer base.

What is Salesforce Einstein?

I’m personally very excited about Salesforce Einstein. For the past 5 years, I’ve seen Machine Learning be further commoditized by products such as Microsoft Azure, Prediction.io… We’ve had many investors ask us what our moat was given this rapid democratization of ML capabilities and our answer has been the same all along. In B2B Sales/Marketing software, pure Machine Learning should not be considered a competitive advantage mainly because there are too few data sets available that require non-generic algorithms. The true moat doesn’t reside in the algorithms but rather in all the aspects surrounding them: feature generation, technical operationalization, prediction serving, latency optimization, business operationalization… The last one being the hardest yet the most valuable (hence the one we are tackling at MadKudu…).
Salesforce Einstein is a bet that innovation will happen in those areas, since anyone can now run ML models from within their CRM.

We’ve been here before

Just a reminder, this is not a new thing. We’ve been through this not so long ago.
Remember the days when “Big Data” was still making most of the headlines on Techcrunch? Oh how those were simpler times…


Big Data vs. Artificial Intelligence search trends over the past 5 years

There were some major misconceptions as to what truly defined Big Data, especially within the context of the Enterprise. The media primarily focused on our favorite behemoths – Google, Facebook, Twitter – and their scaling troubles. Big Data became synonymous with petabytes and, more generally, unfathomably large volumes of data. However, scholars defined a classification that qualified data as “big” for 3 reasons:
– volume: massive amounts of data that required distributed systems for storage and processing
– velocity: quickly changing data sets, such as product browsing, which meant offline/batch processing needed an alternative
– variety: data originating from disparate sources, which meant complex ERDs had to be maintained

In the Enterprise, volume was rarely the primary struggle. Velocity posed a few issues to large retailers and companies like RichRelevance nailed the execution of their solution. But the main and most challenging data issue faced was with the variety of data.

What will make Salesforce Einstein succeed

Einstein will enable startups to provide value to the Enterprise by focusing on the challenges of:
– feeding the right data to the platform
– defining a business playbook of ways to generate $$ out of model predictions

We’ll keep the second point for a later blog post, but to illustrate the first point with data, I put together an experiment using one of our customers’ datasets of leads and opportunities.
The goal was to evaluate different ways of building a lead scoring model: identifying patterns within the leads that indicated a high likelihood of converting to an opportunity. This is a B2B SaaS company selling to other B2B companies with a $30k ACV.
I ran an out-of-the-box logistic regression on top of the usual suspects: company size, industry, geography and Alexa rank. For good measure we added a fancy tech-count feature, which looked at the number of technologies that could be found on the lead’s website. With only about 500 opportunities to work with, there was a clear worry about overfitting with more features, especially since we had to dummy-encode the categorical variables.
Here’s how the regression performed on the training data (70% of the dataset) vs. the test dataset (the remaining 30%, held out so that no company appearing in training also appears in testing – we did not fool around with this test):

Model performance on the test dataset using the available firmographic and technographic features

Not bad, right?! There is a clear overfitting issue, but the performance is not dreadful, apart from a blob in the center.

Now we ran the same logistic regression against 2 features: predicted number of tracked users (which we know to be highly tied to the value of the product) and predicted revenue. These features are the result of predictive models that we run against a much larger data set, taking into account firmographics (Alexa rank, business model, company size, market segment, industry…) along with technographics (types of technologies used, number of enterprise technologies…) and custom data points. Here’s how the regression performed:

Model performance on the test dataset using 2 MadKudu features

Quite impressive to see how much better the model performs with fewer features. At the same time, as you can see, we run less risk of overfitting.

The TL;DR is that no amount of algorithmic brute force applied to these B2B data sets will ever make up for appropriate data preparation.
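For readers who want to reproduce the shape of the experiment above, here’s a sketch on synthetic data: a logistic regression with a 70/30 split that keeps each company entirely on one side of the split. The features and labels are random stand-ins, not the customer’s data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the dummied firmographic features
# (company size, industry, geography, Alexa rank, tech count).
X = rng.normal(size=(n, 5))
# Synthetic conversion label driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
# ~200 distinct companies; several leads can share a company.
companies = rng.integers(0, 200, size=n)

# 70/30 split that never puts the same company in both train and test.
splitter = GroupShuffleSplit(n_splits=1, train_size=0.7, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=companies))

model = LogisticRegression().fit(X[train_idx], y[train_idx])
auc = roc_auc_score(y[test_idx], model.predict_proba(X[test_idx])[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

The `GroupShuffleSplit` grouping is the important detail: without it, leads from the same company leak across the split and inflate the test score.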

In essence, Salesforce is outsourcing the data science part of building AI driven sales models to startups who will specialize in verticals and/or use-cases. MadKudu is a perfect illustration of this trend. The expert knowledge we’ve accumulated by working with hundreds of B2B SaaS companies is what has enabled us to define these smart features that make lead scoring implementations successful.

So there you have it, MadKudu needs Salesforce to focus on its core value and Salesforce needs MadKudu to make its customers and therefore Einstein successful. That’s the beauty of a platform business model.
I also strongly believe that in the near future there will be a strong need for a “training dataset” marketplace. As more platforms make ML/AI functionality available, being able to train them out of the box will become an important problem to solve. These training datasets will contain a lot of expert knowledge and be the result of heavy data lifting.

Feel free to reach out to learn more

Images:
www.salesforce.com
Google trends
MadKudu demo Jam 3

PS: To be perfectly clear, we are not dissing IBM’s technology, which is state of the art. We are arguing that out-of-the-box AI has been overhyped in the Enterprise and that project implementation costs have been underestimated due to a lack of transparency about the complexity of configuring such platforms.

Are Automation and AI BS?

A couple of weeks ago, I ended up taking Steli’s click bait and read his thoughts on sales automation and AI. There isn’t much novelty in the comments or objections presented. However, I felt compelled to write an answer. Part of the reason why is that MadKudu is currently being incubated by Salesforce as part of the Einstein batch. Needless to say, the word AI is uttered every day, to the point of exhaustion.

The mythical AI (aka what AI is not today)

The main concern I have around AI is that people are being confused by all the PR and marketing thrown around major projects like Salesforce’s Einstein, IBM’s Watson and others – think Infosys Nia, Tata Ignio, Maana.io; the list goes on.

Two months ago, at the start of the incubator, we were given a truly inspiring demo of Salesforce’s new platform. The use-case presented was to help a solar panel vendor identify the right B2C leads to reach out to – a fairly vanilla lead scoring exercise. We watched in awe as the CRM was fed Google Street View images of houses based on the leads’ addresses, which were then processed through a “sophisticated” neural network to determine whether the roof was slanted. Knowing if the roof was slanted was a key predictor of the amount of energy the panels could deliver. #DeepLearning

This reminded me of a use-case we discussed with Segment’s Guillaume Cabane. The growth-hack was to send addresses of VIP customers through Amazon’s mechanical turk to determine which houses had a pool in order to send a targeted catalogue about pool furniture. Brilliant! And now this can all be orchestrated within the comfort of our CRM. Holy Moly! as my cofounder Sam would say.

To infinity and beyond, right?

Well, not really. The cold truth is that this could also have been implemented in Excel. Jonathan Serfaty, a former colleague of mine, for example, wrote a play-by-play NFL prediction algorithm entirely in VBA. The hard part is not running a supervised model; it’s the numerous iterations required to explore the unknowns of the problem and determine which data set to present to the model.
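To make the “running a supervised model is the easy part” claim concrete, here is a toy logistic regression trained by plain gradient descent in pure Python. The data is synthetic and the whole thing is a sketch, not the NFL model or anything Salesforce ships; the point is how few lines the modeling itself takes once the data set has been decided.

```python
import math
import random

random.seed(42)

# Synthetic, separable data: label is 1 whenever x1 + x2 > 1
X = [(random.random(), random.random()) for _ in range(200)]
y = [1 if x1 + x2 > 1 else 0 for x1, x2 in X]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on the log-loss
w1 = w2 = b = 0.0
lr = 0.5
for _ in range(500):
    for (x1, x2), label in zip(X, y):
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - label
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

accuracy = sum(
    (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == bool(label)
    for (x1, x2), label in zip(X, y)
) / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

Twenty-odd lines, no framework. Everything that makes a real project hard – choosing the features, cleaning the data, iterating on what to feed the model – happens before this code runs.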

The pragmatic AI (aka how to get value from AI)

Aside from the complexity of knowing how to configure your supervised model, there is a more fundamental question to answer when considering AI: the purpose of the endeavor. What are you trying to accomplish with AI and/or automation? Among all of the imperfections in your business processes, which one is the best candidate to address?

Looking through history to find patterns, it appears that the obvious candidates for automation/AI are high cost, low leverage tasks. This is a point Steli and I are in agreement on: “AI should not be used to increase efficiency”. Much ink has been spilled over the search for efficiency. Henry Ward’s eShares 101 is an overall amazing read and highly relevant. One of the topics that strongly resonated with me was the illustrated difference between optimizing for efficiency vs leverage.

With that in mind, here are some examples of tasks that are perfect fits for AI in Sales:

  • Researching and qualifying
  • Email response classification (interested, not interested, not now…)
  • Email sentiment classification
  • Email follow up (to an email that had some valuable content in the first place)
  • Intent prediction
  • Forecasting
  • Demo customization to the prospect
  • Sales call reviews

So Steli is right: no, a bot will not close a deal for you. But it can tell you who to reach out to, how, why and when, so you can spend your time on the tasks where you have the highest leverage: interacting with valuable prospects and helping them through the purchase cycle. While the recent advent of sales automation has led to an outcry against weak, gimmicky personalization, I strongly believe we are witnessing the early signs of AI being used to bring back the human aspect of selling.

Closing thoughts

AI, Big Data, Data Science, Machine Learning… have become ubiquitous in B2B. It is therefore our duty as professionals to educate ourselves about what is really going on. These domains are nascent and highly technical, but we need to maintain an uncompromising focus on the business value any implementation could yield.

Want to learn more or discuss how AI can actually help your business? Feel free to contact us

3 steps to determine the key activation event

Most people by now have heard of the “product key activation event”. Facebook’s 7 friends in the first 10 days, Twitter’s 30 followers… get lots of mentions in the Product and Growth communities. These examples have helped cement the idea of statistically determining goals for the onboarding of new users. A few weeks ago, somebody from the Reforge network asked how to actually define this goal, and I felt compelled to dive deeper into the matter.

I love this topic, and while there have already been some solid answers on Quora by the likes of Uber’s Andrew Chen and AppCues’ Ty Magnin, and while I have already written about this overarching concept a couple of weeks ago (here), I wanted to address a few additional, tactical details.

Below are the three steps to identify your product’s “key activation event”.

Step 1: Map your events against the Activation/Engagement/Delight framework

This is done by plotting the impact on conversion of performing and not performing an event in the first 30 days. This is the core of the content we addressed in our previous post.

To simplify, I will call “conversion” the ultimate event you are trying to optimize for. Agreeing on this metric in the first place can be a challenge in itself…

Step 2: Find the “optimal” number of occurrences for each event

For each event, you’ll want to understand the required occurrence threshold (i.e. how many occurrences maximize the chances of success without hitting diminishing returns). Despite what many people believe and attempt, this is NOT done with a typical logistic regression. I’ll share a concrete example to show why.

Let’s look at the typical impact on conversion of performing an event Y times (or not) within the first X days:

There are two learnings we can extract from this analysis:
– the more often the event is performed, the more likely users are to convert (Eureka, right?!)
– the higher the occurrence threshold, the closer the conversion rate of people who didn’t reach it gets to the average conversion rate (this is the important part)

We therefore need a better way to correlate occurrences and conversion. This is where the Phi coefficient gets to shine!

Below is a quick set of Venn diagrams to illustrate what the Phi coefficient represents:

Using the Phi coefficient, we can find the number of occurrences that maximizes the difference in outcome thus maximizing the correlation strength:
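A minimal sketch of this threshold search, in pure Python: for one event, compute the phi coefficient between “performed the event at least k times” and “converted”, then pick the k that maximizes it. The `users` data set below is made up for illustration; repeat the same search per event to feed Step 3.

```python
import math

# (number of occurrences of the event, converted?) per user -- toy data
users = [
    (0, False), (0, False), (0, False), (1, False), (1, False),
    (2, False), (2, True), (3, True), (3, False), (4, True),
    (5, True), (5, True), (6, True), (7, True), (8, True),
]

def phi(threshold, users):
    # 2x2 contingency table: reached threshold (or not) x converted (or not)
    a = sum(1 for n, c in users if n >= threshold and c)      # reached & converted
    b = sum(1 for n, c in users if n >= threshold and not c)  # reached & not converted
    c_ = sum(1 for n, c in users if n < threshold and c)      # missed & converted
    d = sum(1 for n, c in users if n < threshold and not c)   # missed & not converted
    denom = math.sqrt((a + b) * (c_ + d) * (a + c_) * (b + d))
    return (a * d - b * c_) / denom if denom else 0.0

# Scan candidate thresholds and keep the one with the strongest correlation
best_k = max(range(1, 9), key=lambda k: phi(k, users))
print(best_k, round(phi(best_k, users), 2))  # → 4 0.76
```

Note how this differs from a regression: we are maximizing the correlation strength of a binary split, not the likelihood of conversion, which is exactly why the ever-increasing thresholds of the naive approach don’t win here.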

Step 3: Find the event for which “optimal” number of occurrences has the highest correlation strength

Now that we have, for each event, the ideal number of occurrences within a time frame, we can rank the events by their highest correlation strength. This gives us, for each time frame considered, the “key activation event”.

Closing Notes:

Because Data Science and Machine Learning are so sexy today, everyone wants to run regression modeling. Regression analyses are simple, interesting and fun. However, they lead to suboptimal results, since they maximize the likelihood of the outcome rather than the correlation strength.

Unfortunately, this is not necessarily a native capability of most analytics solutions, but you can easily dump all of your data into Redshift and run an analysis to mimic this approach. Alternatively, you can create funnels in Amplitude and feed the data into a spreadsheet to run the required cross-funnel calculations. Finally, you can always reach out to us.

Don’t be dogmatic! The results of these analyses are guidelines, and it is more important to pick one metric to move; otherwise you might spiral down into an analysis-paralysis state.

Analysis << Action
Remember, an analysis only exists to drive action. Ensure that the events you push through the analysis are actionable (don’t run this with “email opened”-type of events). You should always spend at least 10x more time on setting up the execution part of this “key activation event” than on the analysis itself. As a reminder, here are a couple “campaigns” you can derive from your analysis:

  • Create a behavioral onboarding drip (case study)
  • Close more delighted users by promoting your premium features
  • Close more delighted users by sending them winback campaigns after their trial (50% of SaaS conversions happen after the end of the trial)
  • Adapt your sales messaging to properly align with the user’s stage in the lifecycle and truly be helpful

Images:
– MadKudu Grader (2015)
– MadKudu “Happy Path” Analysis Demo Sample

The “Lean Startup” is killing growth experiments

Over the past few years, I’ve seen the “Lean Startup” grow to biblical proportions in Silicon Valley. It has introduced a lot of clever concepts that challenged the old way of doing business. Even enterprises such as GE, Intuit and Samsung are adopting the “minimum viable product” and “pivoting” methodologies to operate like high-growth startups. However, just like any dogma, the “Lean Startup”, when followed with blind faith, leads to a form of obscurantism that can wreak havoc.

Understanding “activation energy”

A few weeks ago, I was discussing implementing a growth experiment with Guillaume Cabane, Segment’s VP of Growth. He wanted to be able to pro-actively start a chat with Segment’s website visitors. We were discussing what the MVP for the scope of the experiment should be.

I like to think of growth experiments as chemical reactions, in particular when it comes to the activation energy. The activation energy is commonly used to describe the minimum energy required to start a chemical reaction.

The height of the “potential barrier” is the minimum amount of energy required to get the reaction to its next stable state.

In Growth, the MVP should always be defined to ensure the reactants can hit their next state. This requires some planning which at this stage sounds like the exact opposite of the Lean Startup’s preaching: “ship it, fix it”.

The ol’ and the new way of doing

Before Eric Ries’s best seller, the decades-old formula was to write a business plan, pitch it to investors/stakeholders, allocate resources, build a product, and try as hard as humanly possible to have it work. His new methodology prioritized experimentation over elaborate planning, customer exposure/feedback over intuition, and iterations over traditional “big design up front” development. The benefits of the framework are obvious:
– products are not built in a vacuum but rather exposed to customer feedback early in the development cycle
– time to shipping is low and the business model canvas provides a quick way to summarize hypotheses to be tested

However the fallacy that runs rampant nowadays is that under the pretense of swiftly shipping MVPs, we reduce the scope of experiments to the point where they can no longer reach the “potential barrier”. Experiments fail and growth teams get slowly stripped of resources (this will be the subject for another post).

Segment’s pro-active chat experiment

Guillaume is blessed with working alongside partners who are willing to provide the resources to ensure his growth experiments can surpass their potential barrier.

The setup for the pro-active chat is a perfect example of the amount of planning and thinking required before jumping into implementation. At the highest level, the idea was to:
1- enrich the visitor’s IP with firmographic data through Clearbit
2- score the visitor with MadKudu
3- based on the score decide if a pro-active sales chat should be prompted
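The three steps above can be sketched as orchestration logic. Everything below is a hypothetical stand-in – the function names, the fake lookup table and the threshold are not the real Clearbit, MadKudu or Drift APIs – but it shows the shape of the flow, including the exclusion rules discussed next:

```python
def enrich_ip(ip):
    """Stand-in for a Clearbit-style IP lookup: firmographics or None."""
    fake_directory = {"1.2.3.4": {"company": "Acme", "employees": 450}}
    return fake_directory.get(ip)

def score_visitor(firmographics):
    """Stand-in for a MadKudu-style fit score (0-100); bigger firms score higher."""
    return min(100, firmographics["employees"] // 10)

def should_prompt_chat(ip, customers, open_opportunities, threshold=40):
    """Decide whether to open a pro-active sales chat for this visitor."""
    firmographics = enrich_ip(ip)
    if firmographics is None:
        return False                                  # anonymous traffic: skip
    if firmographics["company"] in customers:
        return False                                  # already a customer
    if firmographics["company"] in open_opportunities:
        return False                                  # Sales is already engaged
    return score_visitor(firmographics) >= threshold  # only qualified visitors

print(should_prompt_chat("1.2.3.4", customers=set(), open_opportunities=set()))
```

The interesting part is not any individual call but the guards: without the scoring step and the customer/opportunity exclusions, every enriched visitor triggers a chat, which is exactly the failure mode described below.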

Seems pretty straightforward, right? As the adage goes “the devil is in the details” and below are a few aspects of the setup that were required to ensure the experiment could be a success:

  • Identify existing customers: the user experience would be terrible if Sales were pro-actively engaging with customers on the website as if they were leads
  • Identify active opportunities: similarly, companies that are actively in touch with Sales should not be candidates for the chat
  • Personalize the chat and make the message relevant enough that responding is truly appealing. This requires some dynamic elements to be passed to the chat

Because of my scientific background I like being convinced rather than persuaded of the value of each piece of the stack. In that spirit, Guillaume and I decided to run a test for a day of shutting down the MadKudu scoring. During that time, any visitor that Clearbit could find information for would be contacted through Drift’s chat.

The result was an utter disaster. The Sales team ran away from the chat as quickly as possible, and for good reason: about 90% of Segment’s traffic is not qualified for Sales, which means the team was submerged with unqualified chat messages…

This was particularly satisfying since it proved both assumptions that:
1- our scoring was a core component of the activation energy and that an MVP couldn’t fly without it
2- shipping too early – without all the components – would have killed the experiment

This experiment is now one of the top sources of qualified sales opportunities for Segment.

So what’s the alternative?

Moderation is the answer! Leverage the frameworks from the “Lean Startup” model with parsimony. Focus on predicting the activation energy required for your customers to get value from the experiment. Define your MVP based on that activation energy.

Going further, you can work on identifying “catalysts” that reduce the potential barrier for your experiment.

If you have any growth experiment you are thinking of running, please let us know. We’d love to help and share ideas!

Recommended resources:
https://hbr.org/2013/05/why-the-lean-start-up-changes-everything
https://hbr.org/2016/03/the-limits-of-the-lean-startup-method
https://venturebeat.com/2013/10/16/lean-startups-boo/
http://devguild.heavybit.com/demand-generation/?#personalization-at-scale

Images:
http://fakegrimlock.com/2014/04/secret-laws-of-startups-part-1-build-right-thing/
https://www.britannica.com/science/activation-energy
https://www.infoq.com/articles/lean-startup-killed
https://en.wikipedia.org/wiki/Activation_energy

Improve your behavioral lead scoring model with nuclear physics

According to various sources (SiriusDecisions, Spear Marketing), about 66% of B2B marketers leverage behavioral lead scoring. Nowadays we rarely encounter a marketing platform that doesn’t offer at least point-based scoring capabilities out of the box.

However, this report by Spear Marketing reveals that only 50% of those scores include an expiration scheme. A dire consequence is that once a lead has reached a certain engagement threshold, its score will never degrade. As the report puts it, “without some kind of score degradation method in place, lead scores can rise indefinitely, eventually rendering their value meaningless.” We’ve seen this at countless companies we’ve worked with, and it is often a source of contention between Sales and Marketing.

So how do you go about improving your lead scores to ensure your MQLs get accepted and converted by Sales at a higher rate?

Phase 1: Standard Lead scoring

In the words of James Baldwin, “If you know whence you came, there are absolutely no limitations to where you can go”. So let’s take a quick look at how lead scoring has evolved over the past couple of years.

Almost a decade ago, Marketo revolutionized the marketing stack by giving marketers the option to build heuristic engagement models without writing a single line of code. Amazing! A marketer, no coding skills required, could configure and iterate on a function that scored an entire database of millions of leads based on specific events they performed.

Since the introduction of these scoring models, many execution platforms have risen, and according to Forrester, scoring has long been standard functionality to expect when shopping for marketing platforms.

This was certainly a good start. The scoring mechanism had, however, two major drawbacks over which much ink has been spilled:

  • The scores don’t automatically decrease over time
  • The scores are based on coefficients that were not determined statistically and thus cannot be considered predictive

Phase 2: Regression Modeling

The recent advent of the Enterprise Data Scientist, formerly known by the less hyped title of Business Analyst, started a proliferation of lead scoring solutions. These products leverage machine learning techniques and AI to compensate for the previous models’ inaccuracies. The general idea is to solve for:

Y = ∑ 𝞫·X + 𝞮

Where:

Y is the representation of conversion
X are the occurrences of events
𝞫 are the predictive coefficients
𝞮 is the error term

 

So really the goal of lead scoring becomes finding the optimal 𝞫. There are many more or less sophisticated implementations of regression algorithms to solve for this, from linear regression to trees, to random forests to the infamous neural networks.
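To make “finding the optimal 𝞫” concrete, here is a toy sketch in pure Python: ordinary least squares on two synthetic event-count features, solved in closed form via the normal equations. Real lead scoring would use a classification model against a real event log; the data and coefficients below are made up purely to show the fitting step.

```python
import random

random.seed(1)
true_beta = (2.0, 0.5)  # the coefficients we hope the fit recovers

# X: occurrences of two events per lead; Y: a noisy linear "conversion propensity"
X = [(random.randint(0, 10), random.randint(0, 10)) for _ in range(500)]
Y = [true_beta[0] * x1 + true_beta[1] * x2 + random.gauss(0, 0.5) for x1, x2 in X]

# Normal equations for two features (no intercept):
#   [s11 s12] [b1]   [t1]
#   [s12 s22] [b2] = [t2]
s11 = sum(x1 * x1 for x1, _ in X)
s12 = sum(x1 * x2 for x1, x2 in X)
s22 = sum(x2 * x2 for _, x2 in X)
t1 = sum(x1 * y for (x1, _), y in zip(X, Y))
t2 = sum(x2 * y for (_, x2), y in zip(X, Y))

det = s11 * s22 - s12 * s12
b1 = (s22 * t1 - s12 * t2) / det
b2 = (s11 * t2 - s12 * t1) / det
print(round(b1, 2), round(b2, 2))  # both should land close to (2.0, 0.5)
```

Whether you solve this in closed form, by gradient descent, or with a random forest, the output is the same kind of object: a weight per event that replaces the hand-picked points of Phase 1.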

Mainstream marketing platforms like HubSpot are adding predictive capabilities on top of their manual lead scoring.

The goal here has become helping marketers configure their scoring models programmatically. Don’t we all prefer to blame a predictive model rather than a human who hand-picked coefficients?!

While this approach is greatly superior, there is still a major challenge that needs to be addressed:

  • Defining the impact of time on the scores

After how long does having “filled a form” become irrelevant for a lead? What is the “thermal inertia” of a lead, i.e. how quickly does a hot lead become cold?

Phase 3: Nuclear physics inspired time decay functions

I was on my way home some time ago when it struck me that there is a valid analogy between leads and nuclear physics – a subject in which my co-founder Paul holds a master’s degree from Berkeley (true story). The analogy goes as follows:
Before a lead starts engaging (or being engaged by) the company, it is a stable atom. Each action the lead performs (clicking on a CTA, filling a form, visiting a specific page) gives the lead energy, moving it further from its stable point. The nucleus of an unstable atom will start emitting radiation to shed the gained energy. This process, called nuclear decay, is quite well understood; the time taken to release the energy is characterized by the half-life (λ) of the atom. We can now compute, for each individual action, its impact on the lead over time and how long the effects last.

Putting all the pieces together we are now solving for:

Y = ∑ 𝞫·f(X)·e^(−t(X)/λ) + 𝞮

Where:

Y is still the representation of conversion
X are the events
f are the feature functions extracted from X
t(X) is the number of days since the last occurrence of X
𝞫 are the predictive coefficients
λ are the “half-lives” of the events in days
𝞮 is the error term

 

This approach yields better results (~15% increase in recall) and accounts very well for leads being reactivated or going cold over time.
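The decay term above can be sketched in a few lines of pure Python. The event names, 𝞫 weights and λ values below are illustrative stand-ins, not fitted coefficients; the point is how each event’s contribution fades following e^(−t/λ):

```python
import math

# Illustrative per-event weight (beta) and time constant in days (lam, the
# "half-life" λ of the formula above) -- values are made up
EVENTS = {
    "filled_form":     {"beta": 10.0, "lam": 3.0},
    "visited_pricing": {"beta": 6.0,  "lam": 7.0},
    "opened_email":    {"beta": 1.0,  "lam": 1.0},
}

def lead_score(days_since_last):
    """days_since_last maps event name -> days since its last occurrence."""
    return sum(
        EVENTS[event]["beta"] * math.exp(-days / EVENTS[event]["lam"])
        for event, days in days_since_last.items()
    )

hot = lead_score({"filled_form": 0, "visited_pricing": 1})     # recent activity
cold = lead_score({"filled_form": 30, "visited_pricing": 30})  # gone quiet
print(round(hot, 1), round(cold, 2))  # → 15.2 0.08
```

The same lead, with the same events in its history, scores near the maximum right after engaging and decays toward zero within a few half-lives – which is exactly the reactivation/cooling behavior that static point-based scores cannot capture.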

Top graph: linear features; bottom graph: features with exponential decay

 

Next time, we’ll discuss how, unlike Schrödinger’s cat, leads can’t be simultaneously good and bad…

 

Credits:
xkcd Relativistic Baseball: https://what-if.xkcd.com/1/
Marketo behavioral lead score: http://www.needtagger.com
Amplitude correlation analysis: http://tecnologia.mediosdemexico.com
HubSpot behavioral lead score: http://www.hubspot.com
MadKudu: lead score training sample results