Training Facebook to bid on your best leads

The Facebook Pixel is the gold standard of paid acquisition because of its powerful targeting AI. Retailers, for example, feed transactional data into Facebook’s AI to train its bidding engine. Facebook then optimizes bidding for the consumers who are most likely to buy from that retailer. This nearly instantaneous feedback loop enables fast iteration on paid acquisition strategies. You should never bid more on a lead than what that lead is worth to your business.

Facebook Pixel has some limitations, though, which can make it difficult for SaaS companies to fully leverage. First, Facebook only holds onto data from the past 28 days, which means purchase data from sales cycles longer than 28 days cannot be fed back into Facebook’s AI. Second, Facebook’s AI learns faster when events happen sooner. Together, these create a huge incentive for SaaS companies in particular to optimize towards an event higher in the funnel.

MadKudu’s AI is training Facebook’s AI.

This raises a bit of an issue for SaaS companies. Most are spraying and praying ad dollars: they bid low on a massive audience because they are unable to identify and optimize for high-quality leads.

Fast-growing SaaS companies like Drift are doing things differently. They feed MadKudu data into Facebook’s AI, enabling them to optimize bidding against leads that MadKudu would score high. In short, MadKudu’s AI is training Facebook’s AI.

Translating MadKudu data for Facebook

The goal is to feed transactional data to Facebook that it can use to optimize bidding against leads that we want. MadKudu’s predictive score identifies a lead’s value at the top of the funnel. We just need to capture that lead data as early as possible and send it in a way that Facebook understands.

There are two main attributes Facebook is looking for to train its AI – an individual and their “value.” For eCommerce, that typically means feeding a purchase back to Facebook; for SaaS, we need to adapt our notion of value a bit.

MadKudu is good at predicting the amount that a lead will spend based on historical deal data. This helps us differentiate between self-serve and enterprise leads, for example. Of course, not all leads will convert (even the very good ones), so in order to create our predicted value to send back to Facebook, we can adjust the predicted spend by the likelihood to convert (two variables MadKudu generates natively for all leads). The result is the following:

Lead Value = % Likelihood to Convert × Predicted Spend

If a lead has a 10% chance to convert to $30,000 in ARR, we can send Facebook a “transaction” worth $3,000 as soon as the lead is generated. Now we can send data to Facebook almost immediately to train its model. We’re training Facebook’s AI to value the same types of leads that we value internally using MadKudu.
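As a sketch, the math and the kind of event we would feed back can be expressed in a few lines of Python. The function and the payload field names below are illustrative assumptions, not MadKudu’s API or Facebook’s exact event schema:

```python
def lead_value(likelihood_to_convert: float, predicted_spend: float) -> float:
    """Lead Value = likelihood to convert x predicted spend."""
    return likelihood_to_convert * predicted_spend

# A lead with a 10% chance of converting to $30,000 in ARR:
value = lead_value(0.10, 30_000)  # 3000.0

# The "transaction" fed back to the Pixel (e.g. via Segment) would look
# something like this illustrative payload:
purchase_event = {
    "event": "Purchase",
    "properties": {"value": value, "currency": "USD"},
}
```

The point is simply that an expected value, not an actual purchase, is what gets reported as the transaction.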

The easiest way to capture lead information with MadKudu is to use MadKudu FastLane. It’s a simple line of JavaScript that turns any lead form into a dynamic, customer-fit-driven lead capture device. The same mechanism that helps Drift convert more leads into demo calls is training Facebook’s AI. Not bad.

The Impact: 300%

For Drift, the impact was clear and immediate. A 300% increase in conversion from Facebook spend means reaching a larger audience for cheaper. With MadKudu FastLane sending transactional data back to the Facebook Pixel (via Segment), Drift enables Facebook to spend only on leads that MadKudu will score well – and Drift already knows those scores are a strong predictor of conversion to customers.

By building its growth & marketing foundation on top of MadKudu as a unified metric for predicted success, Drift is able to extend MadKudu to its paid acquisition by leveraging our API and our many integrations. Connecting Facebook Pixel and MadKudu to Segment takes minutes; afterwards, Drift can easily pipe MadKudu data to the Facebook Pixel in real time.

Get in touch to learn more here.

Building a Shadow Funnel

Marketing is becoming an engineer’s game. Marketing tools come with Zapier integrations, webhooks and APIs. Growth engineers finely tune their funnel, each new experiment – an ebook, a webinar, ad copy or a free tool – plugging into or improving upon the funnel.

Growth engineers fill the top of their funnel by targeting prospects who look like a good fit for their product but haven’t engaged yet. Guillaume Cabane, VP Growth at Drift, has been sharing his experiments leveraging intent data for years. Intent data allows Guillaume to discern the intentions of potential buyers by providing key data points about what they are doing or thinking about doing.

A quick review of the three main categories of Intent Data

  • Behavioral Intent: This includes 1st-party review sites like G2Crowd, Capterra & GetApp, as well as Bombora, which aggregates data from industry publications & analysts. They provide Drift with data about which companies are researching their industry, their competitors, or Drift directly. (e.g., “Liam from MadKudu viewed Drift’s G2Crowd page”)
  • Technographics: Datanyze, HGData & DemandMatrix provide data about companies that are installing & uninstalling technologies, tools or vendors. (e.g., “MadKudu uninstalled Drift 30 days ago”)
  • Firmographics: Clearbit, Zoominfo & DiscoverOrg offer data enrichment tools starting from a website domain or email, providing everything from headquarters location to employee count.

In a standard buyer journey, the right message and medium depends on where a prospect is in the funnel:

  • Awareness: do they know about the problem you solve?
  • Consideration: are they evaluating how to solve a problem?
  • Decision: are they evaluating whether to use you to solve their problem?

Drift began looking at whether we could help them determine the next best action for every prospect and account in their total addressable market (TAM). TAM can be calculated as the sum of all qualified prospects who have engaged with you (MQLs) + all qualified prospects who have not engaged with you.

TAM = MQLs + SMQLs

I’ll call the latter Shadow MQLs (SMQLs), more precisely defined as any prospect that is showing engagement in your industry or in one of your competitors, but not you.

Drift already leveraged MadKudu to determine when & how to engage with MQLs in their funnel, but they needed to automate the next best action for SMQLs. Should a salesperson call them? Or should Drift send them a personalized gift through Sendoso?

Our strategy for determining the next best action involved mapping intent data to the standard buyer journey stages. By doing this, we could build what I call a Shadow Funnel.

For this experiment, we focused on four intent data providers:

  1. G2Crowd: a review site that helps buyers find the perfect solution for their needs. They send Drift data about who is looking at their category (live chat) or Drift’s page.
  2. SEMRush: a tool that provides information about the paid marketing budget of accounts.
  3. Datanyze: gives us information about which technologies are being used on websites.
  4. (Clearbit) Reveal: tells us which accounts are visiting our website.

In order to build our shadow funnel, we need to define Shadow stages of the buyer journey:

  • Awareness: understands the industry you operate in.
  • Consideration: looking at specific vendors (not you).
  • Decision: evaluating specific vendors (not you).

MadKudu’s role in this funnel is to determine whether the SMQL is showing High, Medium, or Low predicted conversion. Here is a table illustrating the data points we mapped to each stage & fit level:

By matching Datanyze & G2Crowd data, for example, Drift can identify accounts who have uninstalled one of Drift’s competitors in the past 30 days and have begun researching the competition. Without ever visiting a Drift property (which would, in turn, enter them into Drift’s real funnel), MadKudu predicts a high probability that this account is in the process of considering a new solution in their space.
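That stage mapping can be sketched as a simple lookup. The signal names and stage assignments below are illustrative examples only, not Drift’s actual table:

```python
# Hypothetical intent signals mapped to Shadow Funnel stages (illustrative).
SHADOW_STAGE_SIGNALS = {
    "shadow_awareness": [
        "bombora_industry_surge",           # researching the category
    ],
    "shadow_consideration": [
        "g2crowd_category_pageview",        # browsing the live-chat category
        "semrush_paid_budget_detected",
    ],
    "shadow_decision": [
        "g2crowd_competitor_pageview",      # evaluating a specific vendor
        "datanyze_competitor_uninstalled",  # churned off a competitor recently
    ],
}

def shadow_stage(signals: set) -> str:
    """Return the deepest shadow stage the observed signals support."""
    for stage in ("shadow_decision", "shadow_consideration", "shadow_awareness"):
        if signals & set(SHADOW_STAGE_SIGNALS[stage]):
            return stage
    return "unknown"
```

An account that just uninstalled a competitor would land in the deepest stage, shadow decision, even with other weaker signals present.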

With a traditional funnel, the goal is to fill it and optimize for conversion down-funnel. Awareness campaigns drive traffic, acquisition campaigns drive email capture, and conversion campaigns increase sales velocity & conversion.

The goal of the Shadow Funnel is the opposite. Drift wants the funnel to be empty and to have everyone who is in it churn out.

Rephrasing our previous TAM equation, we can state the following:

TAM = Funnel + Shadow Funnel

Anyone who is in your TAM that isn’t in your funnel is in your Shadow Funnel, and anyone who is in your TAM that isn’t in your Shadow Funnel is therefore in your Funnel.
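In other words, the Shadow Funnel is just the complement of the funnel within the TAM. A minimal sketch, with made-up account names:

```python
# The TAM identity as plain set arithmetic; account names are made up.
tam = {"acme", "globex", "initech", "umbrella"}
funnel = {"acme", "globex"}       # qualified prospects who have engaged (MQLs)

shadow_funnel = tam - funnel      # everyone else in the TAM (SMQLs)

# TAM = Funnel + Shadow Funnel, and the two never overlap:
assert funnel | shadow_funnel == tam
assert funnel & shadow_funnel == set()
```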

The goal then becomes to move horizontally:

  • We want Shadow prospects to move from Shadow Aware (i.e., aware of the industry) to Aware (of you).
  • We want prospects at the Shadow Decision stage (i.e., deciding which tool to use, just not yours) to move to the Decision stage (i.e., deciding whether or not to use you).
  • And so on.

Once you know where your target audience is in the buying process, you can deliver targeted messaging to pull them from the Shadow Funnel into your funnel.

Next Steps: evaluating intent as predictive behavior.

For now, the Shadow Funnel is a proof of concept. Through this method, Drift identified 1,000+ new qualified accounts to engage with. Once we have some historical data to play with, our next step will be to build a model to determine which intent data sources are best at predicting Shadow Funnel conversion. We’ll also want to look at which engagement methods show the most promise.

Can the same engagement tactics that work on the traditional funnel work on the Shadow Funnel? Does the thought-leadership retargeting ad on LinkedIn have the same impact if an account has never engaged with you before? Does looking at a category on G2Crowd reliably predict whether an account is considering your product?

We are excited to continue exploring this with Drift and other SaaS companies, leveraging intent data to engage qualified prospects who need their product before those prospects engage with them. This is a natural evolution of the B2C strategies that eCommerce & travel companies have employed in recent years, tailored towards helping companies looking for answers get those answers faster.

We’ll be talking more about this strategy with Drift & Segment on our upcoming webinar here.

How we use Zapier to score Mailchimp subscribers

There’s no better way to get your story out there than to create engaging content with which your target audience identifies. At MadKudu, we love sharing data-driven insights and learnings from our experience working with Marketing Operations professionals, which has allowed us to take the value we strive to bring our customers every day and make it available to the marketing ops community as a whole.

As interest in our content has grown, it was only natural that we leverage Zapier in order to quickly understand who was signing up and whether we should take the relationship to the next level.

Zapier is a great way for SaaS companies like us to quickly build automated workflows around the tools we already use, to make sure our customers have a frictionless, relevant journey. We don’t want to push every Mailchimp subscriber to Salesforce, because not only would that create a heap of contacts that aren’t sales-ready, but we might end up inadvertently reaching out to contacts who don’t need MadKudu yet, giving potential customers a negative first impression of us.

Today we can see which newsletter signups sales should be paying attention to. Here’s how:

Step 1: Scoring new newsletter subscribers

The first step is to make sure you grab all new subscribers. Zapier makes that super easy with their Mailchimp integration.

Next, we want to send those new subscribers to MadKudu to be analyzed. While MadKudu customers have a dedicated MadKudu integration, Zapier users who aren’t MadKudu customers can also leverage Zapier’s native Lead Score app, which is (you guessed it) powered by MadKudu.

Step 2: Filter by Lead Score

We’ve already got our MadKudu score configured, so after feeding a new subscriber to MadKudu, we run a quick filter to make sure we only act when the Lead Score is “good” or “very good.”
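The filter step’s logic amounts to a one-line check. The field name below is an assumption about how the Zap exposes the score, for illustration:

```python
# Only continue the Zap when the lead score quality is "good" or "very good".
PASSING_SCORES = {"good", "very good"}

def passes_filter(lead: dict) -> bool:
    # Zapier filter values are case-insensitive text comparisons here.
    return lead.get("lead_score_quality", "").lower() in PASSING_SCORES
```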

If you’re worried that the bar will filter out potentially interesting leads, consider this a confidence test of your lead score.

Zapier Filtering by Lead Score Quality

Step 3: Take Action, Communicate!

For Mailchimp signups that pass our Lead Score filter, we next leverage the Salesforce integration in Zapier to either find the existing contact inside Salesforce (they may already be there) or create a new lead. Salesforce has made this very easy to do with the “Find or Create Lead” action in Zapier.

Once we’ve synced our Mailchimp lead to Salesforce, we use the Slack integration in Zapier to broadcast everything we’ve created so far to a dedicated #notif-madkudu channel, which collects all the quality leads coming from all of our lead generation channels.

Directly inside Slack, our team can get actionable insights:

  • The MadKudu score, represented as 3 stars (normal stars for Good, twinkling for Very Good)
  • The signals that MadKudu identified in this lead, both positive and negative
  • A link to the lead in Salesforce, for anyone who wants to take action/review
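A sketch of how such a notification could be assembled. The emoji codes and lead field names are assumptions, since the real Zap is configured in Zapier’s Slack action rather than in code:

```python
def slack_message(lead: dict) -> str:
    """Build the #notif-madkudu notification text for one qualified lead."""
    # 3 stars: twinkling for "very good", normal for "good"
    stars = ":star2:" * 3 if lead["score"] == "very good" else ":star:" * 3
    signals = "\n".join(f"• {s}" for s in lead["signals"])
    return (
        f"{stars} {lead['email']}\n"
        f"{signals}\n"
        f"<{lead['salesforce_url']}|View in Salesforce>"
    )
```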

Actionable Lead Scoring applied to your Newsletter

Our goal here isn’t to reach out to newsletter subscribers right away – we want to build a long-term relationship with them, and we’re happy to keep delivering quality content until they’re ready to talk about actionable lead scoring. What we’re able to do is see, qualitatively & quantitatively, how many of our newsletter subscribers are a good fit for MadKudu today.

This helps marketing & sales stay aligned on the same goal. Marketing is measuring newsletter growth with the same metric it’s using to measure SQL generation.

3 steps to determine the key activation event

Most people by now have heard of the “key activation event”. Facebook’s 7 friends in the first 10 days, Twitter’s 30 followers… these get lots of mentions in the Product and Growth communities. These examples have helped cement the idea of statistically determining goals for the onboarding of new users. A few weeks ago, somebody from the Reforge network asked how to actually define this goal, and I felt compelled to dive deeper into the matter.

I love this topic, and while there have already been some solid answers on Quora by the likes of Uber’s Andrew Chen and AppCues’ Ty Magnin, and while I have already written about this overarching concept a couple weeks ago (here), I wanted to address a few additional, tactical details.

Below are the three steps to identify your product’s “key activation event”.

Step 1: Map your events against the Activation/Engagement/Delight framework

This is done by plotting the impact on conversion of performing and not performing an event in the first 30 days. This is the core of the content we addressed in our previous post.

To simplify, I will call “conversion” the ultimate event you are trying to optimize for. Agreeing on this metric in the first place can be a challenge in itself…

Step 2: Find the “optimal” number of occurrences for each event

For each event, you’ll want to understand the required occurrence threshold (i.e., how many occurrences maximize your chances of success without hitting diminishing returns). This is NOT done with a typical logistic regression, even though many people try it and believe it works. I’ll share a concrete example to show why.

Let’s look at the typical impact on conversion of performing an event Y times (or not) within the first X days:

There are 2 learnings we can extract from this analysis:
– the more often the event is performed, the more likely users are to convert (Eureka, right?!)
– the higher the occurrence threshold, the closer the conversion rate of people who didn’t reach it gets to the average conversion rate (this is the important part)

We therefore need a better way to correlate occurrences and conversion. This is where the Phi coefficient comes into play!

Below is a quick set of Venn diagrams to illustrate what the Phi coefficient represents:

Using the Phi coefficient, we can find the number of occurrences that maximizes the difference in outcomes, and thus the correlation strength:
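As a minimal sketch of this threshold search in Python: for each candidate threshold we build the 2x2 table (performed at least k times vs. converted) and keep the threshold with the strongest Phi. The data shape is an assumption for illustration:

```python
import math

def phi(n11, n10, n01, n00):
    """Phi coefficient for a 2x2 contingency table.
    n11: performed >= k times and converted; n10: performed, didn't convert;
    n01: didn't perform, converted; n00: neither."""
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

def best_threshold(users, max_k=10):
    """users: list of (occurrence_count, converted) pairs.
    Returns the (k, phi) pair maximizing correlation strength."""
    best = (0, 0.0)
    for k in range(1, max_k + 1):
        n11 = sum(1 for c, conv in users if c >= k and conv)
        n10 = sum(1 for c, conv in users if c >= k and not conv)
        n01 = sum(1 for c, conv in users if c < k and conv)
        n00 = sum(1 for c, conv in users if c < k and not conv)
        score = phi(n11, n10, n01, n00)
        if score > best[1]:
            best = (k, score)
    return best
```

Unlike a regression fit, this directly rewards thresholds that split converters from non-converters most cleanly.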

Step 3: Find the event whose “optimal” number of occurrences has the highest correlation strength

Now that we have our ideal number of occurrences within a time frame for each event, we can rank events by their correlation strength. This gives us, for each time frame considered, the “key activation event”.
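With those (threshold, Phi) pairs in hand, picking the key activation event is a one-liner. The event names and numbers below are made up for illustration:

```python
# Hypothetical per-event results from Step 2: (optimal occurrences, Phi).
optimal_by_event = {
    "created_project":  (1, 0.31),
    "invited_teammate": (3, 0.58),
    "sent_message":     (7, 0.42),
}

# The key activation event is the one whose optimal threshold has the
# strongest correlation with conversion.
key_event = max(optimal_by_event, key=lambda e: optimal_by_event[e][1])
threshold, strength = optimal_by_event[key_event]
```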

Closing Notes:

Because Data Science and Machine Learning are so sexy today, everyone wants to run regression modeling. Regression analyses are simple, interesting and fun. However, they lead to suboptimal results, as they maximize the likelihood of the outcome rather than correlation strength.

Unfortunately, this is not a native capability in most analytics solutions, but you can easily dump all of your data into Redshift and run an analysis to mimic this approach. Alternatively, you can create funnels in Amplitude and feed the data into a spreadsheet to run the required cross-funnel calculations. Finally, you can always reach out to us.

Don’t be dogmatic! The results of these analyses are guidelines, and it is more important to pick one metric to move; otherwise you might spiral down into an analysis-paralysis state.

Analysis << Action
Remember, an analysis only exists to drive action. Ensure that the events you push through the analysis are actionable (don’t run this with “email opened”-type events). You should always spend at least 10x more time on the execution around this “key activation event” than on the analysis itself. As a reminder, here are a couple of “campaigns” you can derive from your analysis:

  • Create a behavioral onboarding drip (case study)
  • Close more delighted users by promoting your premium features
  • Close more delighted users by sending them winback campaigns after their trial (50% of SaaS conversions happen after the end of the trial)
  • Adapt your sales messaging to properly align with the user’s stage in the lifecycle and truly be helpful

Images:
– MadKudu Grader (2015)
– MadKudu “Happy Path” Analysis Demo Sample

The “Lean Startup” is killing growth experiments

Over the past few years, I’ve seen the “Lean Startup” grow to biblical proportions in Silicon Valley. It has introduced a lot of clever concepts that challenged the old way of doing business. Even enterprises such as GE, Intuit and Samsung are adopting the “minimum viable product” and “pivoting” methodologies to operate like high-growth startups. However, just like any dogma, the “Lean Startup”, when followed with blind faith, leads to a form of obscurantism that can wreak havoc.

Understanding “activation energy”

A few weeks ago, I was discussing implementing a growth experiment with Guillaume Cabane, Segment’s VP of Growth. He wanted to be able to pro-actively start a chat with Segment’s website visitors. We were discussing what the MVP for the scope of the experiment should be.

I like to think of growth experiments as chemical reactions, in particular when it comes to the activation energy. The activation energy is commonly used to describe the minimum energy required to start a chemical reaction.

The height of this “potential barrier” is the minimum energy required to get the reaction to its next stable state.

In Growth, the MVP should always be defined to ensure the reactants can hit their next state. This requires some planning, which at this stage sounds like the exact opposite of the Lean Startup’s preaching: “ship it, fix it”.

The old and the new way of doing things

Before Eric Ries’s best seller, the decades-old formula was to write a business plan, pitch it to investors/stakeholders, allocate resources, build a product, and try as hard as humanly possible to have it work. His new methodology prioritized experimentation over elaborate planning, customer exposure/feedback over intuition, and iterations over traditional “big design up front” development. The benefits of the framework are obvious:
– products are not built in a vacuum but rather exposed to customer feedback early in the development cycle
– time to shipping is low and the business model canvas provides a quick way to summarize hypotheses to be tested

However, the fallacy that runs rampant nowadays is that, under the pretense of swiftly shipping MVPs, we reduce the scope of experiments to the point where they can no longer reach the “potential barrier”. Experiments fail, and growth teams slowly get stripped of resources (this will be the subject of another post).

Segment’s pro-active chat experiment

Guillaume is blessed with working alongside partners who are willing to provide the resources that ensure his growth experiments can surpass their potential barrier.

The setup for the pro-active chat is a perfect example of the amount of planning and thinking required before jumping into implementation. At the highest level, the idea was to:
1- enrich the visitor’s IP with firmographic data through Clearbit
2- score the visitor with MadKudu
3- based on the score decide if a pro-active sales chat should be prompted
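A sketch of that decision flow, with the Clearbit, MadKudu, and CRM lookups stubbed out as injected callables (the real setup hits their APIs):

```python
def should_prompt_chat(ip, enrich, score, is_customer, has_open_opportunity):
    """Decide whether to open a pro-active sales chat for a visitor."""
    company = enrich(ip)                    # 1- Clearbit firmographics
    if company is None:
        return False                        # anonymous traffic: no chat
    if is_customer(company) or has_open_opportunity(company):
        return False                        # never prospect existing relationships
    # 2- & 3- the MadKudu score gates the chat prompt
    return score(company) in {"good", "very good"}
```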

Seems pretty straightforward, right? As the adage goes “the devil is in the details” and below are a few aspects of the setup that were required to ensure the experiment could be a success:

  • Identify existing customers: the user experience would be terrible if Sales was pro-actively engaging with customers on the website as if they were leads
  • Identify active opportunities: similarly, companies that are actively in touch with Sales should not be candidates for the chat
  • Personalize the chat and make the message relevant enough that responding is truly appealing. This requires some dynamic elements to be passed to the chat

Because of my scientific background, I like being convinced rather than persuaded of the value of each piece of the stack. In that spirit, Guillaume and I decided to run a one-day test of shutting down the MadKudu scoring. During that time, any visitor that Clearbit could find information on would be contacted through Drift’s chat.

The result was an utter disaster. The Sales team ran away from the chat as quickly as possible – and for good reason. About 90% of Segment’s traffic is not qualified for Sales, which means the team was flooded with unqualified chat messages…

This was particularly satisfying, since it proved both of our assumptions:
1- our scoring was a core component of the activation energy and that an MVP couldn’t fly without it
2- shipping too early – without all the components – would have killed the experiment

This experiment is now one of the top sources of qualified sales opportunities for Segment.

So what’s the alternative?

Moderation is the answer! Leverage the frameworks from the “Lean Startup” model with parsimony. Focus on predicting the activation energy required for your customers to get value from the experiment. Define your MVP based on that activation energy.

Going further, you can work on identifying “catalysts” that reduce the potential barrier for your experiment.

If you have any growth experiment you are thinking of running, please let us know. We’d love to help and share ideas!

Recommended resources:
https://hbr.org/2013/05/why-the-lean-start-up-changes-everything
https://hbr.org/2016/03/the-limits-of-the-lean-startup-method
https://venturebeat.com/2013/10/16/lean-startups-boo/
http://devguild.heavybit.com/demand-generation/?#personalization-at-scale

Images:
http://fakegrimlock.com/2014/04/secret-laws-of-startups-part-1-build-right-thing/
https://www.britannica.com/science/activation-energy
https://www.infoq.com/articles/lean-startup-killed
https://en.wikipedia.org/wiki/Activation_energy