
Free Sales Leads Trials and Activity Scoring: How to Do It Right

Redpoint Ventures partner Tomasz Tunguz recently published the results of his Free Trial Survey, which gathered responses from 590 professionals working at freemium SaaS businesses of all shapes and sizes. The survey had many interesting takeaways, and I recommend taking the time to dive into the slides Tunguz shared at SaaStr Annual this year. One of the more interesting findings (which Tunguz discussed on his blog) is that activity scoring seems to negatively impact free trial conversions for high ACV leads: Enterprise SaaS businesses using activity scoring see a 4% conversion rate for free trials vs. 15% for those not using activity scoring (slide: https://www.slideshare.net/ttunguz/top-10-learning-about-free-trials/38-QuestionActivityScoring_inEnterprise9).

MadKudu has written a lot about free trials in the past, including the article Tunguz referenced when launching his survey, so it was natural for us to weigh in on this conclusion. I asked Tunguz for a few clarifications while preparing this article:

  • The survey defined activity scoring as lead scoring that leverages in-app activity (not email marketing, webinar engagement or other activities).
  • The conversion rate (4% vs. 15%) is calculated against all leads, not just leads that scored well, so we're measuring the effectiveness of the funnel rather than the precision of the score itself (the sketch after this list makes the distinction concrete).
  • We’re only looking at leads who participate in the free trial, not leads that schedule a demo or otherwise enter the funnel.
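
To make the second clarification concrete, here is a minimal sketch of the two ways a conversion rate can be computed: against all trial leads, which is how the survey's 4% and 15% figures work, versus against only the leads an activity score flagged as good. The field names (is_trial, activity_score, converted) and the threshold are hypothetical, not from the survey itself.

```python
def funnel_conversion_rate(leads):
    """Conversions divided by ALL trial leads -- the survey's 4% / 15%
    figures are computed this way, so they measure the funnel."""
    trial_leads = [lead for lead in leads if lead["is_trial"]]
    converted = [lead for lead in trial_leads if lead["converted"]]
    return len(converted) / len(trial_leads) if trial_leads else 0.0


def score_precision(leads, threshold=80):
    """Conversions divided by the trial leads the activity score flagged
    as good -- this measures the score itself, not the funnel."""
    flagged = [
        lead for lead in leads
        if lead["is_trial"] and lead["activity_score"] >= threshold
    ]
    converted = [lead for lead in flagged if lead["converted"]]
    return len(converted) / len(flagged) if flagged else 0.0
```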

With that in mind, I believe there are two main takeaways and some data to support those conclusions.

Enterprise leads don't want a free trial

Summary: our data shows that enterprise leads prefer to schedule a demo while self-serve leads prefer a free trial (if one is available). Putting either in their counterpart's funnel negatively impacts their likelihood to convert.

Free trial products add enterprise buyer journey calls-to-action - "contact sales," "schedule a demo" and "request a quote" - in order to entice enterprise prospects. As Tunguz pointed out in the analysis of his survey, enterprise leads don't typically try before they buy. They may sign up for a free trial to get a feel for the interface and do some preliminary feature validation, but the buying process is more sophisticated than that and lasts longer than your free trial. One hypothesis for why activity scoring decreases conversion for enterprise leads in free trials is that enterprise leads shouldn't be running free trials - or at least, they shouldn't be having the same free trial experience. Tunguz's piece on assisted vs. unassisted free trials is worth reading to dive deeper into this subject.

Supporting this hypothesis is an experiment run by Segment & MadKudu looking at the impact of free trials and demo requests on the likelihood that self-serve and enterprise leads would convert. Segment "forced the funnel" by dynamically qualifying and segmenting website traffic, using predicted spend to classify visitors as self-serve or enterprise and personalizing the website accordingly: self-serve traffic would not see the option to schedule a demo, while enterprise traffic would not see the option to sign up for a trial. They also ran a control group to measure the impact on the funnel (a rough sketch of this setup follows below).

They found a negative correlation between self-serve conversion and requesting a demo, and a negative correlation between enterprise conversion and signing up for a free trial. Each buyer segment has an ideal customer journey, and deviating from it (even into another buyer segment's ideal journey) negatively impacts conversion. The converse is equally true: pushing leads into their ideal customer journey increases their conversion rate by 30%.

Startups using activity scoring on high ACV leads should work to get those leads out of the free trial by identifying them early on. Algolia, for example, prompts self-serve trial signups who have a high ACV to get in touch with someone for an assisted free trial.
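
For illustration, here is a rough sketch of what forcing the funnel can look like in code. The predict_segment helper, the domain list, and the holdout rate are hypothetical stand-ins for the MadKudu prediction and experiment design Segment actually used.

```python
import random


def predict_segment(company_domain: str) -> str:
    """Hypothetical stand-in for a firmographic spend prediction;
    in the real experiment this came from a predictive model."""
    enterprise_domains = {"bigcorp.example", "fortune500.example"}  # illustrative
    return "enterprise" if company_domain in enterprise_domains else "self_serve"


def choose_ctas(company_domain: str, holdout_rate: float = 0.1) -> list[str]:
    # A control group still sees both CTAs so the lift from forcing the
    # funnel can be measured against the default experience.
    if random.random() < holdout_rate:
        return ["start_free_trial", "request_demo"]
    if predict_segment(company_domain) == "enterprise":
        return ["request_demo"]        # hide the free trial for enterprise traffic
    return ["start_free_trial"]        # hide the demo CTA for self-serve traffic
```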

Scoring activity at the account level

For SaaS businesses that go up-market or sell exclusively to enterprise, activity scoring at the lead level may not be sufficient. We worked with InVision to identify sales opportunities at the account level, importing all activity data from HubSpot & Segment and merging it at the account level. We analyzed the impact that various user personas had on the buyer journey and product experience.
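
As a simplified illustration of that roll-up, the sketch below groups lead-level activity events into account-level aggregates. The event shape and the email-domain account key are assumptions; the real merge relied on HubSpot and Segment identifiers.

```python
from collections import defaultdict


def account_key(event: dict) -> str:
    # Simplification: treat the lead's email domain as the account identifier.
    return event["email"].split("@")[-1].lower()


def aggregate_account_activity(events: list[dict]) -> dict:
    """Roll lead-level events up into per-account activity aggregates."""
    accounts = defaultdict(lambda: {"events": 0, "active_leads": set(), "personas": set()})
    for event in events:
        account = accounts[account_key(event)]
        account["events"] += 1
        account["active_leads"].add(event["email"])
        account["personas"].add(event.get("persona", "unknown"))
    return dict(accounts)
```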

Profiles that were more likely to be active in the product - marketers, analysts & designers - had a below-average impact on the likelihood to convert. Personas associated with a higher likelihood to convert - directors, founders & CEOs - had a smaller impact on activation. Multiple personas are needed to create the optimal conditions for an account to activate and convert on InVision. Their marketing team uses this knowledge to focus on post-signup engagements that will increase the Likelihood to Buy, the behavioral score built by MadKudu.

We see similar findings in the buyer journey as we examine how various personas’ involvement in an account impacts opportunity creation vs. opportunity closed-won. Opportunities are more likely to be created when marketers & designers are involved, but they are more likely to close when CEOs & Directors get involved. For InVision, interestingly enough, founders have a smaller impact on opportunity closed-won than they do on product conversion.

While a single lead may never surpass the activity threshold that correlated with sales readiness at InVision, scoring activity at the account level surfaced accounts that exceeded the account-level activity threshold. Both thresholds were defined by MadKudu & InVision using the same data sources. The above slides are all from our HubSpot Inbound 2018 talk and are available here.
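
To illustrate why the account-level view surfaces accounts that lead-level scoring misses, here is a rough sketch built on the account shape from the previous snippet. The thresholds and persona requirements are hypothetical; InVision's actual values were derived from their own data by MadKudu.

```python
# Hypothetical thresholds: a single lead rarely crosses the lead-level bar,
# but an account with several active personas can cross the account-level bar.
LEAD_EVENT_THRESHOLD = 50
ACCOUNT_EVENT_THRESHOLD = 120

DECISION_MAKER_PERSONAS = {"director", "founder", "ceo"}


def account_is_sales_ready(account: dict) -> bool:
    # Require both enough aggregate activity and the persona mix associated
    # with conversion: active users plus at least one decision-maker.
    has_decision_maker = bool(account["personas"] & DECISION_MAKER_PERSONAS)
    return account["events"] >= ACCOUNT_EVENT_THRESHOLD and has_decision_maker
```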

Measuring scoring model effectiveness

Looking at the results of experiments run with our customers and the data from Tunguz’s survey, it’s clear that activity scoring doesn’t work in a vacuum. Both our MQA model for InVision & our models for Segment require firmographic, technographic and intent data in combination with behavioral data in order to build a predictive model.

The impact that a model has on sales efficiency and velocity depends on its ability to identify the X% of leads that represent Y% of outcomes. The power of this function increases as X tends toward 0 and Y tends toward 100. "Outcomes" can represent opportunities created, opportunities won, pipeline created, or revenue, depending on the metric your sales strategy is optimizing for. We similarly expect that the X% of leads will convert at a significantly higher rate than lower-quality leads. As seen in the above graphic, a very good lead may be 17x as likely to convert as a low-quality lead, which makes a strong case for sales teams prioritizing very good leads as defined by their predictive model - at least if they want to hit quota this quarter.
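
Here is a small sketch of that X% / Y% framing, assuming each lead record carries a model score and a conversion flag: rank leads by score, take the top slice, and measure both its share of outcomes and its conversion lift over the rest.

```python
def top_segment_performance(leads: list[dict], top_fraction: float = 0.1):
    """Return (share of outcomes captured by the top X% of leads,
    conversion lift of the top X% over the remaining leads)."""
    ranked = sorted(leads, key=lambda lead: lead["score"], reverse=True)
    cut = max(1, int(len(ranked) * top_fraction))
    top, rest = ranked[:cut], ranked[cut:]

    total_outcomes = sum(lead["converted"] for lead in ranked) or 1
    share_of_outcomes = sum(lead["converted"] for lead in top) / total_outcomes

    top_rate = sum(lead["converted"] for lead in top) / len(top)
    rest_rate = sum(lead["converted"] for lead in rest) / len(rest) if rest else 0.0
    lift = top_rate / rest_rate if rest_rate else float("inf")
    return share_of_outcomes, lift
```

A strong model pushes share_of_outcomes toward 1.0 as top_fraction shrinks, and produces a large lift - on the order of the 17x figure mentioned above.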

If you're selling exclusively to enterprise leads, an assisted free trial operates a lot like a schedule-a-demo flow: you score leads firmographically early on, evaluate the opportunity, and then assist them in onboarding to your product for the trial, scoring activity throughout to evaluate the likelihood to convert. Most SaaS businesses, however, don't sell exclusively to enterprise yet still offer free trials to high ACV leads, which is why activity scoring becomes crucial. A lead that is a great fit for self-service is generally a bad fit for enterprise. Self-service leads convert quickly and consistently at a small-to-medium ACV, whereas enterprise leads have a small chance of converting but convert at a much higher ACV. Velocity sales requires a steady stream of quick conversions - a long sales cycle for a low ACV is a loss - while enterprise sales can take months or years to close a single account and still be a success. For customers with both velocity and enterprise leads, MadKudu scores every lead against both models, which lets us identify whether a lead is a good fit for self-serve, enterprise, both, or neither (it's almost never both).
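
A minimal sketch of that dual-model routing, assuming two scoring functions that each return a fit probability (the threshold value is illustrative, not MadKudu's actual cutoff):

```python
def route_lead(lead: dict, self_serve_model, enterprise_model, threshold: float = 0.7) -> str:
    """Score one lead against both fit models and route it accordingly."""
    good_self_serve = self_serve_model(lead) >= threshold
    good_enterprise = enterprise_model(lead) >= threshold

    if good_enterprise and not good_self_serve:
        return "enterprise"      # route to sales / an assisted trial
    if good_self_serve and not good_enterprise:
        return "self_serve"      # keep in the unassisted free trial
    if good_self_serve and good_enterprise:
        return "both"            # rare in practice
    return "none"                # nurture or deprioritize
```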