There’s a religious debate going on in the community of Customer Success vendors. My turn to weigh in.
On the one hand, some tools vendors are evangelizing the value of predictive analytics driven by product usage data. Other tools vendors are evangelizing the value of rules-based scoring driven by various factors including usage stats, survey scores, support tickets, and more.
While each vendor’s approach can be “right” under certain circumstances, each is generally wrong to characterize a health score as being driven by one approach versus the other.
My take: You need both.
Before I explain why, let’s get to first principles. Customer Success teams consistently say that they need to get away from “firefighting” mode and move to proactive customer engagement. Think of the health score as an “Early Warning System” that prompts a Customer Success person to engage the customer at the earliest sign of caution, through direct contact or even a campaign.
Early warnings can come all along the customer journey, including during product trial, during implementation and, of course, as a precursor to renewal or retention.
The question then becomes, how to construct a score that satisfies key criteria:
- Runs continuously so that each customer is monitored all the time.
- Is suited for each stage of the customer lifecycle.
- Is suited for each type of customer in your customer base.
- Is trusted so that your team is inclined to react when a low score is generated.
So back to why neither group of vendors is “right.” In order to satisfy these criteria, you need multiple scoring methods. Some examples of different scoring methods for different lifecycle stages include:
- During a trial, you know little about a user. The only signal you might have is usage. So, a score for trial users needs to be driven by usage stats or a predictive algorithm to spot the users likely to convert or not.
- During onboarding and implementation, usage might be sparse, especially if your product is complex and you deliver implementation services. You might, however, have a stream of support tickets as a signal indicating whether the customer is struggling to configure or adopt the product. Better yet, look at early usage and tickets together.
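The lifecycle-stage idea above can be sketched in code. This is a hypothetical illustration, not any vendor’s actual scoring model: all field names (`stage`, `weekly_logins`, `open_tickets`) and thresholds are invented for the example.

```python
# Hypothetical sketch: pick a scoring method based on lifecycle stage.
# Field names and thresholds are illustrative assumptions only.

def trial_score(weekly_logins, event_types_used):
    """Trial users: usage is the only signal available."""
    # Normalize rough usage signals into a 0-100 score.
    login_part = min(weekly_logins / 5.0, 1.0) * 50
    breadth_part = min(event_types_used / 10.0, 1.0) * 50
    return round(login_part + breadth_part)

def onboarding_score(weekly_logins, open_tickets):
    """Onboarding: usage may be sparse, so tickets carry more weight."""
    usage_part = min(weekly_logins / 3.0, 1.0) * 40
    # A pile of open tickets during implementation suggests the customer
    # is struggling to configure or adopt the product.
    ticket_penalty = min(open_tickets * 15, 60)
    return max(round(40 + usage_part - ticket_penalty), 0)

def health_score(customer):
    """Route each customer to the method suited to its stage."""
    if customer["stage"] == "trial":
        return trial_score(customer["weekly_logins"],
                           customer["event_types_used"])
    return onboarding_score(customer["weekly_logins"],
                            customer["open_tickets"])
```

The point is the routing, not the exact weights: one score per customer, but a different method behind it at each stage.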
Some examples that pertain to different product types and Customer Success models:
- You service a high-value customer tier. The health score should take into account total adoption of your product in each account and all the facets of your high-touch relationship with the account.
- You service a customer in a lower-value tier with a low touch service model. For example, one Customer Success Manager to 100+ customers. Usage data and maybe support tickets might be all you have to construct a score that spots at-risk customers.
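The same logic applies across tiers: one rules-based score, different factor weights per service model. A minimal sketch, where the factor names and weights are assumptions chosen for illustration:

```python
# Illustrative sketch: weight health factors differently by service tier.
# Factor names and weights are assumptions, not a recommended model.

WEIGHTS = {
    # High-touch tier: total adoption plus relationship facets
    # (survey scores, engagement with the CSM) all contribute.
    "high_touch": {"adoption": 0.4, "survey": 0.3, "engagement": 0.3},
    # Low-touch tier (one CSM to 100+ customers): usage and support
    # tickets may be all you have.
    "low_touch": {"adoption": 0.7, "tickets": 0.3},
}

def tier_score(tier, factors):
    """Weighted average of 0-100 factor scores for the given tier."""
    weights = WEIGHTS[tier]
    return round(sum(w * factors.get(name, 0) for name, w in weights.items()))
```

Missing factors simply contribute zero, which is itself a design choice: for a low-touch customer with no survey data, the score degrades gracefully to what usage and tickets can tell you.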
So are predictive analytics a good thing? Of course. They can make for a better health score. However, let’s be clear about when and how they can be useful.
First, the purpose of a predictive algorithm is to spot variances. In this case, your “most healthy” and “least healthy” users. If your product is not well instrumented, and you collect just a few usage event types, then an algorithm will be less reliable because it won’t spot variances in usage patterns. Conversely, usage data containing 15 or more different event types will probably tease out interesting differences between users.
Second, if you don’t have a historical set of usage data collected, the algorithm will be less useful. You’re looking at correlations between usage patterns and outcomes such as churn, retention and renewal. One month’s worth of data won’t cut it.
Last, algorithms are most useful when you don’t have other facets of a customer relationship to rely upon. If you’re a Customer Success Manager with 20 assigned accounts, it’s doubtful you’d be surprised by an unhealthy customer, nor would usage alone explain the health of that relationship.
In summary, for many Customer Success teams, customer health scores must take into account usage data plus something else. Also, if you want predictive algorithms to drive your score in part or in whole, be sure to instrument your product fully and build up a historical repository. Last, consider a health score driven by usage data alone only when you don’t have any other data to work from.