Leading and lagging indicators of customer success

There was a recent discussion over on the Customer Success LinkedIn group about defining leading and lagging indicators for customer success.

Here’s my take on which is which, through the prism of churn risk.

Indicators or outcome measures?

Before we get to indicators, let’s start by defining the outcome measures.  These are the standard business metrics used to measure success. They sometimes get confused with leading indicators.

Some examples:

  • Logo and revenue renewal rates
  • Churn rate
  • Period-over-period revenue growth per customer
  • Lifetime value

Indicators, by contrast, aren’t financial metrics so much as they are operational measures.  And the best indicators are the ones you can link to the outcomes you care about.

For example, take your churn events: can you unpack your churn outcomes to spot the leading and lagging indicators in retrospect?

Lagging indicators

If we’re looking for churn indicators, think of lagging indicators as evidence of customer risk that could turn into a bad outcome in the near term.

Some examples:

  • Account escalation
  • Low license utilization
  • Negative feedback / surveys near a renewal date
  • Refund requests / discount requests
  • Account downsell

Leading indicators

Think of leading indicators as the earliest signs of a customer struggling to achieve value and success.

It’s easiest to conceive of early indicators when the customer relationship itself is early:

  • Slow time to first value
  • Slow initial adoption
  • Negative feedback and/or low survey scores
  • High volumes of support tickets (depending on what’s in them)
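As a minimal sketch of measuring the first of those early indicators, time to first value, assume you have an event log per account (the event names and accounts below are hypothetical; substitute whatever milestone your product instrumentation actually emits):

```python
from datetime import date

# Hypothetical event log: (account, event_name, date).
events = [
    ("acme", "signup", date(2024, 1, 2)),
    ("acme", "first_report_created", date(2024, 1, 30)),
    ("globex", "signup", date(2024, 1, 5)),
    ("globex", "first_report_created", date(2024, 1, 9)),
]

VALUE_EVENT = "first_report_created"  # illustrative "first value" milestone

def time_to_first_value(events, account):
    """Days from signup to the first value event, or None if not reached yet."""
    signup = min(d for a, e, d in events if a == account and e == "signup")
    value_dates = [d for a, e, d in events if a == account and e == VALUE_EVENT]
    return (min(value_dates) - signup).days if value_dates else None

print(time_to_first_value(events, "acme"))    # 28 days -- slow, worth flagging
print(time_to_first_value(events, "globex"))  # 4 days
```

The useful part isn’t the arithmetic; it’s agreeing as a team on which event counts as “first value” so the indicator means the same thing for every account.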

Other indicators can be signs of churn risk even when the relationship is otherwise stable:

  • Declining adoption
  • Negative feedback and surveys
  • Lack of engagement


There are plenty of indicators you can pay attention to; too many, in fact.  So the goal is to focus on a subset.  

Start with just one outcome that matters most.  For example, “flat or reduced renewals”.

Unpack that outcome to spot the indicators.  Get good at monitoring them, and responding to them reliably.
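As a minimal sketch of that retrospective unpacking, assume you’ve flagged each churned and retained account with a couple of candidate indicators (the account names, flags, and weights of evidence here are hypothetical):

```python
# Hypothetical retrospective: per-account indicator flags plus the outcome.
accounts = [
    {"name": "acme",    "low_utilization": True,  "escalation": True,  "churned": True},
    {"name": "globex",  "low_utilization": True,  "escalation": False, "churned": False},
    {"name": "initech", "low_utilization": False, "escalation": False, "churned": False},
    {"name": "hooli",   "low_utilization": True,  "escalation": True,  "churned": True},
]

def churn_rate_given(indicator):
    """Of the accounts that showed this indicator, what fraction churned?"""
    flagged = [a for a in accounts if a[indicator]]
    return sum(a["churned"] for a in flagged) / len(flagged)

for ind in ("low_utilization", "escalation"):
    print(ind, churn_rate_given(ind))
```

Even with a small sample, a table like this helps you rank candidate indicators by how strongly they preceded the outcome, so you know which subset deserves your monitoring effort.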

Once you’ve established some focus, you face a choice: stay the course, or introduce additional outcome measures and their indicators? Regardless, start simple.

7 uncomfortable questions about user adoption

TLDR: If you’re like many folks in Customer Success, you spend most of your days working with key stakeholders at your customer accounts.  People like business sponsors, department leaders, admins and project managers.

Suddenly, the week is over and you haven’t spent a minute with the actual users of your product.  

If you believe – as I do – that product adoption is the most important driver of customer retention and growth, then it’s important to confront this reality.


Do you have a User 360?

TLDR: Many companies have assembled a “Customer 360” profile from various data sources, and deployed tools to utilize that information.  Customer Success apps are a good example of leveraging that data.  However, many companies don’t yet have a “User 360” that drives different – and highly valuable – customer insights and engagement.


“Train the Trainer” is a terrifying term

TLDR: If you provide software, data services or other online services to businesses, you might be familiar with the term “train the trainer”.  It’s a time-honored approach to deploying software to new users.  It should also strike terror in the hearts of vendors.

If you believe, as we do, that adoption of your product is critical to customer retention and growth, then training might be the most pivotal adoption milestone of all.  If it’s so important, can you entrust it to your customer?


The future of business software

Tom Tunguz wrote an insightful blog recently comparing legacy software applications with the new, disruptive ones.

He writes:

“A senior SaaS executive once told me, “Reports sell software.” In a top down sale, that’s absolutely true. The CEO wants better predictability of bookings so she’ll buy a CRM tool to gather the data. Classically, software has been built for that mantra.

In bottoms up sales, workflow sells software. And new SaaS companies who aim to displace incumbent systems of record will architect their products in a radically different way. They will be event-driven SaaS companies (emphasis is mine).”

I couldn’t agree more.  

My start-up’s product is (was) event-driven

In the case of Bluenose, we were trying to help you unlock the value of user feedback (in the form of NPS surveys) and user behavior (in the form of product usage data).  

This data should flow into your company continuously, producing many valuable signals:

  • As a signal about the health of your relationships with your users at a macro level
  • As a signal about the health of your relationship with each user
  • As a signal about where each user stands in their adoption journey

How can you use these signals?

The first signal unlocks the drivers of NPS, retention and churn in your business.

The second signal mobilizes your Customer Success team or guides your contact center agents.

The third signal enables you to target each user with relevant messaging on how to take their next adoption steps.

How does Tom’s thinking apply to your business?

If your job is to improve customer retention, “event-driven” is a provocative way to think about your customers and the events that should drive your engagement with them. The design of your customer-facing processes should be event-driven for sure.

If you’re in the role of designing products, it’s a clue about how to disrupt incumbent competitors (or fight off the upstarts if you’re being disrupted), by thinking about the events that should drive your app’s features.

If you’re in sales, it’s a way to frame your product as being different – and more valuable – than an incumbent product that doesn’t utilize events to drive a business process.

A final thought

One of the ways I like to think about customer events is how they drive scores.

You’re forecasting a customer renewal.  Should the forecast probability (a score) be based on customer events?

  • Sustained use of your app
  • Survey responses

You’re scoring each customer’s health as part of a weekly Customer Success team meeting.  Should the customer’s health score be based on events?

  • Recent use of your app
  • Recent responses to a survey
  • Recent support tickets
  • Changes in the customer’s team
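As a minimal sketch of an event-driven health score built on those four kinds of events, assume a neutral base that recent events push up or down (the event types, weights, window, and fixed “today” below are all illustrative assumptions, not a recommended model):

```python
from datetime import date, timedelta

TODAY = date(2024, 6, 1)  # fixed for the example; normally date.today()

# Hypothetical recent events for one customer.
events = [
    {"type": "login",            "date": date(2024, 5, 30)},
    {"type": "support_ticket",   "date": date(2024, 5, 28)},
    {"type": "survey_detractor", "date": date(2024, 5, 20)},
]

# Illustrative weights per event type.
WEIGHTS = {
    "login": +5,                # recent use of your app
    "survey_promoter": +20,     # recent responses to a survey
    "survey_detractor": -20,
    "support_ticket": -5,       # recent support tickets
    "champion_departed": -30,   # changes in the customer's team
}

def health_score(events, window_days=30, base=50):
    """Start from a neutral base, adjust for events inside the window,
    and clamp to 0..100. Also return the events that drove the score."""
    cutoff = TODAY - timedelta(days=window_days)
    recent = [e for e in events if e["date"] >= cutoff]
    score = base + sum(WEIGHTS.get(e["type"], 0) for e in recent)
    return max(0, min(100, score)), recent

score, drivers = health_score(events)
print(score)  # 30: base 50, +5 login, -5 ticket, -20 detractor survey
print([e["type"] for e in drivers])
```

Returning the driving events alongside the number is the point: the score tells you who to call, and the events tell you what to say.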

Not only do events make for more accurate scores; they also pinpoint what’s changed in a score and make the next customer touch much more obvious.