How New Is Relationship Scoring?

October 27th, 2020
Rich Raquet | Chairman, TRC

I was reading Russell Perkins’s blog (Russell and his firm, InfoCommerce Group, help clients develop product strategy) about the use of Customer Lifetime Value (CLV) in relationship scoring. He notes a new trend of applying CLV across all customer touchpoints.

As researchers, we are quite familiar with the concept of CLV. It seeks to take into account everything known about a customer (factors ranging from past purchase and payment behavior to credit score, income, or level of education) in order to estimate the value that customer is likely to bring over their lifetime.
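To make that concrete, here is a minimal sketch of the textbook CLV calculation: expected margin per period, weighted by the chance the customer is still active and discounted back to today. The margin, retention rate, discount rate, and horizon below are illustrative assumptions, not figures from any real scoring model.

```python
# A minimal sketch of a customer lifetime value (CLV) calculation.
# All inputs below are illustrative assumptions, not any firm's actual model.

def customer_lifetime_value(margin_per_period: float,
                            retention_rate: float,
                            discount_rate: float,
                            periods: int) -> float:
    """Sum the expected, discounted margin a customer contributes over time."""
    clv = 0.0
    for t in range(periods):
        survival = retention_rate ** t       # chance the customer is still active at period t
        discount = (1 + discount_rate) ** t  # present-value adjustment
        clv += margin_per_period * survival / discount
    return clv


# Example: $120/year margin, 80% retention, 10% discount rate, 10-year horizon
print(round(customer_lifetime_value(120, 0.80, 0.10, 10), 2))
```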

Relationship scoring then uses this value to determine how the customer should be treated. High-value customers get perks ranging from better customer service to special offers. Is this idea really all that new?

In many respects, it is not. Long before advanced algorithms, firms recognized that some customers were more valuable than others. For example, a good butcher knew how important each customer was and provided perks accordingly (like setting aside the best cuts of meat).

As firms grew and customer interactions became less personal, they still applied the same logic. Over a decade ago, our former colleague and academic partner Vikas Mittal (now at Rice University) published an article in HBR outlining how firms should consider firing customers who provide no value.

What is new is that, thanks to big data, even large firms can customize the relationships they have with their customers. This goes beyond simply firing bad customers (though that is still part of it). The wealth of data now available allows firms to really know their customers, like the butcher did, without ever meeting them.

I wonder how many firms are using research data to score their customer records and, with that, enhance their algorithms. For example, a Max-Diff or Bracket™ exercise could determine the type of incentive different segments of customers would respond to, or a conjoint could measure which price points would maximize revenue. This sort of database scoring using segmentation results is something we’ve done for years; a sketch of the idea follows.
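Here is a hypothetical sketch of that scoring step: segmentation results from survey research are mapped onto customer records so each record carries a preferred incentive. The segment names, incentive types, and record fields are invented for illustration; a real Max-Diff or conjoint study would supply the actual segment definitions and offers.

```python
# Hypothetical sketch: tagging a customer database with research-derived segments.
# Segments, incentives, and fields below are invented for illustration only.

SEGMENT_INCENTIVES = {
    "price_sensitive": "percent_discount",
    "service_driven": "priority_support",
    "loyalty_oriented": "points_bonus",
}

def score_records(customers, assign_segment):
    """Attach a segment label and preferred incentive to each customer record.

    `assign_segment` is any callable mapping a record to one of the segment
    keys above (e.g., a typing-tool model built from survey data).
    """
    scored = []
    for record in customers:
        segment = assign_segment(record)
        scored.append({
            **record,
            "segment": segment,
            "preferred_incentive": SEGMENT_INCENTIVES.get(segment, "none"),
        })
    return scored


# Toy usage: assign segments from a single behavioral field
customers = [
    {"id": 1, "annual_spend": 150, "support_tickets": 0},
    {"id": 2, "annual_spend": 900, "support_tickets": 6},
]
by_spend = lambda r: "price_sensitive" if r["annual_spend"] < 500 else "service_driven"
for row in score_records(customers, by_spend):
    print(row)
```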

There are those who argue that this sort of scoring will be yet another means of widening the gap between the so-called “1%” and the rest of us. If the measure were based only on wealth, that might well be the case, but it assumes that the wealthy make the best customers. While that might be true in some cases (especially in, say, luxury goods markets), in most it isn’t. Wealthy customers may have more to spend, but they are also often very demanding, so for them to be profitable, margins need to be wider than they otherwise would be.

This is just the first of what I suspect will be many stories about the use, and yes the abuse, of big data. It’s going to be really interesting to see where these new capabilities take us.