The NPS (Net Promoter Score) is currently one of the most widely used performance indicators. Popularized by Bain & Company, a leading consulting firm, it was developed to gauge loyalty between providers and consumers, and for many marketers it remains the "One Number You Need to Grow".
But beyond the methodology and the business objectives it pursues, why is it interesting to re-normalize people's answers to questions such as "How likely are you to recommend X or Y to a friend or a colleague?", and what does it tell us about how individuals answer surveys?
Unveiling hidden biases
Based on a random, anonymized data sample from Steerio - our team and project analytics platform - we observed the distribution of answers to the Steerio Ambassador Score question (equivalent to the NPS question, but adjusted to measure project team engagement). People from different projects were regularly asked to rate, on a 0 to 10 scale, how likely they were to recommend their project to a colleague. The results of 884 data points are presented in the following chart:
Looking at the stacks, we immediately notice that answers are skewed to the right, i.e. towards the "positive" values (the sample average is 7.08/10, which can be considered a fairly good score). Could it be that the majority of projects are significantly above average? Probably not. But then, why would team members provide rather positive feedback while we hear at the coffee machine that so many things could be done better?
Maybe because much of the feedback and many of the opinions people express are skewed by cognitive and social biases, especially the courtesy bias. The courtesy bias is the tendency to give an opinion that is more socially acceptable than one's true opinion, so as to avoid offending anyone. In a nutshell: I prefer giving you a slightly positive grade, say a 6 rather than a 4 or a 3, because it costs me less socially. It is then up to you to sort the wheat from the chaff (not my problem, after all - you are the one who asked me for feedback).
Re-normalizing the data to get the right picture
Therefore, "re-normalizing" the data by tagging 0-6 answers as "detractors", 7-8 as "neutral" and 9-10 as "promoters" yields roughly equally distributed buckets, which are more likely to match real opinions and thus potential future behaviors such as an actual recommendation. On our data, this re-normalization produces three almost equally sized buckets of ~33% each (32%-35%-33%). This representation matches much better the reality of the discussions you could hear or have during the coffee break or over Friday drinks, doesn't it?
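The re-normalization above is simple enough to sketch in a few lines of Python. The bucket boundaries (0-6, 7-8, 9-10) come straight from the NPS convention described in the text; the sample scores in the snippet are purely illustrative, not the actual Steerio dataset.

```python
# Sketch of the NPS-style re-normalization: raw 0-10 ratings are mapped
# into detractor / neutral / promoter buckets, then each bucket's share
# of the total is computed.
from collections import Counter

def bucket(score: int) -> str:
    """Map a 0-10 rating to its NPS-style bucket."""
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "neutral"
    return "promoter"

def bucket_shares(scores):
    """Return each bucket's share of the sample, as fractions summing to 1."""
    counts = Counter(bucket(s) for s in scores)
    total = len(scores)
    return {name: counts.get(name, 0) / total
            for name in ("detractor", "neutral", "promoter")}

# Illustrative right-skewed sample, similar in spirit to the article's data.
sample = [3, 5, 6, 6, 7, 7, 7, 8, 8, 9, 9, 10]
print(bucket_shares(sample))
```

Note how the asymmetric buckets deliberately compress the crowded upper half of the scale: a 6, which feels "slightly positive", still counts as a detractor, which is exactly how the re-normalization corrects for the courtesy bias.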