More and more training departments are considering the use of the Net Promoter Score as a question--or the central question--on their smile sheets.
This is one of the stupidest ideas yet for smile sheets, but I understand the impetus--traditional smile sheets provide poor information. In this blog post I am going to try and put a finely-honed dagger through the heart of this idea.
What is the Net Promoter Score?
Here's what the folks who wrote the book on the Net Promoter Score say it is:
The Net Promoter Score, or NPS®, is based on the fundamental perspective that every company’s customers can be divided into three categories: Promoters, Passives, and Detractors.
By asking one simple question — How likely is it that you would recommend [your company] to a friend or colleague? — you can track these groups and get a clear measure of your company’s performance through your customers’ eyes. Customers respond on a 0-to-10 point rating scale and are categorized as follows:
- Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
- Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
- Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.
To calculate your company’s NPS, take the percentage of customers who are Promoters and subtract the percentage who are Detractors.
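To make that arithmetic concrete, here's a minimal sketch in Python (the function name and the sample ratings are my own illustration, not part of any official NPS tooling):

```python
def net_promoter_score(ratings):
    """Compute NPS from a collection of 0-to-10 ratings.

    Promoters score 9-10 and Detractors score 0-6; Passives (7-8)
    count toward the total but cancel out of the numerator.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example: 5 Promoters, 3 Passives, and 2 Detractors out of 10 responses,
# so NPS = 50% - 20% = 30.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 6, 4]))  # 30.0
```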
So, the NPS is about Customer Perceptions, Right?
Yes, its intended purpose is to measure customer loyalty. It was designed as a marketing tool. It was specifically NOT designed to measure training outcomes. Therefore, we might want to be skeptical before using it.
It kind of makes sense for marketing, right? Marketing is all about customer perceptions of a given product, brand, or company. Also, there is evidence--yes, actual evidence--that customers are influenced by others in their purchasing decisions. So again, asking whether someone might recommend a company or product to another person seems like a reasonable thing to ask.
Of course, just because something seems reasonable doesn't mean it is. Even for its intended purpose, the Net Promoter Score has a substantial number of critics. See Wikipedia for details.
But Why Not for Training?
To measure training with a Net-Promoter approach, we would ask a question like, "How likely is it that you would recommend this training course to a friend or colleague?"
Some reasonable arguments for why the NPS is stupid as a training metric:
- First, we should ask: what is the causal pathway that would explain how the Net Promoter Score is a good measure of training effectiveness? We shouldn't willy-nilly take a construct from another field and apply it to our field without having some “theory-of-causality” that supports its likely effectiveness.
Specifically, we should ask whether it is reasonable to assume that a learner's recommendation about a training program tells us SOMETHING important about the effectiveness of that program. And for those using the NPS as the central measure of training effectiveness--which sends shivers down my spine--the question then becomes: is it reasonable to assume that a learner's recommendation tells us EVERYTHING important about the effectiveness of that program?
Those who would use the Net Promoter Score for training must have one of the following beliefs:
- Learners know whether or not training has been effective.
- Learners know whether their friends/colleagues are likely to have the same beliefs about the effectiveness of training as they themselves have.
The second belief is not worth much, but it is probably what really happens when learners answer the question. It is the first belief that is critical, so we should examine it in more depth. Are learners likely to be good judges of training effectiveness?
- Scientific evidence demonstrates that learners are not very good at judging their own learning. They have been shown to have many difficulties adequately judging how much they know and how much they’ll be able to remember. For example, learners fail to utilize retrieval practice to support long-term remembering, even though we know this is one of the most powerful learning methods (e.g., Karpicke, Butler, & Roediger, 2009). Learners don’t always overcome their incorrect prior knowledge when reading (Kendeou & van den Broek, 2005). Learners often fail to utilize examples in ways that would foster deeper learning (Renkl, 1997). These are just a few examples of many.
- Similarly, two meta-analyses of traditional smile sheets, which tend to measure the same kinds of beliefs as the NPS, have found almost no correlation between learner responses and actual learning results (Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008).
- Similarly, when we assess learning in the training context at the end of learning, several cognitive biases creep in, leading learners to perform much better than they would in a more realistic situation back on the job at a later time (Thalheimer, 2007).
- Even if we did somehow prove that the NPS was a good measure for training, is there evidence that it is the best measure? Obviously not!
- Should it be used as the most important measure? No! As stated in the Science of Training review article from last year: “The researchers [in talking about learning measurement] noted that researchers, authors, and practitioners are increasingly cognizant of the need to adopt a multidimensional perspective on learning [when designing learning measurement approaches]” (Salas, Tannenbaum, Kraiger, & Smith-Jentsch, 2012).
- Finally, we might ask: are there better types of questions to ask on our smile sheets? The answer is an emphatic YES! Performance-Focused Smile Sheets provide a whole new approach to smile-sheet questions. You can learn more by attending my workshop on how to create and deploy these more powerful questions.