The Potential and Pitfalls of Net Promoter Scores (NPS) as a “Business World” Metric in Academic Assessment

Abstract

The Net Promoter Score (NPS) was introduced in 2003 and has since been embraced by most major companies worldwide. Higher education, however, has been slower to adopt this simple metric, in part because students are not exactly customers. Although, like corporate customers, they can choose among the many educational institutions and resources available to them, students are also developed and shaped by the institution they enter and may not understand how the rigors of their education coalesce to sharpen their intellect and expand their knowledge base. Yet the imperfection of the student as an evaluator of the educational experience must not blind tertiary institutions to the insights of the people living those experiences, and NPS is uniquely poised to tap into such insights in an elegantly simple manner: a single number coupled with an explanatory statement. A case study is presented to offer initial impressions of the potential and pitfalls of using NPS in academic assessment.

Introduction

The eternal debate over whether tertiary-level institutions should primarily consider students as consumers or as products of higher education seems to have been resolved, at least temporarily, by a virus. Facing precipitously declining enrollments, institutions of higher education are increasingly focused on attracting and retaining students, because the process of education cannot occur unless students first enroll in classes (Elliott and Healy, 2001).

The Sky Is Falling…

Shockingly, the January 2022 headline of an article in the business magazine Forbes declared, “College Undergraduate Enrollment Has Decreased By More Than One Million Students Since 2019” (Nietzel, 2022). The National Student Clearinghouse Research Center (2022) quantified these enrollment losses across all tertiary institution sectors in the United States, attributing much of the slide to the effects of the COVID-19 global pandemic.

Even more alarming from an assessment perspective was a February 2022 article in Inside Higher Ed reporting a bill, approved by the Florida Senate Education Committee, that would require public colleges and universities in that state to change regional accrediting bodies at the end of each accreditation cycle (Whitford, 2022). Such “freedom of choice” notions reflect an increasingly popular political view of students-as-customers in higher education.

However, treating post-secondary students as consumers or customers is inherently problematic for institutions of higher education, whether it is done to boost enrollment figures or to placate external stakeholders. Calma and Dickson-Deane (2020) have asserted that typical business measures of customer satisfaction contradict “the principal aims and measures of quality in higher education” (p. 1221). These researchers argue that students are “learners within” and not merely “purchasers of” the degrees and certifications being sought (p. 1225). Although the rigors of academic training may not be entirely palatable in the short run for students and those invested in their success, in a properly designed curriculum each course ultimately proves its worth within the awarded degree.

The Voice of the Student…

Even when an array of stakeholders, including employers, regulatory bodies, and the students themselves, is acknowledged as influencing the determinants of academic quality and value, there remains the practical matter of how to manage the diversity of opinions and rationales that underlie any resulting data. One way that colleges and universities have traditionally attempted to capture the “voice of the student” is through student evaluations of teaching (SETs) at the course and instructor level of analysis (Cheng and Marsh, 2010).

SETs have long been used not merely to assess students’ perceptions of education quality, but also as the basis for compensation, ranking, and promotion decisions (Baker et al., 2015, p. 31). These stakes have led critics to ask whether faculty efforts to improve student evaluations of their courses may result in grade inflation and other questionable responses that reduce the quality of instruction rather than elevate it (Wang and Williamson, 2020). Extraneous factors unrelated to instructional excellence, such as the gender of the instructor and the course type, may further invalidate student-generated evaluations of teaching effectiveness (Whitworth et al., 2002). In addition, Cheng and Marsh (2010) concluded that, within universities, the relatively small number of responses from students in certain classes very likely precludes reliable differentiation between courses based solely on SET data.

This leaves tertiary institutions in the difficult position of trying to interpret and gauge the “voice of the student” at the end of every course as a quality assurance measure for classroom instruction (Seale, 2009) while simultaneously monitoring overall student satisfaction for retention purposes (Cheng and Marsh, 2010).

An additional concern with SETs involves the difference between the number of factors measured in each evaluation and the number actually considered by administrators when determining faculty performance (Baker et al., 2015; Whitworth et al., 2002). Administrative decisions may rest on only one or two of the many factors that are typically measured (Emery et al., 2003). SETs may also be used to capture written comments by students, but condensing and analyzing such qualitative information further complicates the effective use of SETs by administrators (Alhija and Fresko, 2009).

The Simplicity of NPS…

To address some of these complications, a question was added to the end-of-semester SETs at a small public tertiary institution: “On a scale of 0 to 10, how likely are you to recommend this class to a friend or colleague?”

It’s a simple question in the Net Promoter Score (NPS) format first introduced by Frederick Reichheld (2003) in a Harvard Business Review article. In his justification for the metric, Reichheld asserted that “…the percentage of customers enthusiastic enough about a company to refer it to a friend or colleague directly correlated with growth rates among competitors.” His research revealed that it was enthusiasm, more than customer satisfaction or loyalty, that was the leading indicator of a company’s relative success.

NPS is calculated in a manner that isn’t particularly intuitive. Only respondents providing a rating of 9 or 10 are considered “Promoters.” Ratings of 7 or 8 are considered neutral and are not factored into either group, although those respondents still count toward the total when the percentages are computed. Respondents giving ratings of 6 or less are called “Detractors.” The NPS is defined as the percentage of Promoters minus the percentage of Detractors, so the resulting score falls along a range from -100 (all Detractors) to +100 (all Promoters), with zero as the neutral midpoint. Accordingly, any positive number is beneficial, with larger numbers signaling greater enthusiasm. Slightly negative numbers aren’t necessarily bad; they simply mean that those respondents are unlikely to generate much positive word of mouth. Large negative numbers, however, would certainly indicate seriously unhappy Detractors (Reichheld, 2003).
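For readers who prefer to see the arithmetic spelled out, the following minimal sketch computes an NPS from a list of 0-to-10 ratings. The function name and sample ratings are illustrative only and are not drawn from the institution’s data.

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 ratings: % Promoters (9-10) minus % Detractors (0-6).

    Ratings of 7 or 8 count toward the total number of respondents
    but toward neither group.
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical class of four respondents: two Promoters, one neutral, one Detractor
print(net_promoter_score([10, 9, 7, 5]))  # (2 - 1) / 4 * 100 = 25.0
```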

Geoff Colvin (2020) noted in Fortune magazine that at least two-thirds of the companies on the Fortune 1000 list were using Net Promoter Scores, which one IBM executive described as being closer to a religion than a metric. And with businesses arguably being the ultimate “consumers” of our university student “products,” NPS may well be a metric that deserves greater attention within higher education.

However, higher education’s unique situation, in which students are both products and customers, complicates the simple adoption of this business metric. And although an NPS question now appears in virtually every business sector’s customer satisfaction surveys, should such a question be used as part of student evaluations of university-level teaching and learning?

Trial 1: Spring 2021

The institution began incorporating NPS into its end-of-semester course evaluations in Spring 2021, when its president first suggested including the metric. In that semester, only the NPS rating question was asked. Unfortunately, a cursory scan of those initial NPS-only scores immediately revealed perplexing inconsistencies.

For example, one very popular and highly engaged science instructor earned a high score of 66.7 in one entry-level course, but low scores ranging from 12.5 to -22 in her other classes. Oddly, the very few written comments for this instructor offered minimal insight into why such low NPS scores were given. In her highest-rated class, the only written comment offered was a single “No” in response to the question, “Do you have any additional comments/feedback?”

Trial 2: Summer 2021

To address the lack of insight generated by the Spring 2021 NPS scores, an additional item was added immediately after the NPS question in subsequent semesters’ course evaluation instruments. For the Summer and Fall 2021 semesters, the follow-up question “Why did you select that number for your recommendation of this class?” was included.

The student narratives arising from the NPS follow-up question were striking in their completeness. Compared to the almost complete lack of written comments in the Spring 2021 course evaluations, virtually every student provided a meaningful answer to the follow-up question. This greatly improved the interpretation of the assigned NPS scores and provided useful feedback for faculty performance evaluations.

Yet accurately interpreting the results across all the classes taught by any one instructor remained difficult. One long-time and very popular business educator showed excellent NPS scores of 50 and 60 in two of her classes, but unexpectedly scored a -50 in a third course. On closer inspection, only two students were enrolled in that low-scoring class. One student gave a rating of 8 with the comment, “The teacher teaches well but sometimes it’s a lot of work considering the time frame between assignments.” The other student gave a rating of 5 and wrote, “The class is good however, very similar to MKT402 and the MGT300. Which makes the course seem very repetitive.” These written comments allowed the instructor and her dean to focus on the substantive issues expressed by these students, rather than dismissing the inconsistently low score as a fluke.

In another instance, the highest-rated mathematics instructor received an NPS of 80 in one class, a -29 in another, and a -73 in a third. The comments in the highest-rated class were generally written by pre-college students who required remedial mathematics instruction before being admitted into credit-bearing classes, and virtually all of those comments focused on the instructor’s patience and detailed explanations. In contrast, the lowest-rated class was calculus, and although most of the ratings were fives and sixes, the comments showed that the brevity of the summer schedule and the difficulty of the course material caused the low scores. In fact, as one student explained when giving a rating of 6, “Because the class is hard, but the teacher is good.”

Trial 3: Fall 2021

Again, in the Fall 2021 semester evaluations, the NPS follow-up question provided insightful reasons for virtually every rating offered. However, the insights were occasionally unanticipated.

For example, an instructor teaching two sections of a science class received an NPS of 62.5 for one section and a -13 for the other. Both sections had the same number of students, but the high-scoring section had three times as many Promoters as the low-scoring section, while the low-scoring section had three times as many Detractors as the high-scoring section. The single Detractor’s explanation in the high-scoring section read, “Because it is a very interesting course to learn about and if you study you can understand it.” That was hardly the kind of reason one might expect to accompany a rating of 5.

A different student in the low-scoring section offered an even more interesting reason for assigning a rating of 1: “Because not everyone is interested in sciences.” Although that answer isn’t particularly helpful in assessing the efficacy of that particular class, it is helpfully thought-provoking when considering the general relevance of using NPS in higher education course evaluations: Why indeed would anyone recommend a difficult course to friends and colleagues who aren’t interested in the subject?

Conclusions

For the institution described in this research, a single metric that generally encapsulates the overall student experience in a class has proven to be a good way to quickly identify instructors who are outliers, and the follow-up question offers explanations that might not otherwise be clarified by the rest of the data generated by the students’ evaluations. The NPS figures and explanations also provide a relevant and deeper point of discussion when considering not just the instructor, but also the course design and its relevance within the curriculum.

Each semester’s data have received only cursory consideration at this point, but a better appreciation for the potential and limits of using NPS in course evaluations will clearly be gained when cross-sectional and comparative analyses are conducted across instructors, programs, and semesters. Investigating differences among student groups based on instructional level and various demographics may also provide further insights and considerations.
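As one illustration of what such comparative analyses could look like, the minimal sketch below regroups raw 0-to-10 ratings and recomputes NPS for each instructor and semester. It assumes the evaluation responses can be exported into a table; the column names and sample data are hypothetical, and the same grouping pattern would extend to programs, courses, or student demographics.

```python
import pandas as pd

# Hypothetical export of evaluation responses; column names are assumptions.
responses = pd.DataFrame({
    "instructor": ["A", "A", "A", "B", "B", "B"],
    "semester":   ["Spring 2021"] * 3 + ["Summer 2021"] * 3,
    "rating":     [10, 9, 6, 8, 5, 3],
})

def nps(ratings):
    """NPS for a group of 0-10 ratings."""
    promoters = (ratings >= 9).sum()
    detractors = (ratings <= 6).sum()
    return 100.0 * (promoters - detractors) / len(ratings)

# NPS per instructor per semester
summary = responses.groupby(["instructor", "semester"])["rating"].apply(nps)
print(summary)
```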

But it is clear that NPS must not be used as a single number without investigating the underlying reasoning behind each score. In large institutions hosting courses with hundreds of students in each section, word clouds could be used to provide a holistic view of those all-important explanations without unreasonable expenditures of time and effort.
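As a minimal sketch of how those explanations could be distilled at scale, a simple term-frequency tally provides the raw material that a word-cloud generator would render graphically. The sample comments and stop-word list below are purely hypothetical.

```python
from collections import Counter
import re

# Hypothetical student explanations accompanying their 0-10 ratings
comments = [
    "The teacher explains everything patiently",
    "Too much work in a short summer schedule",
    "The class is hard but the teacher is good",
]

stop_words = {"the", "is", "in", "a", "but", "and", "to"}  # illustrative only

# Tokenize, lowercase, and drop stop words before counting
words = (w for c in comments for w in re.findall(r"[a-z']+", c.lower()))
frequencies = Counter(w for w in words if w not in stop_words)

# The most common terms would become the largest words in a word cloud
print(frequencies.most_common(5))
```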

Large corporations have embraced the use of NPS to clarify and to act upon the voice of the customer. And despite the inherent difficulty of fully assigning a customer role to students, the fact remains that college and university enrollments are declining because students have many ways to gain the knowledge they require to succeed in today’s world. NPS offers a simple, low-cost way for tertiary institutions to be more attentive and responsive to students’ observations and concerns.

Author(s)

J.D. Mosley-Matchett

Dr. J.D. Mosley-Matchett is the Provost and Vice President of Academic Affairs at the University College of the Cayman Islands. She joined the UCCI faculty in August 2009 as a seasoned educator with more than a decade of successful teaching experience at both the graduate and undergraduate levels. Her educational credentials include a Ph.D. in Business Administration and an MBA from the University of Texas at Arlington; a Juris Doctor (law degree) from Southern Methodist University in Dallas, Texas; and a Bachelor of Science in Electrical Engineering Technology from Old Dominion University in Norfolk, Virginia.

References

Alhija, F., & Fresko, B. (2009). Student evaluation of instruction: What can be learned from students’ written comments? Studies in Educational Evaluation, 35(1), 37-44.

Baker, D., Neely, W., Prenshaw, P., & Taylor, P. (2015). Developing a multi-dimensional evaluation framework for faculty teaching and service performance. Journal of Academic Administration in Higher Education, 11(2), 29-41.

Calma, A., & Dickson-Deane, C. (2020). The student as customer and quality in higher education. International Journal of Educational Management, 34(8), 1221–1235. https://doi.org/10.1108/ijem-03-2019-0093

Cheng, J., & Marsh, H. (2010). National Student Survey: Are differences between universities and courses reliable and meaningful? Oxford Review of Education, 36, 693-712. https://doi.org/10.1080/03054985.2010.491179

Colvin, G. (2020, May 18). The simple metric that’s taking over big business. Fortune. Retrieved September 26, 2021, from https://fortune.com/longform/net-promoter-score-fortune-500-customer-satisfaction-metric/

Elliott, K., & Healy, M. (2001). Key factors influencing student satisfaction related to recruitment and retention. Journal of Marketing for Higher Education, 10(4), 1–11. https://doi.org/10.1300/j050v10n04_01

Emery, C., Kramer, T., & Tian, R. (2003). Return to academic standards: A critique of student evaluations of teaching effectiveness. Quality Assurance in Education, 11(1), 37-46. https://doi.org/10.1108/09684880310462074

National Student Clearinghouse Research Center (2022, January 13). Current term enrollment estimates. Retrieved February 13, 2022 from https://nscresearchcenter.org/current-term-enrollment-estimates/

Nietzel, M. (2022, January 13). College undergraduate enrollment has decreased by more than one million students since 2019. Forbes. Retrieved February 13, 2022 from https://www.forbes.com/sites/michaeltnietzel/2022/01/13/college-update-undergraduate-enrollment-has-decreased-by-more-than-one-million-students-since-2019/

Reichheld, F. (2003, December). The one number you need to grow. Harvard Business Review. Retrieved September 26, 2021, from https://hbr.org/2003/12/the-one-number-you-need-to-grow.

Seale, J. (2009). Doing student voice work in higher education: An exploration of the value of participatory methods. British Educational Research Journal, 36, 995-1015.

Wang, G., & Williamson, A. (2020). Course evaluation scores: Valid measures for teaching effectiveness or rewards for lenient grading? Teaching in Higher Education. https://doi.org/10.1080/13562517.2020.1722992

Whitford, E. (2022, February 11). Florida could make switching accreditors mandatory. Inside Higher Ed. Retrieved February 13, 2022 from https://www.insidehighered.com/news/2022/02/11/florida-bill-would-require-colleges-change-accreditors

Whitworth, J., Price, B., & Randall, C. (2002). Factors that affect college of business student opinion of teaching and learning. Journal of Education for Business, 77(5), 282–289. https://doi.org/10.1080/08832320209599677
