
Stop asking your customers about effort and start listening for it instead

The question I am most frequently asked when I present the research from The Effortless Experience is “How should we measure the level of effort our customers are experiencing with our company?”

Well, the answer to that question has changed dramatically in the 10 years since our team at CEB (now Gartner) conducted the original research. In 2008, when we first reported our findings, we recommended that companies use a new metric—the Customer Effort Score—to gauge the level of effort in their customer experience.

We found that the Customer Effort Score, or CES for short, was more highly correlated with loyalty attitudes and behaviors like repurchase, share of wallet and advocacy than metrics like Net Promoter Score (NPS) or Customer Satisfaction (CSAT). Given the laser-like focus CX and service leaders have on finding the “perfect” metric to measure the customer experience, it came as no surprise that the idea of CES (even though it was only a small sidebar in the article) generated the most buzz when we released the first in a series of Harvard Business Review articles on the research (see “Stop Trying to Delight Your Customers,” Harvard Business Review, July-August 2010).

The original CES was a survey question that asked customers how much effort they had to put forth to get their issue resolved. Customers rated their experience on a 1-5 scale where 1 was low effort (i.e., good) and 5 was high effort (i.e., bad). The idea was that by collecting CES scores from customers, companies would be able to zero in on those interactions and experiences that customers deemed to be “high effort,” thereby helping to surface improvement opportunities like training or coaching, QA scorecard changes, process fixes and website overhauls.

When we released the book, The Effortless Experience, in 2013, we unveiled a new version of the score, which we called “CES 2.0.” We found that some companies felt the original question (“How much effort did you personally have to put forth to handle your request?”) could cause some customer confusion…and the term “effort” was hard to translate into certain languages. The new question asks the customer to respond, on a 1-7 scale from “strongly disagree” to “strongly agree,” to the statement “The company made it easy for me to handle my issue.” Not only is the wording more straightforward for customers and easier to translate from the original English, but the 1-7 scale and the reversed polarity (a low score is now “bad” and a high score “good”) are more in keeping with the way other survey questions, like CSAT or Net Promoter, are asked.

While this new version of the CES question improved on the original, it didn’t solve a more fundamental problem: relying on surveys to ask a question that companies should already know the answer to.

Surveys are a useful tool in assessing the customer experience, but as almost any CX leader will tell you, survey response rates are on a secular decline—in large part because companies over-rely on surveys to answer all manner of questions, resulting in customers experiencing survey fatigue and, ultimately, tuning them out. We hear from companies regularly that their response rates are plummeting—in some cases, dropping by half in just the past year.

To stem the decline, companies have shortened their surveys—today, it’s not uncommon to see surveys with just one numerical question and an open-field text box asking for more color (e.g., “Why did you give us the score that you did?” or “What can we do to improve?”). While shorter surveys may temporarily stop the bleeding on response rates, they also have the unintended effect of diminishing the quality of the feedback that’s received—and this is to say nothing of the well-documented biases (e.g., recall bias and extreme response bias) that plague surveys as a VoC instrument.

The irony, of course, is that companies really shouldn’t have to survey customers about the level of Effort they’re experiencing. Customers have already provided all of the insight a company could possibly need in the interactions (the phone calls, chats, etc.) that preceded the survey…interactions which are recorded and captured by those same companies.

What if, instead of deploying surveys to customers after the fact, you could harness technology to scan all of those recorded conversations and predict the score a customer would have given on a survey without having to ask the customer to fill out the survey at all? If you could do this, you’d have no more response rate challenges since you could assign a score to every call, not just the ten percent of customers (or fewer) who fill out the survey. You’d have no bias issues because you’d be working off of the raw conversational data (not a post-hoc interpretation of what happened). And, best of all, you’d have an incredibly rich, actionable data set to work with (i.e., no more trying to decipher what the customer meant by “You guys rock!” or “You guys suck!”).

A few years ago, this sort of “surveyless survey” might have felt like science fiction. But, with advances in fields like computational linguistics and machine learning, what was once impossible isn’t just possible, it’s real and ready for companies to start taking advantage of.

At Tethr, we’re excited to announce the arrival of the market’s first machine learning-based, predictive Effort score: the Tethr Effort Index (TEI). You can read the press release here.

To build this, our data science team first coupled conversational data with completed survey responses from tens of thousands of customer interactions across a wide range of companies and industries. They then worked with my product team—which includes former CEB researchers deeply familiar with the original Effort research—to construct an exhaustive list of Effort-related machine learning categories that would serve as potential independent variables in the model.

We included variables the CEB team had already identified in the Effort research, covering both “do” effort (i.e., the things customers have to do to get their issues resolved—like calling back repeatedly, switching channels, repeating information, etc.) and “feel” effort (i.e., how the customer feels about what they had to do—for instance, frustration, confusion, missed expectations, etc.). We also included many of the language techniques identified in the original Effort research (e.g., advocacy, positive language, etc.), which have been shown to have a positive effect on the customer’s perception of the experience.
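Tethr hasn’t published the internals of the TEI model, but to make the mechanics described above concrete, here is a minimal, purely illustrative sketch in Python: each call is represented by binary “do” effort, “feel” effort and language-technique features, paired with its surveyed CES 2.0 score, and a simple linear model is fit so it can then score calls that were never surveyed. The feature names, data and choice of model are assumptions for illustration only.

```python
# Minimal, illustrative sketch only -- not Tethr's actual model or categories.
# Assumes each call has been tagged with binary Effort-related features and
# paired with the customer's surveyed CES 2.0 score (1 = very hard, 7 = very easy).
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical feature names spanning "do" effort, "feel" effort and language techniques
feature_names = ["repeat_contact", "channel_switch", "repeated_info",   # "do" effort
                 "frustration", "confusion", "missed_expectations",     # "feel" effort
                 "advocacy_language", "positive_language"]              # language techniques

# Made-up training data: one row per surveyed call
X = np.array([
    [1, 0, 1, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1],
    [1, 1, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 1, 0],
])
y = np.array([2.0, 6.5, 1.5, 5.0])   # surveyed CES 2.0 scores for those calls

model = Ridge(alpha=1.0).fit(X, y)

# Once fit, the model can assign a predicted "would-have-been" score to every
# call -- including the roughly 90% of calls that never get a completed survey.
unsurveyed_call = np.array([[0, 1, 0, 1, 0, 0, 0, 1]])
print(model.predict(unsurveyed_call))
```

In practice the feature set would be far larger and the model more sophisticated, but the basic pattern is the same: pair tagged conversations with completed surveys, fit a model, then score every call.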

One of the cool things about using recorded conversational data is that it’s a far richer data set than what we had access to in the original Effort research, which was all based on survey data. In a survey, there’s a natural limit to how much you can ask before a respondent gets impatient and bails out. For instance, in the original Effort research, we asked about channel switching (i.e., did a customer first go to the company’s website, give up and then pick up the phone to call?). As much as we would have liked to ask dozens of questions about the actual website experience (e.g., was it a login issue, an unclear FAQ, confusing information in an expert community or something else that caused the customer to give up?), we also wanted people to fill out the survey so that we could do the analysis.

With conversational data, however, this isn’t an issue. On the phone, customers will go into incredible depth about exactly what went wrong in their experience. Customers won’t just tell you it was a website issue; they’ll tell you it was a login issue and exactly which error message they received. They won’t just tell you they found the content on the website confusing. They’ll tell you which specific FAQ was confusing to them and why. With all of this rich, contextual data, our team was able to generate a truly massive list of potential variables that we could measure.

We cast a really broad net, in other words.

And casting a broad net is important because Effort, we’ve learned, isn’t something that can easily be reduced to a survey score. It’s nuanced—a condition made up of many things, in many flavors. Frustration is different from confusion. A transfer is different from a long hold. A rep hiding behind policy is different from a rep setting the wrong expectations. And until we tapped into conversational data, we were never able to measure the additive effects of volume and intensity. Does it matter if a customer gets frustrated three times in a call, as opposed to just once? At what point does annoyance tip over into actual churn risk? That level of nuance is something we could never get from any other method, survey or otherwise.
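To illustrate the intensity point, here is another purely hypothetical sketch: instead of flagging whether frustration appeared at all, you count how many times frustration-type phrases occur in a transcript, so that three flashes of frustration weigh more than one. The phrase lists and the example transcript are invented for illustration and are not Tethr’s actual categories.

```python
# Illustrative only: turning repeated Effort signals into count-based
# "intensity" features rather than simple yes/no flags.
# The phrase lists below are invented examples, not Tethr's categories.
from collections import Counter

FRUSTRATION_PHRASES = ["this is ridiculous", "i already told", "third time"]
CONFUSION_PHRASES = ["i don't understand", "that makes no sense"]

def intensity_features(transcript: str) -> dict:
    """Count how often each Effort-related category shows up in one call."""
    text = transcript.lower()
    counts = Counter()
    for phrase in FRUSTRATION_PHRASES:
        counts["frustration"] += text.count(phrase)
    for phrase in CONFUSION_PHRASES:
        counts["confusion"] += text.count(phrase)
    return dict(counts)

call = ("I already told the last agent my account number. "
        "This is ridiculous, it's the third time I've called.")
print(intensity_features(call))   # {'frustration': 3, 'confusion': 0}
```

Counts like these can then feed the same kind of model as in the earlier sketch, which is what lets you ask where the tipping point between annoyance and churn risk sits.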

When all was said and done, the initial version of the TEI model was built on more than 100 variables, together representing thousands of discrete phrases and utterances that proved to be statistically significant in either increasing or reducing effort. As we’ve been taking it out for a spin, it has proven to be incredibly accurate in predicting the Customer Effort Score a customer would have given on a post-interaction survey—but, of course, without having to ask the customer to fill out a survey.
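Tethr doesn’t describe its significance testing, but as a rough illustration of what “statistically significant in either increasing or reducing effort” can mean in practice, here is a sketch using a simple two-sample test: compare surveyed scores for calls where a hypothetical category fired against calls where it didn’t. All numbers below are made up.

```python
# Illustrative only: testing whether one hypothetical category ("channel_switch")
# is associated with a real difference in surveyed CES 2.0 scores (1-7, higher = easier).
from scipy import stats

scores_with_category = [2.0, 3.0, 1.5, 2.5, 3.5]       # calls where the category fired
scores_without_category = [6.0, 5.5, 6.5, 4.5, 5.0]    # calls where it didn't

t_stat, p_value = stats.ttest_ind(scores_with_category, scores_without_category)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (and a lower mean for the "with" group) would suggest the
# category is associated with harder, higher-effort experiences.
```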

Armed with a TEI score on their calls, companies can now track Effort levels in real time, immediately drilling into those high-effort interactions that are likely to create disloyalty and churn. Coupled with our Effort dashboard, leaders can quickly pinpoint, with tremendous precision, the biggest opportunities for improvement in the customer experience—whether it’s a change to a digital channel like the app or website, a product fix, a call handling process change or an opportunity to coach agents on new skills and behaviors.

The Tethr Effort Index is set to launch in the July timeframe. From then on, Tethr customers will see a TEI score assigned to every call they process through our platform—and, best of all, this isn’t something we’re reserving for our largest customers only. Customers on our new Tethr Essentials package will see the same TEI score on every call they process as well.

Interested in learning more about the Tethr Effort Index and how you can gain a better understanding of what’s causing high-effort experiences for your customers? Contact us and we’d be happy to walk you through it in more detail.

The Effortless Experience team at CEB that I mentioned earlier is now part of a standalone company named Challenger. This team is a great resource for anyone looking to learn more about the Effortless Experience research—and how to develop effort-reduction skills at the frontline. They work with organizations around the world on this sort of stuff every day and have a wealth of experience and insight to share.
