
The words that supercharge (or destroy) your customer experience

In my last post I wrote about some of the fascinating work going on at our company, Tethr, to define and expand on the concepts from our original research in The Effortless Experience.

When we did the research that went into the book, we relied on surveys (collecting data from a few hundred thousand customers about their recent service experiences) as well as hundreds of in-depth research interviews with service and CX leaders, their frontline managers and even frontline reps themselves.

It was time-consuming, tedious work that took many years to complete…but (aside from the obvious benefit of getting to work with so many talented people at CEB, now Gartner, and Challenger, Inc.) the payoff was worth the investment: in the end, we’d discovered a completely new way to think about the customer experience, backed by data and validated across hundreds of thousands of customer service interactions.

What we found was this: instead of trying to drive loyalty upward by delighting customers in service interactions, the best companies focus on mitigating disloyalty by reducing the level of effort their customers have to put forth when doing business with them.

What’s more, relying on the data from our study, our team at CEB was able to isolate those companies that, in the eyes of customers, actually deliver a low-effort experience. Across the research, we identified four best practices used by low-effort companies:

  1. Channel Stickiness: Low-effort companies understand that customers today want to self-serve on their issues and they deliver a simple, intuitive, guided experience that keeps customers in self-service instead of forcing them to bounce out into the live service channel.
  2. Next Issue Avoidance: Low-effort companies don’t just focus on resolving the issue the customer contacts them about; they also focus on forward-resolving the issues customers might call back about.
  3. Experience Engineering: Low-effort companies know that customer effort is only partially a function of what the customer has to do to get an issue resolved—it’s much more a function of how the customer feels about what they have to do. These companies therefore focus more on teaching reps to use sophisticated language techniques rooted in human psychology and behavioral economics so that they can engineer a better experience with the customer.
  4. Frontline Control: Low-effort companies understand that today’s service reps are handling far more complex issues now that the “easy stuff” has gone to self-service. Unlike most companies, which try to tighten their control of the frontline (through scripting, “screen pops,” etc.), low-effort companies know that you can’t script your way to victory in this new world. Instead of taking control away from their frontline reps, they actually give more control to them, empowering reps to use their own judgment to resolve customer issues and deliver a low-effort customer experience.

As I discussed in my last post, our team at Tethr has now taken each of these concepts and explored them in far more depth than we ever could through post-transactional surveys or interviews. We did this by teaching our machine learning platform to “listen” for effort in customer conversations…and then we ran hundreds of millions of minutes through the platform to see how these concepts would show up in real conversations.

One of the things we realized immediately was that some of the concepts we made sound pretty simple in the book are actually incredibly complicated in real life.

Take experience engineering as an example. In the book, we identified a set of language techniques that progressive companies were teaching their reps to use and we tested the impact of those techniques on the level of effort in the customer experience. Of all the techniques we studied, the one with the greatest impact on the service experience was advocacy—language that signals to the customer that the rep is on their side of the issue and is going to advocate for them to reach a positive resolution. When reps use this kind of language, we found that it could reduce customer effort by as much as 77%.

While it was exciting to prove that words matter—and matter a lot—advocacy was still just a concept for most practitioners: hard to identify in practice, therefore hard to teach to reps, even harder to verify when it was (or wasn’t) being used correctly, and nearly impossible to measure for its real impact on the customer experience.

When our team at CEB (now Gartner) linked up with the data science and speech analyst team at Tethr, however, we were able to take this idea from concept to reality.

The first thing to understand about how this works is where the data comes from. Companies like Tethr take in conversational data (e.g., customer phone calls, chat interactions, etc.), transcribe it (turning the unstructured audio into unstructured text) and then bring structure to the data by shaping it. Of all the steps in the process, it’s the shaping that delivers the most value because it’s what ultimately leads to insight creation—the moment you go from understanding what happened to really understanding why it happened and, more importantly, how to improve the experience next time around.
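For readers who like to see the moving parts, here is a minimal sketch (in Python) of what that ingest, transcribe and shape flow could look like. None of these names are Tethr’s actual API; the speech-to-text step is faked and the “shaping” is reduced to simple phrase tagging purely for illustration.

```python
# A minimal, hypothetical sketch of the ingest -> transcribe -> shape flow.
# These names are illustrative only and are not Tethr's actual API.

from dataclasses import dataclass, field


@dataclass
class Conversation:
    call_id: str
    transcript: str = ""                      # unstructured text after transcription
    tags: list = field(default_factory=list)  # structure added during shaping


def transcribe(call_id: str, speech_to_text) -> Conversation:
    """Turn unstructured audio into unstructured text via some STT engine (stubbed here)."""
    return Conversation(call_id=call_id, transcript=speech_to_text(call_id))


def shape(convo: Conversation, categories: dict) -> Conversation:
    """Bring structure to the transcript: tag the call with every category it matches."""
    text = convo.transcript.lower()
    for name, phrases in categories.items():
        if any(phrase in text for phrase in phrases):
            convo.tags.append(name)
    return convo


# Toy example: one call, one "advocacy" category with two seed phrases.
fake_stt = lambda call_id: "Let me see what I can do for you today."
categories = {"advocacy": ["let me see what i can do", "i'm going to take care of this"]}
convo = shape(transcribe("call-001", fake_stt), categories)
print(convo.tags)  # ['advocacy']
```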

Data shaping is complicated, and it’s a tango between artificial intelligence and human intelligence. Machines don’t (yet) think on their own; they need people to teach and guide them. At Tethr, it’s our team of speech analytics experts that teaches our clients how to shape their data in order to surface insight.

I spent time with one of our lead speech analytics team members, Amanda Luciano, and one of our data scientists, Jonathan Walker, to get a better sense for how this process unfolds.

Jonathan described it as a process much like teaching a child how to fetch a tennis ball. “At first, you ask the machine to bring you back a tennis ball…and it brings something back, but it’s a stick, not a tennis ball. You teach it that a stick isn’t a tennis ball…and then it comes back with a football. You teach it that tennis balls are round…and it comes back with a baseball. You teach it that tennis balls are yellow and fuzzy and it eventually comes back with the right thing. Think about how you or I teach Pandora what sort of music we like and don’t like by giving songs a thumbs-up or a thumbs-down. Over time, it will deliver a very accurate result…but it doesn’t start that way.”

Teaching a machine to understand a nuanced concept like advocacy takes time. The first step our speech analytics team takes is to just put pen to paper and think about what the concept might sound like. As Amanda explained, “We try to place ourselves in the rep and customer’s shoes and think through the way something like advocacy might be expressed.”

This sort of brainstorming leads to commonly understood utterances like:

  • “Let’s go ahead and do that for you”
  • “Let me see what I can do”
  • “I’m going to take care of this for you”

So, the team built a machine learning script that captured these common phrases—and when the results came back (much like in the tennis ball-fetching example), there were lots of spot-on hits…but plenty of false positives as well. There were also some matches close enough that the team decided to expand the script and include them in the training set. For instance, “Let me check (or look into) what I can do for you” represents the same idea as the original training set, so they taught the machine to include these phrases in the future.
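To make that loop concrete, here is a heavily simplified sketch of one refinement pass. It assumes the “script” is just a set of phrases and that a human reviewer has already labeled a sample of results; the function names and verdict labels are hypothetical, not Tethr’s.

```python
# Hypothetical sketch of one refinement pass over an advocacy phrase set:
# run the current phrases, review a sample of hits, then add near-miss
# variants and set aside phrasings that turned out to be false positives.

ADVOCACY_PHRASES = {
    "let's go ahead and do that for you",
    "let me see what i can do",
    "i'm going to take care of this for you",
}


def find_hits(transcripts, phrases):
    """Return (call_id, phrase) pairs wherever a phrase appears in a transcript."""
    hits = []
    for call_id, text in transcripts.items():
        lowered = text.lower()
        hits.extend((call_id, p) for p in phrases if p in lowered)
    return hits


def refine(phrases, review):
    """Apply human review: add flagged near-miss variants, drop false-positive phrasings."""
    additions = {p for p, verdict in review.items() if verdict == "add_variant"}
    removals = {p for p, verdict in review.items() if verdict == "false_positive"}
    return (phrases | additions) - removals


# Example: a reviewer flags two near-miss variants of "let me see what i can do".
review = {
    "let me check what i can do": "add_variant",
    "let me look into what i can do": "add_variant",
}
ADVOCACY_PHRASES = refine(ADVOCACY_PHRASES, review)
print(len(ADVOCACY_PHRASES))  # 5
```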

Over and over, the team tests its scripts against larger and larger call sets. Our current advocacy category has been tested against more than 200 million minutes of customer conversations. Along the way, the team encountered many less common (but nonetheless important to include) utterances like:

  • “I won’t let you down”
  • “I’ve got another option for you”
  • “I can assure you”

If they are unsure about a specific usage, the team goes back to the original transcript, or even the call recording, to listen to the interaction and gauge for themselves whether the utterance fits the concept they’re trying to capture. The latest iteration of our advocacy category encompasses 130 relevant phrases, captured through 28 discrete machine learning scripts—and since each phrase can appear in multiple different usages, the machine is capturing thousands of different advocacy-related utterances.
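As a rough illustration of how a single script can cover many surface forms of the same idea, here is a toy regular-expression version. Tethr’s actual scripts are certainly more sophisticated; the pattern below is purely an assumption for demonstration.

```python
import re

# Toy illustration of one "script" catching several usages of the same phrase.
# This regex is an assumption for demonstration, not how Tethr's scripts work.
CHECK_WHAT_I_CAN_DO = re.compile(
    r"\blet me (check|look into|see) what (i|we) can do\b",
    re.IGNORECASE,
)

utterances = [
    "Let me see what I can do for you.",
    "Okay, let me look into what we can do here.",
    "Let me check what I can do about that charge.",
]
print(all(CHECK_WHAT_I_CAN_DO.search(u) for u in utterances))  # True
```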

The obvious benefit of having a precisely tuned category like advocacy is that you can leverage the power of machine learning to identify, literally in seconds, how the concept appears across millions of customer conversations, identifying where advocacy was used and where it wasn’t—and to what effect.

With millions of conversations tagged for use of advocacy, our data scientists tested the technique itself against outcomes our customers care about, like recontact rates (a big driver of customer effort, not to mention cost) and sales conversion. With one large insurance company, we found that using advocacy language decreases the likelihood that a customer will recontact the company within seven days by roughly five percent—a huge reduction for an organization that handles millions of customer calls in a given year. And for one home services provider we work with, we found that advocacy had a massive lift on sales conversion: when reps used advocacy language, it increased sales conversion by more than 22%, which—to our surprise—was a far greater conversion impact than even asking for the sale.
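A back-of-the-envelope version of that outcome test is easy to picture: once every call carries an advocacy tag and an outcome flag, you compare rates across the two groups. The sketch below is a bare-bones illustration with made-up field names; a real analysis would also control for issue type, rep, customer segment and so on.

```python
# Bare-bones, hypothetical version of the outcome test: compare the 7-day
# recontact rate for calls tagged with advocacy against calls without it.
# Field names are invented; a real analysis would control for confounders.

def recontact_rate(calls):
    return sum(c["recontacted_7d"] for c in calls) / len(calls)


def advocacy_effect(calls):
    with_adv = [c for c in calls if c["advocacy"]]
    without_adv = [c for c in calls if not c["advocacy"]]
    return recontact_rate(without_adv) - recontact_rate(with_adv)


calls = [
    {"advocacy": True,  "recontacted_7d": False},
    {"advocacy": True,  "recontacted_7d": True},
    {"advocacy": False, "recontacted_7d": True},
    {"advocacy": False, "recontacted_7d": True},
]
print(advocacy_effect(calls))  # positive => fewer recontacts when advocacy is used
```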

This analysis helped us prove that not all advocacy language is created equal. For example, in a sales interaction, it’s much better to demonstrate “declarative advocacy” (e.g., “I have the perfect package for you”), but such a confident, declarative approach doesn’t work well in service calls because it sends the customer the message that there’s only one possible course of action…and if it doesn’t work, you might be out of luck. The better approach there is for reps to demonstrate “flexible advocacy” (e.g., “I have a few ideas for how to fix this…let’s try this one first.”). The message to the customer is that there’s more than one way to solve the problem.

One of the things that we often find at Tethr is that negative hits can be as instructive as positive hits when testing a new category. In this case, we found a recurring theme when we audited the negative hits—something that ended up producing an entirely new concept that actually has more bearing in service interactions than even the original concept of advocacy. It’s advocacy’s evil twin: powerless to help.

“Powerless to help”—or, put differently, when reps hide behind policy—is really the opposite of demonstrating advocacy, and its effect on the customer experience is nothing short of disastrous. To give you a sense, for the insurance company I discussed earlier (the one where advocacy language predicted a roughly five percent reduction in the probability of a recontact), we found a six percent increase in the probability of recontact when powerless-to-help language is used. For another large insurer, we were able to link up customer call data with completed survey responses and saw that when reps used powerless-to-help language, it resulted in a 27 percent increase in the likelihood the customer would give the call a “high effort” score. And for a credit union we work with, we saw that powerless-to-help language drove a six percent decrease in the probability that a customer would give the credit union a high NPS score.

It turns out that in the English language, at least, there are a lot more ways to shirk responsibility than to take responsibility. Specifically, we identified 317 relevant phrases in total, which we’ve coded into 26 different machine learning scripts. Some of the common ones you might have guessed yourself:

  • “There’s nothing I can do”
  • “That’s not an option”
  • “There’s no way for me to do that”

But we also turned up some less common phrases that show up, like:

  • “I have limitations”
  • “Those are just the rules we have”
  • “I don’t have that ability/power”

Of course, these concepts of advocacy and powerless to help are just two examples of a whole battery of categories that together fall under the heading of effort-reduction language techniques—or, what my co-authors and I called “experience engineering” in the book. Other examples include setting expectations, positive and negative language and acknowledgment. What’s more, even for these “completed” categories, we continue to evolve our understanding of how language can affect customer outcomes—so the categories themselves are very much living things. As my colleague, Amanda, explained to me, “Concepts like advocacy and powerless to help are never really complete as categories. Human language is incredibly complex, and we are constantly finding new, quirky ways in which an agent can present them on a call.”

In the next post, we’ll look at how we’ve used AI and machine learning to help companies crack one of the most intractable customer experience issues and one of the largest drivers of customer effort—repeat contacts—helping companies to execute on the concept of “next issue avoidance” we outlined in The Effortless Experience.

Contact us at Tethr for more information about how we’re translating effort-reduction concepts into human language using the power of machine learning.

The Effortless Experience team at CEB that I mentioned earlier is now part of a standalone company named Challenger. This team is a great resource for anyone looking to learn more about the Effortless Experience research—and how to develop effort-reduction skills at the frontline. They work with organizations around the world on this sort of stuff every day and have a wealth of experience and insight to share. Specifically, “advocacy” is part of their Effortless Experience™ Skills Framework, which teaches customer service reps how to deliver low-effort interactions.
