Every day, we have conversations with companies who are serious about reducing customer effort, just like you. Those companies know they're hard to do business with, and they're ready to take the first step toward changing that. Often, that first step looks like buying a conversation analytics tool or platform to help measure effort through unstructured data. (Woohoo!)
There’s just one problem: There are so many options for conversation analytics and conversational intelligence software out there—and not a lot that differentiates those options from each other to the untrained eye. That’s where we come in.
Today, we’re going to spend some time talking about common mistakes we see people make when looking to invest in a conversation analytics solution. Then, we’ll share what you should look for instead, in order to choose a conversation analytics tool that will actually help you reduce effort at your company.
Here are three pitfalls to avoid when choosing a conversation analytics tool:
Pitfall #1: Applying bad structure to unstructured data
The survey, by definition, is a limiting instrument: a handful of questions that generate a simple score from a limited sample. That's fine for what it is, but as customer experience leaders, we often struggle to understand what factors drove that score. Unstructured data has the opposite problem: it's everything. The unstructured data you can pull from calls is such a massive data set that looking for insights in it is like trying to boil the ocean. And many companies, in trying to make that project smaller, lose vital data in the process.
That’s the first pitfall we see in conversation analytics tools: Applying bad structure to unstructured data.
You can’t cut corners
Faced with the flood of information a conversation analytics tool makes accessible, many people try to tame the sea of unstructured data with surface-level work. We've seen people use keyword spotting, basic sentiment, and tone, and call that measuring effort. This is genuinely dangerous: it may reduce the amount of data you have to sift through, but it also creates huge blind spots, like the difference between the agent's role in an interaction and the customer's. If we're missing these things, we're not fully understanding our customers, and we could be making bad business decisions.
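To make the blind spot concrete, here's a minimal sketch (entirely hypothetical, not Tethr's implementation, with made-up word lists and transcript data) of why plain keyword spotting falls short: it counts negative words across a whole call without knowing who said them, while even a simple speaker-aware pass keeps the agent's and the customer's contributions separate.

```python
# Hypothetical sketch: keyword spotting vs. a speaker-aware pass.
# The word list and transcript are invented for illustration only.

NEGATIVE_WORDS = {"frustrated", "cancel", "problem"}

transcript = [
    ("customer", "I'm frustrated, this has been a problem for weeks"),
    ("agent", "I'm sorry about the problem, let's fix it right now"),
]

def keyword_spot(transcript):
    """Naive approach: count negative words across the whole call."""
    return sum(
        1 for _, utterance in transcript
        for word in utterance.lower().replace(",", "").split()
        if word in NEGATIVE_WORDS
    )

def speaker_aware_spot(transcript):
    """Attribute each negative word to the speaker who said it."""
    counts = {"customer": 0, "agent": 0}
    for speaker, utterance in transcript:
        for word in utterance.lower().replace(",", "").split():
            if word in NEGATIVE_WORDS:
                counts[speaker] += 1
    return counts

print(keyword_spot(transcript))        # 3 "negative" hits, but whose?
print(speaker_aware_spot(transcript))  # {'customer': 2, 'agent': 1}
```

The naive count makes this call look uniformly negative, even though one of the three hits is the agent empathetically acknowledging the problem, which is exactly the kind of context a real effort measure has to preserve.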
Structuring unstructured data: Tethr’s conversation analytics tool
Here at Tethr, we use research as a way to thread that needle. It can be difficult to understand all the facets of customer effort on your first day with a conversation analytics tool, so we don’t ask you to. Instead, we’ve codified our findings on effort into algorithms that automatically show you what’s important in your customer conversations. Unstructured data provides you with more information than you’ll ever be able to use—but don’t settle for the shortcuts. When you look for a conversation analytics tool, look for one that uses a powerful lens, backed by research, to sort and structure your unstructured data.
Pitfall #2: Automating the wrong Quality Assurance processes
The second pitfall we see folks run into a lot is automating the wrong QA processes. The whole field of speech analytics really grew out of a desire by companies to get out of the business of having people manually listen to calls—which we know is problematic for lots of reasons. It’s expensive, it’s labor-intensive, and letting a machine do it instead allows you to redeploy your QA staff to do other things like coaching, process improvements, and internal projects.
So, what’s the problem?
While this sounds great, keep in mind that if you're serious about effort reduction, your QA scorecard will need some serious updating. This is a dangerous mistake, too, because a lot of vendors approach practitioners and say, "Hey, we can strip out all this cost from your manual QA process, get you a 100% sample size, and just have a machine do it." That sounds really appealing. But many of those platforms fall short of the real power of automation by doing exactly that.
Ask your QA automation vendor these questions
If you're serious about reducing customer effort, you need to update your QA scorecard to reflect that and to ask questions about effort. You need to figure out what you should be measuring. In fact, if you're going to automate QA, we recommend that the first questions you ask your vendor include:
- What should I be measuring?
- What are the agent behaviors that matter?
- What are the things that actually increase effort?
Ask your vendor all these questions, and really try to understand their perspective, and what research led them to measure the things they do. Ask them, “What’s in your agent impact score that will actually lead to effort reduction? Because that’s ultimately what I’m getting paid for, and what my customers want. Show me that measuring this will make a difference.” These are the relevant questions that can make you a smart consumer when it comes to automating QA for the things that matter.
Pitfall #3: The real-time conversation analytics tool
Finally, we find that many buyers get caught up in the idea of real-time analytics. While we do have a specific point of view on real-time, it's not necessarily a bad thing. That said, many vendors use it poorly, as a way to flash instructions or directions at a contact center rep while they're on a call. We believe that underserves the bigger objective of allowing your reps to exercise their own judgment, which we know is critical to creating a low-effort organization.
Real-time actually discourages high-performing behavior
We're not just standing on a soapbox here. One of the interesting things we found in our own research on effort is that high performers, when handling customer interactions, often purposefully let the customer air their frustration to get all that dirty laundry out on the table early on. Now imagine a high performer letting the customer vent for a few minutes under a real-time system: red flashing lights would fill their screen, screaming that the customer is upset and needs to be fixed right now, even when that's not what needs to happen in the moment.
Real-time isn’t bad, it’s just not about reducing effort
With all the above said, real-time analytics isn't bad; it's just not the direction we encourage people who are interested in reducing customer effort to go. If you care about reducing customer effort, other processes and other forms of technology can be more useful in achieving that goal. On a larger scale, it's worth keeping two questions in mind whenever you evaluate a new tool: How is this helping me reduce disloyalty? How is this working to reduce customer effort?
How to choose a conversation analytics tool…
These are just a few of the mistakes we see people make when they’re looking to invest in a conversation analytics tool. While we’re proud of the work we’ve done to make Tethr a state-of-the-art conversation intelligence platform, this isn’t a Tethr sales pitch. We’re just here to help you better understand the market, and become a better-educated buyer. In the end, you have to choose the platform that best fits your needs – and ideally gets you closest to the true voice of the customer (VoC).
We hope this helps you with any upcoming decisions regarding conversation analytics tools.
For more information on effort reduction and the latest in artificial intelligence as it pertains to the customer experience space, be sure to check out our recent podcast, Customer Effort: Through an AI Lens. To learn more about Tethr, you can check out our platform page or request a demo to see our solution in action.