Most CX and customer service leaders I talk to believe quality assurance (QA) in their organizations is a broken process in need of “modernization.” A CEB (now Gartner) survey from a few years back found that a mere 12 percent of service leaders were confident their QA process delivered tangible business results.
But in these conversations, I often find that these leaders lament superficial problems with QA: that it’s a manual process requiring loads of expensive labor to assess a very small percentage of agent interactions. They miss the real root cause of QA’s problems: the criteria agents are assessed against don’t actually tie to business outcomes, so the process itself delivers no tangible benefit to the company or its customers.
Are you automating a broken system?
Every QA organization I’ve ever seen is stuck in a vicious cycle of trying to guess at the criteria that will make a difference to business performance. NPS flatlining? Maybe we should require that reps thank customers for their loyalty at the beginning of every call. CSAT on the decline? Maybe it’s because reps aren’t showing enough empathy or professionalism. And when those metrics don’t move, QA teams tweak their criteria and give it another go. It’s like trying to guess the combination to a padlock.
It’s for this reason that I always advise service and CX leaders to be skeptical of any technology vendor who claims they can “automate” your quality assurance process. Automation is all well and good, but if all the vendor is doing is mechanizing what you already do (and if what you already do doesn’t actually deliver tangible business results), then you’re just getting worse more efficiently. You’ve automated your own underperformance.
To be clear, the idea of quality assurance is a good one. There’s nothing inherently wrong with the notion that we should track what our reps say and do so that we can help them develop the skills, behaviors, and competencies necessary to deliver great customer experiences and business outcomes. The problem is that companies have no idea what those things are.
Reassess your QA function with real quality in mind
Recently, our team at Tethr sought to develop a model for assessing agents on the things that actually drive the outcomes companies care about: retention, share of wallet, and advocacy. What we’ve come up with, we believe, is the world’s first algorithm that scores agents on the things that actually matter. We call it the Agent Impact Score (AIS).
To build the AIS, we started with our proprietary Tethr Effort Index (TEI), which is the market’s first and most comprehensive machine-based effort measure (built from more than 250 variables, as well as combinations and sequences of variables), and we stripped out everything not related to agents. As a broad measure of effort, TEI scoops up insights about things that happened before the call (e.g., attempts at self-service in digital channels) as well as product, brand, pricing, and sales-related effort. Many of those things are beyond the purview of the agent, and it’s hard (and unfair) to hold agents responsible for things they can’t control.
So, AIS focuses only on the things agents have direct control over: the 160 variables that capture both the behaviors agents demonstrate (e.g., advocacy, acknowledgment, positive and negative language) and the actions they take on calls (e.g., silence time, overtalk, escalations, long holds, transfers). Like TEI, AIS appears on every call that flows through Tethr, and it works on a simple 10-point scale that makes it easy for leaders and supervisors to compare performance across different contact centers, teams, and agents. And, importantly, because AIS measures only what is in the agent’s control, we sometimes see “bad” calls (i.e., calls below a 4.0 on the TEI scale) that still earn good AIS scores, because the agent did things that helped turn what could have been an abysmal call into one that was merely not great. And vice versa.
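To make the comparison idea concrete, here is a minimal sketch of rolling per-call AIS scores up to a team-level average, the kind of roll-up that lets a supervisor compare teams on the same 10-point scale. The data layout and function name are assumptions for illustration, not Tethr’s implementation.

```python
from collections import defaultdict

# Hypothetical per-call records: (team, ais_score). The record structure
# is assumed for illustration; AIS itself is Tethr's 10-point score.
calls = [
    ("team_a", 6.5), ("team_a", 8.1),
    ("team_b", 4.2), ("team_b", 5.0),
]

def average_ais_by_team(records):
    """Roll per-call AIS scores up to a team-level average for comparison."""
    totals = defaultdict(lambda: [0.0, 0])  # team -> [sum, count]
    for team, score in records:
        totals[team][0] += score
        totals[team][1] += 1
    return {team: total / count for team, (total, count) in totals.items()}
```

The same grouping key could just as easily be an agent ID or a contact-center ID, which is what makes a single-scale score convenient for side-by-side comparison.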
Metrics that matter to your business
The greatest benefit of AIS, though, is that it’s directly tied to the business outcomes that matter. TEI captures the level of effort in the interaction, which our research in The Effortless Experience, along with follow-on validation we’ve done here at Tethr, shows is directly tied to the loyalty outcomes companies care about most. Put differently, customers who have “difficult” interactions (anything below a 4.0 on the 10-point TEI scale) are much more likely to churn, much less likely to spend money with a company, and far more likely to spread negative word of mouth than customers at the other end of the spectrum (the “easy” interactions, those above a 7.0 on the TEI scale).
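The thresholds above can be sketched as a simple bucketing function. The cut-offs (below 4.0 is “difficult,” above 7.0 is “easy”) come straight from the article; the function itself is an illustrative sketch, not Tethr’s scoring logic.

```python
def classify_tei(score: float) -> str:
    """Bucket a 10-point TEI score using the article's thresholds.

    Below 4.0 = "difficult" interaction; above 7.0 = "easy";
    everything in between is treated here as "neutral" (an assumed label).
    """
    if not 0.0 <= score <= 10.0:
        raise ValueError("TEI is a 10-point scale")
    if score < 4.0:
        return "difficult"  # higher churn, lower spend, negative word of mouth
    if score > 7.0:
        return "easy"       # loyalty outcomes skew positive
    return "neutral"
```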
So we know that TEI ties directly to the business objectives companies have: to acquire, deepen, and retain their relationships with customers. AIS then measures the subset of levers within TEI that are in the agent’s control. Unlike traditional QA scorecards and checklists, then, AIS measures specifically those things that have been proven to move the needle on the business.
Overhaul traditional call center QA with the Agent Impact Score
When we deploy AIS against real customer and agent interactions, we learn some fascinating things about which behaviors and actions matter and which don’t. To our surprise, some of the things companies have long assumed improve quality don’t actually move the needle…and, even more surprisingly, some of the things long assumed to be detrimental to quality are actually positive drivers of the customer experience. Learn more about the drivers of good and bad AIS here.
Check out our on-demand webinar on aligning agent performance to business outcomes where we break down what the Agent Impact Score can do for your business. Get a breakdown on how one of our customers uses the Agent Impact Score for quality assurance here.