Disruptive technologies have been fundamentally reshaping the way we work for decades. The development of bots and automated messaging has particularly transformed the world of call centers, enhancing the role of agents around the world. The same is true of the call center quality assurance (QA) function. But has the time come to reevaluate the usefulness of the call center QA function altogether? After all, there are several massive issues inherent in call center QA.
In the past, companies used manual call listening to better understand the customer experience. AI made this obsolete. Today, most companies are using some sort of speech analytics technology to handle QA, but is it enough?
A day in the life of a QA manager
Imagine a QA manager’s day looking something like this: sitting at a desk, combing through a sample of calls and measuring each one against a checklist. The manager then uses the criteria on the list to grade the representative’s customer experience score. Here are some of the boxes they’re ticking off:
- Script adherence
- Proper greeting
- Required closing
- Compliance (proper authentication, retrieval of account number, disclosure statement)
- Skills & behaviors (professionalism, empathy, subject matter expertise, confidence, friendliness)
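To make the mechanics concrete, a checklist like this is typically reduced to a weighted scorecard applied to each call. Here is a minimal sketch in Python; the criteria names and weights are hypothetical illustrations, not taken from any specific QA product:

```python
# Hypothetical QA scorecard: each criterion gets a pass/fail result and a weight.
CRITERIA = {
    "script_adherence": 20,
    "proper_greeting": 10,
    "required_closing": 10,
    "compliance": 30,        # authentication, account number, disclosure
    "skills_behaviors": 30,  # professionalism, empathy, expertise
}

def score_call(results: dict) -> float:
    """Return a 0-100 score from pass/fail results on each criterion."""
    earned = sum(w for name, w in CRITERIA.items() if results.get(name, False))
    return 100 * earned / sum(CRITERIA.values())

# Example: a call that hit every criterion except the required closing
print(score_call({
    "script_adherence": True,
    "proper_greeting": True,
    "required_closing": False,
    "compliance": True,
    "skills_behaviors": True,
}))  # 90.0
```

Note that every number here is a subjective choice: which criteria appear, how they are weighted, and what counts as a pass are all decided by the QA team, which is exactly where the problems described below come in.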
QA managers in large call centers performing these evaluations manually can only listen to a few calls per rep each month. Not only is this an ineffective method built on an unrepresentatively small sample, it’s a costly one.
Let’s not forget about inherent bias, a factor that always comes into play when dealing with human beings with human feelings. Bias makes any data less actionable: three different QA managers might have three different reactions to the same customer interaction. Biased reviews and performance scores are leading causes of poor morale and high attrition rates in call centers.
Because managers can only score a few calls per rep, they receive pushback on unfavorable reviews (“Of course they listened to the only call that didn’t go well!”), which means reps can opt to appeal their scores. Reps are often paid based on their ratings, so they need to be able to contest subjective scores. As you can probably guess, the appeals process is yet another expensive and time-consuming activity for the call center QA function.
What about businesses with automated call center QA processes?
While companies have reaped efficiency gains from implementing speech analytics, have they really gotten any closer to closing the gap between a company’s brand promise and the experience actually delivered to customers? Even in businesses that automate these QA processes, the focus has traditionally been on monitoring what the agent says (proper greeting, script adherence, proper closing) and on optimizing efficiency by reducing average handle time.
Stop entertaining a broken system
The gist is that QA doesn’t drive quality in the slightest. While there are increasing claims that using machine learning to automate QA and eliminate the need for manual call scoring is the ultimate cure, it simply isn’t true. Even if an unbiased machine listens to 100 percent of your calls and grades them accurately, the scorecard that reps are measured against warps results: the items being quantified have no real impact as business drivers.
For example, say a business decides that using a customer’s name at least four times during a call increases the likelihood of a positive interaction. The tactic gets added to the scorecard as an evaluated criterion. At the end of the quarter, however, managers conclude that it doesn’t actually move their customer experience metrics and abandon it completely, likely without telling the reps. The reps never learn of the change and miss out on points because of the ever-shifting system.
But the things being tested aren’t of any true value anyway.
Using a customer’s name, following a standard greeting, and inserting your organization’s name twice during a call does nothing to drive quality. It doesn’t decrease churn or boost sales conversions. It doesn’t improve your Customer Effort Score (CES), Net Promoter Score (NPS) or Customer Satisfaction Score (CSAT), because moving those metrics requires quantifiable improvements to the experience itself.
Reevaluate your call center QA function ASAP
Rather than beating a dead horse, start improving the customer experience by driving true quality. Tethr is an enterprise listening platform that offers the Tethr Effort Index (TEI)—the market’s first machine learning-based, predictive effort score. TEI allows organizations to track customer effort at the conversation level and immediately dig into the high-effort interactions that create disloyalty and churn. The TEI score is assigned by the Tethr platform at the completion of each customer interaction, so managers can instantly understand the level of effort involved.
Tethr was designed to help companies close the experience gap. Rather than use speech analytics to automate what your agents are saying to customers, use Tethr to understand not only agent behavior but what your customers are trying to tell you. Your customers are already telling you everything you need to know about their experience with your product, their store/website experience, about the rep that sold them your product, about your brand message and about your pricing, all of which can be used to prioritize and improve CX.
Lead the call center QA revolution with machine learning. Disproportionate sample sizes will become a thing of the past, calls for all representatives will be evaluated without inherent human biases, and costly appeals will become obsolete at last. Reposition the QA managers who were previously grading calls manually into a more valuable role: bona fide coaches for your reps. By liberating data and insights from the call center, call center leaders can move beyond their traditional place as a cost center and earn a seat at the leadership table.
Most important of all, TEI was built to measure what drives effort—customers who experience low-effort service are more likely to remain loyal customers. When you combine Tethr’s ability to improve how agents engage with customers with its ability to reach beyond the call center, a powerful CX story is born.
Are you curious about what’s causing high-effort experiences for your customers? Request a demo to see TEI and the Tethr platform in action.