Quality assurance (QA) is not a new concept. It’s a decades-old process that spans many industries. Dictionary.com defines it as “a system for ensuring a desired level of quality in the development, production, or delivery of products and services” – and the first example it uses is for nursing homes!
When applied to the realm of customer service, the focus naturally turns towards people: the agents who control the phone, chat, or email experience with customers. But for some reason, most discussion regarding QA improvements within the contact center – perhaps due to a historical (over) focus on using QA to wring out cost – seems to have one dominant theme: automation.
The thinking goes, if only I can automate the entire process – pick the perfect tech investment and robot-ize as much as possible – all of my quality problems will magically disappear.
Automated into an afterthought
With this push towards automation, people – both the frontline teams and the individuals tasked with monitoring calls – feel like an afterthought rather than a primary focus. In fact, the move to automate in the contact center is often quite explicit in minimizing the human element: using technology to crack down on script adherence, prioritize handle time over first call (actual) resolution, or even eliminate situations requiring a live agent entirely.
This last piece is perhaps to be expected, as we all experiment with more and more self-service tooling and other forms of asynchronous messaging that widen agent span and offer customers more interaction options. But that shift will take time, and there’s no evidence yet that all customers actually prefer self-service over speaking to an agent. In fact, there is some evidence to the contrary, at least when customers are dealing with more complex subjects or during times of heightened anxiety.
Moreover, even those organizations viewed as best at QA – even those most successful at QA automation – still involve a lot of people in the process: supervisors tasked with creating or tuning scorecards, managers asked to drive QA improvement among their teams, and, most obviously, the individuals actually speaking to customers.
Change at the frontline of QA is still a largely human endeavor
For the foreseeable future, fundamentally, change at the frontline is still a largely human endeavor. Yet the march towards automation can often – intentionally or not – create or contribute to poor employee experiences.
Consider a QA scorecard with a heavy emphasis on sticking to the script – saying and doing the things the company has decided are important (e.g. saying the customer’s name three times, thanking the customer for her loyalty, etc.). Or even force-feeding script suggestions in real time via screen pops. It’s no surprise that agents often report feeling like factory floor workers.
Or let’s put ourselves in the shoes of a QA manager, knee-deep in appeals from agents complaining about cherry-picked results based on non-representative data sets. Those agents view the system as overly punitive – and who do they blame? You. Which makes it hard to feel great about the everyday work.
Bridging the gap between man and machine in QA
The fix here needn’t be overly complex. A basic goal can be – and I’m far from the first to suggest this – asking: do we treat our team as well as we aspire to treat our customers? After all, as the old maxim reminds us, how you treat your employees is often reflected in how your customers are treated. But getting into specifics on how to evaluate a QA program along these lines can get a bit fuzzier.
One potential inspiration comes from the world of sales. There’s an overly simple but effective frame for evaluating the quality of a sales pitch: how much of the message was about us vs. them? The “them” here is the customer. The frame is effective because far too many sales pitches dwell on us: your own products, services, and company history.
How much of our program is about us vs them?
This same frame can easily apply to the realm of QA: how much of our program, its measures, and its accompanying processes are about us vs. them (with “them” here meaning our teams or customers)?
Want to learn more about how Tethr helps companies drive effective QA? Click here for a demo.