Data privacy in AI-powered contact centers: Charting the course in the GenAI era

Adam Larsen

January 26, 2024

Integrating Generative AI and large language models in contact centers marks a transformative leap forward in operational efficiency and customer experience enhancement. These advanced technologies offer unprecedented opportunities for streamlining workflows, personalizing customer interactions, and augmenting the capabilities of human agents. 

However, alongside these exciting possibilities, there are significant risks that need careful consideration. Before implementing any AI or machine learning (ML) technology, contact center leaders must navigate the challenges of biased or inaccurate information and safeguard their customers’ data.

At Tethr, trust and data safety form the bedrock of our machine learning and AI model development. Our approach is rooted in co-creation with opt-in customers, fostering a collaborative and transparent model-building process. Through rigorous testing, we ensure our data is fit for purpose and uphold the highest standards of data trust and safety. This is particularly important given recent developments in generative AI applications within conversation intelligence platforms.

The following sections explore the key data privacy considerations for contact center leaders and illustrate how Tethr proactively addresses these concerns.

3 potential risks with generative AI models

From summarizing conversations to coaching agents, generative AI applications can reduce operational costs and augment human skills. However, contact center leaders must consider the potential data security risks–and how to address them. 

Harmful biases

There’s a risk of AI models learning from implicit biases in their training data. Biases can affect the outputs of any AI/ML model but can be particularly problematic in customer-facing generative AI models. Imagine, for instance, a virtual assistant in a banking app varying its recommendations based on prohibited factors like race or gender. The financial services company responsible for the virtual assistant would face significant legal and ethical issues.

Ensuring that a generative AI model’s training data is as diverse and representative as possible can reduce the risk of bias. Employees should also review AI-generated content, especially for sensitive or high-stakes customer interactions.
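To make this concrete, consider a simple representativeness check run before any training begins. The Python sketch below is purely illustrative, not a description of Tethr's pipeline; the `intent` field and the five-percent threshold are assumptions chosen for the example.

```python
from collections import Counter

def audit_representation(records, field, min_share=0.05):
    """Flag categories that make up too small a share of the training data.

    records   -- list of dicts, e.g. labeled conversation transcripts
    field     -- attribute whose balance we want to check
    min_share -- minimum acceptable fraction per category (assumed threshold)
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items() if n / total < min_share}

# Example: is any call-intent category scarce in the corpus?
sample = [{"intent": "billing"}] * 90 + [{"intent": "cancellation"}] * 3
print(audit_representation(sample, "intent"))
# {'cancellation': 0.032...} -> source more examples before training
```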

Inaccurate information

Generative AI is adept at crafting novel content by statistically predicting the most likely continuation of any given sequence. However, this very capability also makes it susceptible to "hallucinations," the production of plausible-sounding but factually incorrect information. It's a delicate balance: loosening a model to be more creative often increases the risk of inaccuracies, while clamping down on inaccuracies can constrain its ability to generate novel content. This trade-off is a crucial consideration, particularly in contact centers, where the accuracy of information is paramount and the consequences of errors can be severe.

Imagine a frontline employee unwittingly citing misleading details from an AI-curated knowledge base article, or a chatbot erroneously promoting a nonexistent product offering–or selling a new vehicle for $1. Such scenarios are detrimental in any business context, but they can lead to particularly severe repercussions in regulated sectors, including substantial fines, legal repercussions, and irreversible reputational damage.

Mitigating these risks requires intertwining AI content generation with robust grounding and referencing strategies. By anchoring AI responses to reputable source material and integrating real-time updates, fact-checking protocols, and transparent sourcing, businesses can significantly bolster the accuracy and reliability of AI-generated content. While these measures may not completely eliminate the risk of hallucinations, they represent a diligent approach to managing and deploying AI responsibly. Just as with biases, it remains essential for individuals to review AI-generated content critically, ensuring its alignment with factual and current information. 
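One common way to implement this grounding pattern is retrieval-augmented generation: retrieve passages from a vetted knowledge base, then instruct the model to answer only from those passages and to cite them. The sketch below is a simplified illustration, not any particular product's implementation; it uses naive keyword overlap where a production system would use embedding-based retrieval.

```python
def retrieve(query, knowledge_base, top_k=2):
    """Rank knowledge-base articles by keyword overlap with the query.
    (A real system would use embeddings; word overlap keeps this self-contained.)"""
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query, docs):
    """Anchor the model to numbered sources and require transparent citation."""
    sources = "\n".join(
        f"[{i}] {d['title']}: {d['text']}" for i, d in enumerate(docs, start=1)
    )
    return (
        "Answer using ONLY the sources below, citing them like [1]. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

kb = [
    {"title": "Refund policy", "text": "Refunds are issued within 14 days of purchase."},
    {"title": "Shipping", "text": "Standard shipping takes 3 to 5 business days."},
]
question = "How long do refunds take?"
prompt = build_grounded_prompt(question, retrieve(question, kb))
# `prompt` is then sent to the model; answers outside the sources are refused.
```

Requiring numbered citations does double duty: it anchors the model to approved content, and it gives human reviewers a transparent trail for fact-checking each answer.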

Data leakage

The evolution of AI from traditional machine learning models, which primarily generated numerical results, to advanced technologies capable of producing texts, images, and videos, signifies a pivotal shift in data security dynamics.

A case in point is the recent lawsuit by The New York Times against OpenAI and Microsoft. This action underscores a critical vulnerability: with the right prompts, GenAI can replicate verbatim content from its training data. Another example came with the release of OpenAI's custom GPTs, where targeted prompting can extract everything from a GPT's defined functions and internal reference documents to the very instructions used to orchestrate it.

These recent trends highlight the need for businesses to extend their focus beyond addressing AI biases to protecting against the accidental release of sensitive data. Security protocols require ongoing enhancement, along with investment in tools that ensure training data is thoroughly sanitized and devoid of sensitive information. Such vigilance is critical to keep pace with advancing generative AI technologies.
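As an illustration of what sanitization can look like at its simplest, the sketch below redacts common identifiers with regular expressions before transcripts are stored or used for training. These patterns are an assumed starting point only; production redaction typically layers ML-based entity detection and human spot checks on top.

```python
import re

# Assumed patterns for common identifiers; a production redactor would
# combine these with ML-based named-entity detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text):
    """Replace matches with typed placeholders so transcripts stay usable."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Card 4111 1111 1111 1111, reach me at jo@example.com or 555-867-5309."))
# Card [CARD], reach me at [EMAIL] or [PHONE].
```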

How Tethr approaches data privacy and safety 

At Tethr, our comprehensive approach to AI development is founded on four key strategies, each playing a crucial role in ensuring the fairness and effectiveness of our AI models.

1. Fairness through unawareness

This strategy deliberately excludes sensitive demographic attributes such as race, ethnicity, and gender from our AI models. By not using these attributes, we aim to prevent the AI from making decisions influenced by these factors, thereby reducing explicit biases. However, this method alone isn't enough to eliminate all forms of bias, especially those that might be indirectly present in the data.
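In code, fairness through unawareness can be as simple as guaranteeing that protected fields never reach feature extraction. A minimal sketch, with an assumed attribute list:

```python
# Assumed exclusion list; in practice this is set by policy and legal
# review, not hard-coded by individual engineers.
SENSITIVE_ATTRIBUTES = {"race", "ethnicity", "gender", "age", "religion"}

def strip_sensitive(record):
    """Drop protected attributes before a record enters feature extraction."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_ATTRIBUTES}

raw = {"transcript": "I need help with my bill.", "gender": "F", "region": "TX"}
print(strip_sensitive(raw))
# {'transcript': 'I need help with my bill.', 'region': 'TX'}
```

Note that a field like `region` survives the filter even though geography can act as a proxy for protected attributes; that is exactly the indirect bias this strategy cannot catch on its own, which is why the strategies below complement it.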

2. Synthetic data 

This approach enables us to generate diverse, realistic scenarios, significantly enriching our training datasets with experiences not typically found in existing data. Importantly, we use generative AI both to develop this synthetic data and to meticulously validate it, ensuring its relevance and accuracy. This strategy is proving highly effective and aligns with our commitment to ethically responsible, representative AI solutions that serve a broad spectrum of customer interactions.
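A generate-then-validate loop captures the essence of this approach. In the hypothetical sketch below, `llm` is a stand-in for whichever model endpoint is in use, and only candidates that pass a second, independent validation pass are admitted to the training set.

```python
import json

def llm(prompt):
    """Stand-in for a call to a generative model endpoint (assumed)."""
    raise NotImplementedError("wire this to your model provider")

def make_synthetic_examples(scenario, n=5):
    """Ask a model to write rare-but-realistic support exchanges."""
    prompt = (
        f"Write {n} short, realistic customer-support exchanges about: "
        f"{scenario}. Return a JSON list of strings."
    )
    return json.loads(llm(prompt))

def validate_example(example, scenario):
    """Second, independent model pass: keep only relevant, realistic examples."""
    verdict = llm(
        f"Does this transcript realistically depict '{scenario}'? "
        f"Answer YES or NO.\n\n{example}"
    )
    return verdict.strip().upper().startswith("YES")

# Only candidates that survive validation enter the training corpus:
# candidates = make_synthetic_examples("billing dispute after an outage")
# keepers = [ex for ex in candidates
#            if validate_example(ex, "billing dispute after an outage")]
```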

3. Algorithmic auditing

While we believe in the value of the first two strategies, we recognize every strategy has limitations and can introduce its own bias, so we also conduct rigorous algorithmic auditing. This process systematically reviews and analyzes our AI models to identify hidden biases. We conduct regular audits not just at the initial stages but throughout the lifecycle of the AI model; this continuous scrutiny helps us safeguard against biases emerging as data evolves.
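One standard audit compares a model's error rates across groups on a held-out evaluation set, using group labels the model itself never sees. A persistent gap flags hidden bias even when protected attributes were excluded from training. A simplified sketch:

```python
from collections import defaultdict

def error_rates_by_group(examples):
    """Compare error rates across groups on a held-out evaluation set.

    examples -- iterable of (group, prediction, label) tuples; group labels
                live only in the audit set, never in the model's features.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for group, prediction, label in examples:
        totals[group] += 1
        errors[group] += prediction != label
    return {g: errors[g] / totals[g] for g in totals}

eval_set = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
]
print(error_rates_by_group(eval_set))
# {'group_a': 0.33..., 'group_b': 0.67...} -> a gap worth investigating
```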

4. Continuous feedback loops 

The final pillar of our approach is the implementation of continuous feedback loops. We engage customers in building and updating our models, working directly with select customers from diverse industries to feed their input back into the AI training process. This feedback is invaluable for understanding our models’ real-world impact and making necessary adjustments, and it ensures that our AI solutions remain relevant, effective, and aligned with the needs and values of our users.

By integrating these four strategies–fairness through unawareness, synthetic data, algorithmic auditing, and continuous feedback loops–we believe our AI solutions are ethically sound and uphold the highest standards of fairness.

Final takeaways

As Generative AI and large language models become a key part of the customer service tech stack, it’s up to contact center leaders to marry innovation with rigorous data privacy and safety practices. It’s a delicate balance, but it’s essential to ensure efficiency without compromising customers’ data or trust. 

At Tethr, we’re collaborating with contact center and CX leaders who are not just keeping pace with technology but actively shaping its future. These forward-thinking organizations are committed to pioneering new standards in AI ethics and innovation while keeping the customer experience at the center. Together, we’re charting a course toward a transformative era in contact center operations.
