Sara Yonker
November 7, 2022
Your customer support agents probably start every live chat with a simple question: “Who am I talking to?” But sometimes the right question isn’t who, but what.
Live chat and chatbot customer support have exploded in popularity – and with that popularity, they’ve become a target for fraud and identity theft.
Last year, more than 50,000 people were victims of identity theft and another 50,000 victims of personal data breaches, according to the FBI.
Fraudsters’ newest tool? Chat support.
Just like customer support teams and developers create chatbots to address common customer issues, cybercriminals create chatbots of their own. The difference is the motive.
Introducing the fraudbot – a chatbot that impersonates customers and tries to open fake accounts, reopen closed accounts, or access real customer data. Not only do fraudbots pose a security risk to your customer information, but they also waste valuable agent time.
It’s not an insignificant amount of time, either. In one case, Tethr ran an AI-powered analysis for a global consumer technology retailer, searching its chat data for active fraudbots.
Just one active fraudbot attacked 83,000 times in a 2-week period, resulting in 1 million minutes of chat sessions, equivalent to $250,000 of wasted agent time.
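The math behind those figures is worth a quick sanity check. Here is a minimal back-of-the-envelope sketch using only the totals quoted above – the per-session and per-minute rates are derived, not reported:

```python
# Back-of-the-envelope check of the figures quoted above.
attacks = 83_000           # fraudbot chat sessions over two weeks
total_minutes = 1_000_000  # agent time consumed by those sessions
wasted_dollars = 250_000   # estimated cost of that time

minutes_per_session = total_minutes / attacks     # ~12 minutes per chat
cost_per_minute = wasted_dollars / total_minutes  # $0.25 per minute

print(f"{minutes_per_session:.0f} min/session, "
      f"${cost_per_minute:.2f}/min (~${cost_per_minute * 60:.0f}/hour)")
```

In other words, each bogus session tied up an agent for about twelve minutes, at roughly $15 per hour of agent time.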
To protect your company from fraudbots, you have to understand what they are and how they’re created.
A cybercriminal only needs to have a few conversations with your customer support chat to prepare. Armed with basic information about your customer service agent scripts, they can develop responses, add a few variations, and begin deploying a bot to attack your chat system.
They can then sync the bot with a database of personal information. Using email addresses and phone numbers obtained through data breaches, they can impersonate those people to other companies.
If they gain access to those accounts, they then tap into more information: payment methods, mailing addresses, and other personal identifying information.
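To make that concrete, here is a hypothetical sketch of the pattern just described: a few canned intents, each with several phrasings, filled in from stolen identity data. Every name and value below is invented for illustration.

```python
import random

# Hypothetical illustration only: a few canned intents, each with several
# phrasings, so no two transcripts are word-for-word identical.
RESPONSES = {
    "greeting": ["Hi, I can't get into my account.",
                 "Hello, I'm locked out of my account.",
                 "Hey - my login isn't working."],
    "verify":   ["Sure, my email is {email}.",
                 "It's {email}, phone ending in {phone}.",
                 "Email {email}, number ending {phone}."],
}

# In a real attack this would come from a breached-data dump; this is fake.
identity = {"email": "jane.doe@example.com", "phone": "4821"}

def reply(intent: str) -> str:
    """Pick a random phrasing for the intent and fill in the stolen details."""
    return random.choice(RESPONSES[intent]).format(**identity)

print(reply("greeting"))
print(reply("verify"))
```

Each individual transcript reads like a slightly different human conversation; only the repetition across thousands of sessions gives the script away.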
Often, customers talking to chatbots know they aren’t chatting with a live support agent. With fraudbots, the deception runs the other way: they mimic humans convincingly, even expressing frustration and emotion, so your agents have no obvious tell.
Without the use of artificial intelligence and an advanced conversational intelligence platform, detecting fraudbots is all but impossible. That’s because there’s nothing in any single chat transcript that would raise a red flag.
You need to digest every conversation and look for patterns that are statistically unlikely to repeat at scale – something only computational linguistics can do.
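Here is a minimal sketch of that idea, assuming you can export each chat session as a list of customer messages (the function names and the threshold are illustrative, not part of any Tethr API): normalize away the small variations a bot injects, then count how often the same conversational fingerprint recurs.

```python
import hashlib
import re
from collections import Counter

def fingerprint(messages: list[str]) -> str:
    """Collapse a session's customer messages into a normalized fingerprint.
    Lowercasing and stripping digits and punctuation removes the small
    variations a scripted bot injects between sessions."""
    normalized = " | ".join(
        re.sub(r"[\d\W_]+", " ", m.lower()).strip() for m in messages
    )
    return hashlib.sha256(normalized.encode()).hexdigest()

def flag_scripted(sessions: list[list[str]], threshold: int = 50) -> set[str]:
    """Return fingerprints that recur more often than any plausible
    population of human customers would produce."""
    counts = Counter(fingerprint(s) for s in sessions)
    return {fp for fp, n in counts.items() if n >= threshold}
```

A production system would use fuzzier matching than exact hashes (shingling, embeddings, edit distance), but even this crude version surfaces a script that repeats thousands of times.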
Fraudbots want access to personal information and payment methods. In that respect, your customers are the real target.
Any company that offers customer support chat can become a target for fraud, especially as the bots grow more common and advanced.
Want to learn more about how fraudbots operate, how they’re developed, and what you can do to protect against them? Download our eBook “The Rise of Fraudbots” and you’ll learn: