Even though it has been a month since the Customer Response Summit, one comment by Chase Auto’s Chief Marketing and Customer Experience Officer, Renee Horne, really stuck with me.
In her keynote, Renee mentioned that Chase Auto takes employee feedback seriously because it often serves as an early indicator of customer responses.
This makes a lot of sense, especially in the customer service space. Customer service agents interact with customers directly and frequently. They know the customers intimately, so their feedback is usually a good reflection of the reactions we can expect from customers.
But I want to stretch this a bit further.
If agent feedback, whether explicit or implicit, can serve as an indicator of customer experience, what does it mean when 60% of agents plan to quit in the next six months?
The contact center agent churn problem
To start, continual agent churn means a steady state of inexperienced agents. These agents, despite the best intentions, don’t yet have the capability to handle customer queries as effectively and efficiently as a veteran agent would.
We conducted our own research on this matter and found that handle time decreases as agent tenure increases. It is not difficult to see that as a new group of agents joins, the duration of each interaction increases, the resolution rate drops, and customers get passed on to another agent, oftentimes having to explain yet again why they are calling. Our findings are also supported by earlier third-party data.
This is not a good customer experience.
On the business side, agent churn is a costly problem: the cost of the hiring process, the productivity lost when people are pulled into interviews, the constant need to train new agents, and the impact on customers’ experience. And the cycle continues. Altogether, this amounts to $20,800 every time an agent quits, money that could have been invested in other CX projects.
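To make the cost argument concrete, here is a rough sketch of how a per-agent churn cost might be tallied. The component amounts below are illustrative placeholders (chosen only so they sum to the cited $20,800), not a published breakdown.

```python
# Rough sketch of how a per-agent churn cost might be tallied.
# All component figures are illustrative placeholders, not actual benchmarks.
cost_components = {
    "recruiting_and_hiring": 4_000,       # job postings, screening, interview logistics
    "interviewer_time_lost": 2_500,       # productivity lost to interview panels
    "onboarding_and_training": 7_500,     # trainers, materials, weeks at low output
    "ramp_up_productivity_gap": 5_000,    # longer handle times while new agents learn
    "customer_experience_impact": 1_800,  # repeat contacts, escalations, lost customers
}

cost_per_departure = sum(cost_components.values())
print(f"Estimated cost per agent departure: ${cost_per_departure:,}")

# Scale to a 200-seat contact center with 60% annual attrition.
annual_churn_cost = 200 * 0.60 * cost_per_departure
print(f"Estimated annual churn cost: ${annual_churn_cost:,.0f}")
```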
Now, there have been a lot of fixes proposed to tackle agent attrition. Workforce engagement management, flexible work arrangements, outsourcing, you name it. There is also technology put in place, like chatbots or agent copilots, to bridge the gap resulting from agent churn.
Some perhaps offer small or incremental improvements, but nothing really tackles the root of the problem that a lot of us don’t want to admit: the job itself doesn’t come with high pay, decision-making power, or great advancement opportunities. If anything, an obsession with performance metrics and coaching leads to even more agents leaving.
Most of us just treat this as part of the cost of running a business.
Living with agent churn while customer expectations continue to rise is not a sustainable situation.
This calls for looking at the agent churn problem differently. What if, instead of treating only the symptoms, we bypass the issue altogether?
It probably isn’t a surprise that I am going to bring up generative AI. After all, it seems everything under the sun is now about generative AI. But not just any generative AI: a customer-facing generative AI agent that can handle tasks autonomously, with human supervision when the resolution calls for it.
But, before we dive into how AI agents can address problems brought about by agent churn, we need to unpack AI applications in the contact center a bit, so we can narrow it down to what really helps.
This AI is not that AI
There is a broad category of generative AI CX applications that are agent-facing. They fall primarily into three buckets, all of which help agents be more productive and effective:
- Live agent assist: suggesting actions, cuing up relevant knowledge base articles, or composing messages for agents.
- After-call work: generating call summaries and sending follow-ups (sketched in the example after this list).
- Back-office automation: creating tasks for other departments in internal systems.
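As a concrete illustration of the after-call work bucket, here is a minimal sketch of what an automated call-summary step could look like. The llm_complete function is a hypothetical stand-in for whatever model interface your vendor or platform provides.

```python
from dataclasses import dataclass

@dataclass
class Transcript:
    call_id: str
    turns: list[str]  # e.g. ["Customer: ...", "Agent: ..."]

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted language model."""
    raise NotImplementedError("Wire this up to your model provider of choice.")

def summarize_call(transcript: Transcript) -> str:
    """Generate the after-call summary a human agent would otherwise write by hand."""
    prompt = (
        "Summarize this support call in three bullet points: the customer's issue, "
        "the resolution, and any follow-up owed.\n\n" + "\n".join(transcript.turns)
    )
    return llm_complete(prompt)
```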
Then there are also chatbots that incorporate generative AI in some way to improve how they communicate and to serve up information more effectively. These chatbots don’t take actions, which means they can’t do much to resolve customer issues directly.
There are also generative AI agents that are not customer-facing. These solutions can autonomously take actions to support human agents and other employees within the organization. They are connected to the knowledge base, internal systems such as a CRM, pre-established policies, and more. Such AI agents could be deployed internally to assist your human agents, so they can address customer queries more effectively.
All of these still require human agents to interface with customers and take the actions customers need, even for simple tasks. Ultimately, these applications depend largely on your human agents and can’t address the issues that stem from agent churn.
Then, there are customer-facing generative AI agents. Instead of depending on human agents to interface with customers, these AI agents can directly resolve customer requests that do not require a veteran agent’s assistance. Some examples include changing a reservation, inquiring about a billing mistake, or issuing a refund. Just like human agents, these AI agents should be able to (see the sketch after this list):
- Listen to the customer
- Understand their needs
- Propose helpful solutions
- Take action to resolve the issue
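In code terms, that loop might look roughly like the sketch below. The intent classifier, the resolution plan, and the approval rule are deliberately toy-sized stand-ins; a real customer-facing agent would use language models and live system integrations for each step.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    action: str
    requires_human_approval: bool = False

def classify_intent(message: str) -> str:
    """Toy intent classifier; a real agent would use a language model here."""
    text = message.lower()
    if "refund" in text:
        return "refund_request"
    if "reservation" in text:
        return "change_reservation"
    return "general_question"

def plan_resolution(intent: str) -> Plan:
    """Map an intent to an action; refunds need a supervisor's sign-off."""
    if intent == "refund_request":
        return Plan(action="issue_refund", requires_human_approval=True)
    if intent == "change_reservation":
        return Plan(action="update_reservation")
    return Plan(action="answer_from_knowledge_base")

def handle_customer_message(message: str, conversation: list[str]) -> str:
    """One turn of a hypothetical customer-facing AI agent:
    listen, understand, propose, and act."""
    conversation.append(f"Customer: {message}")   # 1. Listen
    intent = classify_intent(message)             # 2. Understand the need
    plan = plan_resolution(intent)                # 3. Propose a helpful solution
    if plan.requires_human_approval:
        reply = "Let me confirm that with a supervisor and get right back to you."
    else:
        reply = f"All set. I've taken care of that for you ({plan.action})."  # 4. Act
    conversation.append(f"Agent: {reply}")
    return reply
```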
Because of their unique ability to handle customer tasks autonomously and immediately, customer-facing AI agents are in the best position to allow contact centers to bypass the agent churn problem.
Scalability with human-in-the-loop
Sometimes the AI agent encounters something that requires a human supervisor’s involvement before going forward. For example, you might want a human supervisor to authorize a bill-pay extension, a refund, or an upgrade. This is the point where vendors typically bring up “human in the loop.” The terminology is a bit tricky, because not all vendors mean the same thing by human-in-the-loop.
In many cases, having a human in the loop means a human agent monitors everything the AI agent does. In other cases, it means serving up a short clip of the customer-agent interaction to the human agent, or having human involvement in training the AI agent. And finally, some simply refer to human-in-the-loop as handing the customer inquiry to a human agent.
None of these actually takes advantage of the scalable nature of the technology to give you the best return on your investment. At worst, they recreate the barrier that keeps customers from getting what they want: resolution.
According to Ujet.cx’s Exceeding US Customer Expectations report, which surveyed 1,000 consumers, 61% of respondents said that having their issue solved the first time is what matters most when contacting an organization.
The human in the loop that will make a difference is one that can scale. For example, instead of passing the customer to a human agent to resolve, the AI agent can pull in a human supervisor for input so it can resolve the customer issue directly, while maintaining control of the customer interaction. This creates a single, streamlined experience for the customer. And because the generative AI agent can handle multiple customer interactions simultaneously, even over voice, and because the human agent is only needed for occasional guidance or authorization, concurrency remains high.
This concurrency is also what creates the scalability: a human supervisor can now oversee multiple AI agents, just as they supervise multiple human agents.
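To make scalable human-in-the-loop concrete, here is one hypothetical way to structure it: each AI-led conversation keeps ownership of the customer and pauses only for the single approval it needs, while one supervisor serves an approval queue fed by many concurrent conversations. The queue-and-future design below is an illustrative sketch, not a reference implementation.

```python
import asyncio

async def ai_conversation(customer_id: int, approvals: asyncio.Queue) -> None:
    """A hypothetical AI agent handling one customer end to end.
    It pauses only for the single step that needs a supervisor's sign-off."""
    # ... the AI gathers details and proposes a bill-pay extension ...
    decision: asyncio.Future = asyncio.get_running_loop().create_future()
    await approvals.put((f"Bill-pay extension for customer {customer_id}", decision))
    approved = await decision  # the AI keeps the conversation; only this step waits
    outcome = "extension granted" if approved else "offered alternatives"
    print(f"Customer {customer_id}: {outcome}")

async def supervisor(approvals: asyncio.Queue) -> None:
    """One human supervisor serving requests from many concurrent AI conversations."""
    while True:
        request, decision = await approvals.get()
        print(f"Supervisor reviewing: {request}")
        decision.set_result(True)  # in reality, a human clicks approve or deny
        approvals.task_done()

async def main() -> None:
    approvals: asyncio.Queue = asyncio.Queue()
    sup = asyncio.create_task(supervisor(approvals))
    # Five concurrent AI-led conversations, one supervisor in the loop.
    await asyncio.gather(*(ai_conversation(i, approvals) for i in range(5)))
    sup.cancel()

asyncio.run(main())
```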
Proceed – with caution
The right generative AI agent can have a dramatically positive impact on your contact center. That said, it is good to have a healthy dose of skepticism.
A key consideration is AI safety. These considerations arise largely from the nature of generative AI itself. One of the major topics within AI safety is hallucination: outputs that are not grounded in the input data or the knowledge base the AI is supposed to rely on. Hallucination is an intrinsic part of generative AI, so be wary of vendors who claim they can eliminate it.
Because of the risk associated with hallucination (e.g., giving a customer the wrong information), protocols need to be put in place to prevent, detect, and address it. Human-in-the-loop offers an additional layer of protection as well.
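One common pattern for the detect part of such a protocol is a grounding check: before a reply goes out, verify that it is supported by the knowledge base passages the AI was given. The sketch below is a deliberately crude, word-overlap version of the idea; production systems typically rely on stronger checks such as entailment models or citation verification.

```python
def is_grounded(reply: str, source_passages: list[str], threshold: float = 0.6) -> bool:
    """Crude grounding check: does the reply's content-word overlap with the retrieved
    knowledge base passages clear a threshold? A real system would use something
    stronger, such as an entailment model or per-claim citation checks."""
    stopwords = {"the", "a", "an", "is", "are", "to", "of", "and", "for", "your", "you"}
    reply_words = {w.strip(".,!?") for w in reply.lower().split()} - stopwords
    source_words = {w.strip(".,!?") for p in source_passages for w in p.lower().split()}
    if not reply_words:
        return False
    return len(reply_words & source_words) / len(reply_words) >= threshold

# If the check fails, don't send the reply: regenerate it, fall back to a templated
# answer, or escalate to a human supervisor instead.
```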
A related consideration is long-term maintenance and upkeep if the business plans to build its own AI agent. The advancement of large language models (LLMs) has made it easy to set up a working prototype on a laptop. The heavy lift comes after prototype development: continuously monitoring, evaluating, and refining the system demands resources. For example, how do you know the AI agent is answering customer questions correctly? How often does it give a bad response? What causes the bad response, and what can be done to correct it? What’s the impact when it scales? Even with internal expertise, the ongoing resources involved can be quite significant if the business is not working with a vendor or partner.
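Those monitoring questions are exactly what an evaluation harness is meant to answer. Here is a minimal sketch of the kind of ongoing measurement involved; the grade labels are placeholders for whatever review process (human review, automated evals, or both) a team actually uses.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    question: str
    ai_response: str
    grade: str  # e.g. "good", "wrong_info", "off_policy", "unhelpful"

def bad_response_rate(interactions: list[Interaction]) -> float:
    """Share of graded interactions that were anything other than 'good'."""
    if not interactions:
        return 0.0
    return sum(1 for i in interactions if i.grade != "good") / len(interactions)

def failure_breakdown(interactions: list[Interaction]) -> dict[str, int]:
    """Count failures by cause so fixes can be prioritized
    (knowledge gaps, policy drift, model regressions, ...)."""
    counts: dict[str, int] = {}
    for i in interactions:
        if i.grade != "good":
            counts[i.grade] = counts.get(i.grade, 0) + 1
    return counts
```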
All this is to say that there are indeed strings attached, and caution needs to be taken, especially since the AI agent will interact directly with customers. But the good news is that these risks can all be managed through various strategies, such as input safety to prevent jailbreaking, output safety to prevent sharing not just hallucinations but also biased or offensive language, and more. It’s important to work with vendors who can clearly explain how they plan to help you manage these risks.
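As a rough illustration of how input and output safety layers wrap around the AI agent, consider the sketch below. The keyword checks are trivial placeholders; real deployments use dedicated classifiers or moderation services for each layer.

```python
def input_is_safe(message: str) -> bool:
    """Placeholder input-safety check, e.g. to catch obvious jailbreak attempts.
    Real systems use trained classifiers, not keyword lists."""
    red_flags = ["ignore your instructions", "reveal your system prompt"]
    return not any(flag in message.lower() for flag in red_flags)

def output_is_safe(reply: str) -> bool:
    """Placeholder output-safety check for offensive or biased language;
    pair it with a grounding check like the one sketched earlier."""
    blocked_terms = ["<offensive term>"]  # stand-in for a real moderation layer
    return not any(term in reply.lower() for term in blocked_terms)

def safe_respond(message: str, generate_reply) -> str:
    """Wrap the AI agent's reply generation with input and output guardrails."""
    if not input_is_safe(message):
        return "I'm sorry, I can't help with that request."
    reply = generate_reply(message)
    if not output_is_safe(reply):
        return "Let me connect you with a member of our team who can help."
    return reply

# Example: safe_respond("Where is my order?", lambda m: "Your order ships tomorrow.")
```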
At the end of the day, AI applications should improve customer experience, not cause more problems.
According to the What Happens After a Bad Experience report by the Qualtrics XM Institute, which surveyed 28,000 consumers, more than one-third of consumers reduce or stop spending after a poor experience with an organization.
Where we are with AI reminds me a lot of the time when everything was on-premise and nothing was in the cloud. Similarly, there are risks involved here, but when managed well, the benefits outweigh the risks. Companies that learned to manage the cloud well were able to scale quickly and effectively.
When the AI agent is well-managed and done right, it can tackle major business problems and dramatically improve customer experience. There is not yet a benchmark available in the market for customer-facing AI agent performance, but based on our customers’ data, the results can be quite dramatic.
Envision the future contact center
Implementing AI agents doesn’t happen in a vacuum; it will shape what the contact center looks like in the future. In a recent round table with CX leaders hosted by Execs in the Know, there was clear consensus that transactional tasks will be almost fully handled by AI in the next two to three years.
With transactional tasks handled by AI, the next question emerges: what are we going to do with human agents?
The reality is that AI agents will never replace all human agents – there will always be customer problems that require human finesse, understanding, and emotional intelligence. There will also always be opportunities for relationship-building with customers.
But, with the implementation of customer-facing AI agents, what we can now envision is a different contact center, where human agent work is more meaningful and valued. We can chart a brand new career path for human agents, in which agents are upskilled to work with AI and to help supervise, evaluate and operate agentic AI systems.
That is how generative AI will ultimately help us tackle the problems caused by agent churn.
Guest post, written by: Theresa Liao, Director of Content & Design, at ASAPP
To learn more about how generative AI agents can transform your customers’ experiences, go to https://www.asapp.com/. Or, listen to the replay of the Execs In The Know and ASAPP webinar, Generative AI Agents for CX: Separating Fact from Fiction, originally hosted on September 17, 2024.
ASAPP is an artificial intelligence cloud provider committed to solving how enterprises and their customers engage. Inspired by large, complex, and data-rich problems, ASAPP creates state-of-the-art AI technology that covers all facets of the contact center. Leading businesses rely on ASAPP’s AI Cloud applications and services to multiply agent productivity, operationalize real-time intelligence, and delight every customer.