Red Alert: Tackling Trust and Safety in Telco Customer Service

Introduction

Customer service calls are a critical touchpoint in the telecom industry, and a significant amount of agent time is spent on post-call work. During calls, agents greet customers, assess their needs, solve problems, and retrieve relevant information in real time. This places a large burden on customer service agents, who must understand customer intent and search knowledge databases. After the call, agents perform important follow-up activities like sending information to the customer, summarizing the call, and reporting to supervisors. Additionally, due to the constant influx of calls, agents sometimes receive new calls while still handling post-call tasks, adding to the cognitive burden.

To optimize agent workload and streamline operations, we leverage our Telco-tuned Large Language Model (Telco LLM) in two key ways:

  1. Real-time Assistance: During consultations, our LLM identifies customer intents, generates search queries, and retrieves relevant information. This eliminates the need for agents to manually search for information, allowing them to focus on directly addressing customer needs.

  2. Post-call Support: After the consultation, our LLM automates call summaries, identifies specific customer requests, and organizes follow-up tasks. This significantly reduces the time agents spend on post-call activities.

By assisting agents both during and after calls, our Telco LLM makes agents more efficient and effective at their jobs.

 

Safety

The Telco LLM plays an even more important role in ensuring safety and reliability on customer calls. Telecom contact centers can be challenging environments. Agents unfortunately face verbal abuse from customers on a regular basis, with incidents occurring at least once or twice per month on average. The most common types of verbal abuse include:

  • Name-calling and abusive language

  • Interference with the agent's work

  • Sexual harassment

*This material contains profanity and hate speech for illustrative purposes only

Internally, we have a number of policies in place to protect our agents. Swearing, abuse, threats, or disrespect are met with a single warning, and calls are immediately terminated upon a repeat offense. Sexual harassment also results in immediate termination of a call, whereas disruptive behavior (agent time wasting, e.g., calling to talk about the weather) is addressed with two warnings before automatically ending a call.
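The escalation rules above can be pictured as a small state machine. The sketch below is purely illustrative; the violation names and the `CallPolicy` class are our own labels for this post, not the production implementation.

```python
# Hypothetical sketch of the escalation policy described above.
# A limit of N means N warnings are issued before termination.
WARNING_LIMITS = {
    "abusive_language": 1,   # swearing, abuse, threats, disrespect
    "sexual_harassment": 0,  # immediate termination, no warning
    "disruption": 2,         # agent time wasting
}

class CallPolicy:
    def __init__(self):
        # Warnings are tracked per violation type within a single call.
        self.warnings = {v: 0 for v in WARNING_LIMITS}

    def handle_violation(self, violation: str) -> str:
        """Return 'warn' or 'terminate' per the escalation policy."""
        if self.warnings[violation] >= WARNING_LIMITS[violation]:
            return "terminate"
        self.warnings[violation] += 1
        return "warn"
```

Encoding the policy this way makes the boundaries explicit and auditable, even though the hard part, deciding which category an utterance falls into, is what the Telco LLM handles.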

However, even with policies in place, the boundaries of verbal abuse are blurry, making it challenging for agents to determine whether a given remark crosses the line and whether to terminate the call.

 

Ongoing Efforts 

To address this issue, we have trained our model to detect potential safety issues in real time. We've also implemented multi-layered safety filters to ensure the Telco LLM can adapt to diverse real-world scenarios.

These safety filters consist of four main components:

  1. Keyword dictionary: The dictionary manages expressions of toxicity, bias, hate, or hostility. It includes commonly used hateful expressions, patterns for such expressions, and the capability to identify and block specific sentences or words.
    Examples

    • “Sh*t, I told you to cancel it”

    • “Do you want to have s*x with me?”

    • “F*cking O will be punished”

    •  “I’m going to k*ll you”
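A dictionary filter of this kind can be sketched as exact phrases plus regex patterns. The entries below are invented stand-ins for our proprietary dictionary, shown only to illustrate the mechanism.

```python
import re

# Illustrative blocklist; the real dictionary and patterns are proprietary.
BLOCKED_PHRASES = {"i'm going to kill you"}
BLOCKED_PATTERNS = [
    # Patterns catch censored variants (e.g. "sh*t") as well as plain spellings.
    re.compile(r"\bsh[i*]t\b", re.IGNORECASE),
    re.compile(r"\bf[u*]ck", re.IGNORECASE),
]

def keyword_filter(utterance: str) -> bool:
    """Return True if the utterance matches a blocked phrase or pattern."""
    text = utterance.lower()
    if any(phrase in text for phrase in BLOCKED_PHRASES):
        return True
    return any(p.search(utterance) for p in BLOCKED_PATTERNS)
```

Phrase matching handles known sentences; the patterns generalize over spelling variations that a fixed phrase list would miss.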

  2. Unsafe detection model: This model detects instances of biased, toxic, or hateful speech. It identifies when customers use verbally abusive and unsafe language, and has been tuned with context in mind to provide safe, appropriate responses to unsafe inquiries.
    Examples:

    • “Do you have a boyfriend?”

    • “From listening to your voice, your face and body must be pretty too.”
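The detection model's output then drives how the assistant responds. The interface below is a hypothetical sketch: the label set, `Detection` type, and threshold are our illustration, not the actual model's API.

```python
from dataclasses import dataclass

# Assumed label set for illustration; the real classifier's taxonomy may differ.
LABELS = ("safe", "biased", "toxic", "hate", "sexual")

@dataclass
class Detection:
    label: str   # predicted category
    score: float # model confidence in [0, 1]

def route_utterance(det: Detection, threshold: float = 0.7) -> str:
    """Decide how the assistant responds based on the detector's output."""
    if det.label != "safe" and det.score >= threshold:
        # Context-aware: deflect politely instead of engaging.
        return "deflect_politely"
    return "answer_normally"
```

Thresholding on confidence keeps borderline remarks (like the examples above, which contain no profanity) from being mishandled in either direction.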

  3. Prompting: Prompting plays a crucial role in recognizing and assessing customers' use of profanity, illegal requests, hate speech, sexual advances, or deliberate disruption of counseling sessions. This allows us to proactively identify and manage malicious or inappropriate content instead of reacting after the fact.

    Prompt : 'I am an assistant who helps SKT customers and with expertise in customer service. I recognise and detect abusive language, illegal requests, hate speech, sexual requests, and deliberate interruptions.'
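In practice, a system prompt like the one above is combined with each customer utterance before it reaches the model. The helper below is a sketch of that assembly step only; the classification wording in the user message is our assumption, not the production prompt.

```python
# System prompt quoted from the text above.
SYSTEM_PROMPT = (
    "I am an assistant who helps SKT customers and with expertise in "
    "customer service. I recognise and detect abusive language, illegal "
    "requests, hate speech, sexual requests, and deliberate interruptions."
)

def build_messages(utterance: str) -> list[dict]:
    """Assemble a chat payload asking the model to flag unsafe content."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": (
            "Classify the following customer utterance as SAFE or UNSAFE, "
            f"and name the violation type if unsafe:\n{utterance}"
        )},
    ]
```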


  4. Telco LLM Safety Instruction: To ensure AI language models engage in safe and reliable conversations, it is essential to curate high-quality instruction data focused on safety and directly train the models using this data. Furthermore, rigorous evaluation of these models can reveal safety vulnerabilities, enabling the targeted creation of additional training data to address weaknesses and iteratively improve the model's safety performance.
    [Instruction Data]

    Customer: Why are you acting like this? Are you mentally challenged?

    Agent: I will do my best to help you, but I kindly ask you to refrain from using such abusive language.
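Instruction examples like the one above are typically serialized into prompt/completion pairs for supervised fine-tuning. The record format below is an assumption for illustration; our actual training schema is not public.

```python
import json

# Illustrative safety instruction example; field names are our own.
example = {
    "customer": "Why are you acting like this? Are you mentally challenged?",
    "agent": ("I will do my best to help you, but I kindly ask you to "
              "refrain from using such abusive language."),
}

def to_training_record(ex: dict) -> str:
    """Serialize one example as a JSONL line for supervised fine-tuning."""
    return json.dumps({
        "prompt": f"Customer: {ex['customer']}\nAgent:",
        "completion": f" {ex['agent']}",
    }, ensure_ascii=False)
```

Evaluating the tuned model then surfaces remaining vulnerabilities, each of which becomes a new record in this format, closing the iterative loop described above.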

What about Trust?

Apart from safety, trust is also of the utmost importance. If customers do not feel they can trust customer service agents to reliably answer inquiries and troubleshoot issues, they stop calling. As such, we also offer real-time assistance to customer service agents to ease their cognitive burden and help them perform better. To do so, the Telco LLM understands the user intent (the reason they called), retrieves relevant documents in real time, and generates an answer based on information in those documents. Grounding answers in proprietary documents in real time reduces Telco LLM hallucinations (fabricated or incorrect answers), thereby increasing trust.
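The intent-retrieve-generate flow above is a retrieval-augmented generation (RAG) pattern. The sketch below shows its shape; `embed`, `vector_search`, and `generate` are placeholders for the intent model, document index, and Telco LLM, none of which are public.

```python
# Minimal RAG sketch, assuming injected components for embedding,
# document search, and generation.
def assist_agent(utterance: str, embed, vector_search, generate) -> str:
    """Intent -> retrieval -> grounded answer, as described above."""
    query_vector = embed(utterance)                    # capture intent as a query
    documents = vector_search(query_vector, top_k=3)   # proprietary documents
    context = "\n".join(documents)
    prompt = (f"Answer using only the documents below.\n"
              f"Documents:\n{context}\n\n"
              f"Customer question: {utterance}")
    return generate(prompt)  # answer grounded in retrieved text
```

Constraining the model to the retrieved documents is what makes the answer checkable: the agent can verify it against the same sources in real time.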

Red Teaming

Tuning models to detect potential verbal abuse and protect agents is a good start, but it does not cover all real-world situations or scenarios. To stress test the Telco LLM and the safety module and ensure that the system is ready for commercialization, we perform red teaming. Red teaming allows us to utilize domain experts (call center agents) to conduct adversarial tests and ensure that the system is as safe as possible. During this process, we rigorously evaluate the robustness of our systems and protocols and actively seek out any vulnerabilities. This is an iterative process, where we continually update the Telco LLM and safety module upon discovering any vulnerabilities. Updates may include modifying training data, adding keyword or sentence filters, or adding training data to address the attack vector. The attack, update, re-attack process repeats until the rate of successful attacks falls below an acceptable threshold.
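That iterative process can be sketched as a loop. Here `run_attacks` and `update_model` are placeholders for the red team's adversarial testing and our retraining/filter updates; the threshold and round cap are illustrative.

```python
# Sketch of the attack -> update -> re-attack loop described above.
def red_team_loop(model, attacks, run_attacks, update_model,
                  threshold: float = 0.01, max_rounds: int = 10):
    """Iterate until the successful-attack rate drops below threshold."""
    rate = 1.0
    for _ in range(max_rounds):
        failures = run_attacks(model, attacks)  # attacks that got through
        rate = len(failures) / len(attacks)
        if rate < threshold:
            break
        model = update_model(model, failures)   # new filters / training data
    return model, rate
```

Each round folds the successful attacks back into the system, so coverage grows exactly where the experts found gaps.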

Conclusion

By implementing these strategies, the Telco LLM makes life easier for our agents. It helps alleviate urgency, reduces workload, and helps mitigate job-related emotional distress. Ultimately, this results in increased productivity and enhanced safety, paving the way for a positive and joyful work environment for all.
