TechSee

The 5 Challenges LLMs Pose to Service Leaders


Service leaders are increasingly looking to AI to stay ahead, or even just to stay afloat. Large Language Models (LLMs) have emerged as a powerful tool with the potential to revolutionize customer service by automating responses, personalizing experiences, and enhancing operational efficiency. However, implementing LLMs also presents its fair share of challenges. In this blog post, we will explore the five key obstacles service leaders face when integrating LLMs and offer practical advice on overcoming them.

1. Maintaining Brand Safety and Consistency

Simply put, you cannot tell an LLM precisely what to do the way you would script a traditional service bot. Instead, you train, prompt, and fine-tune the LLM to steer the AI toward the correct answer.

Remember that LLMs are generalists. It is up to you to make them specialists in your brand voice and products. To train the LLM, you must upload various knowledge assets (e.g., product documentation) and prompt the LLM to focus its answers on the information in the right documents. This is typically part of the initial LLM training and fine-tuning process. (For more on IP risk management, see part 4 below.)
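As a minimal sketch of what "prompting the LLM to focus its answers on the right documents" can look like in practice (the function name, tone parameter, and prompt wording are illustrative assumptions, not a specific vendor's API):

```python
def build_grounded_prompt(question, documents, brand_voice="friendly and concise"):
    """Assemble a prompt that instructs the model to answer only from the
    supplied knowledge assets, in the brand's voice."""
    context = "\n\n".join(f"[Doc {i + 1}] {doc}" for i, doc in enumerate(documents))
    return (
        f"You are a customer support assistant. Answer in a {brand_voice} tone.\n"
        "Use ONLY the documents below; if the answer is not in them, say you don't know.\n\n"
        f"{context}\n\n"
        f"Customer question: {question}"
    )

prompt = build_grounded_prompt(
    "How do I reset my router?",
    ["To reset the router, hold the reset button for 10 seconds."],
)
```

The resulting string would be sent to whichever LLM you deploy; the key idea is that brand voice and knowledge boundaries live in the prompt, not in the model itself.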

After training your LLM, start your deployments with an internal audience. This will allow you to thoroughly validate the LLM's efficacy, safety, and consistency before exposing customers to an unproven technology.

It is critical to invest time and resources in training the LLM to understand your products, services, and procedures. Regularly review the LLM's responses and setup to ensure the AI delivers as designed.

2. Navigating Complex Customer Interactions

When correctly set up, LLMs can excel at handling routine inquiries. However, they may struggle with complex customer interactions that require visuals, empathy, and emotional intelligence. Service leaders must strike a balance between automation and human intervention. Implementing a smooth escalation process, where LLMs transfer customers to human agents, ensures personalized support is delivered when necessary.
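The automation-versus-escalation balance is often implemented as a simple routing rule in front of the LLM. A hypothetical sketch (the thresholds, signal names, and return values are all assumptions for illustration):

```python
def route_interaction(confidence, sentiment, needs_visuals):
    """Hypothetical routing rule: let the LLM handle routine, high-confidence
    queries, and hand anything complex or emotionally charged to a human."""
    if needs_visuals or sentiment == "frustrated" or confidence < 0.7:
        return "human_agent"
    return "llm"
```

In a real deployment, the confidence and sentiment signals would come from your NLU or analytics stack, but the escalation decision itself stays deterministic and auditable.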

3. Managing LLM Overhead – Tokens Don’t Grow on Trees

Today’s LLMs are typically priced based on token usage, and tokens are not cheap.

Feeding the LLM training information? Sending customer questions to the LLM? Receiving responses from the LLM? Providing feedback to the LLM? Every one of these interactions consumes tokens – the more information you send or receive, the more tokens you will use. Without proper management, LLM deployments for service quickly become prohibitively expensive.

Now let’s explore how to manage these costs better. When starting out, many service teams try to plan or estimate the number of tokens required for each customer interaction. Take these initial estimates with a grain of salt. You will generate more realistic cost models as you gain experience.
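A back-of-the-envelope cost model can make those initial estimates concrete. The sketch below assumes the common rough heuristic of about four characters per token for English text; the per-1,000-token prices are placeholders, not any vendor's actual rates:

```python
def estimate_interaction_cost(prompt_chars, response_chars,
                              price_per_1k_input=0.01,
                              price_per_1k_output=0.03):
    """Rough per-interaction cost estimate.
    Assumes ~4 characters per token; prices are illustrative placeholders."""
    input_tokens = prompt_chars / 4
    output_tokens = response_chars / 4
    return (input_tokens / 1000) * price_per_1k_input \
         + (output_tokens / 1000) * price_per_1k_output

# e.g., a 2,000-character prompt and a 1,000-character reply
cost = estimate_interaction_cost(2000, 1000)
```

Multiplying this per-interaction figure by projected monthly volume gives a first-pass budget, which you can then correct against real token counts from your provider's usage reports.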

Another common technique is to optimize the LLM’s response length by setting appropriate parameters and truncating unnecessary information. This technique is essential when adapting the LLM to the service channel. For example, responses in chat are often far shorter than responses via email.
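Adapting response length to the channel typically comes down to a per-channel cap on the model's output tokens. A minimal sketch, where the channel names and token budgets are illustrative assumptions to tune for your own channels and model:

```python
# Illustrative per-channel output caps; tune to your channels and model.
CHANNEL_MAX_TOKENS = {"chat": 150, "email": 600, "sms": 60}

def response_params(channel):
    """Return generation parameters capped for the given service channel,
    falling back to a middle-of-the-road default for unknown channels."""
    return {
        "max_tokens": CHANNEL_MAX_TOKENS.get(channel, 300),
        "temperature": 0.3,  # lower temperature keeps service answers on-script
    }
```

These parameters would be passed with each generation request, so a chat reply is capped far shorter than an email reply without changing the prompt itself.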

One of the most popular techniques for minimizing token usage is vector embeddings. With embeddings, each chunk of knowledge is converted into a numeric vector; at query time, only the chunks most similar to the user's question are retrieved and sent to the LLM, rather than the entire knowledge set. This more technical approach is gaining popularity as enterprises gain experience working with LLMs.

Another direct result of these cost concerns is a phenomenon often called catastrophic forgetting, where the LLM loses track of the context of the service interaction. As a result, the LLM may ask users to repeat steps, or even return to an earlier step in the service flow that has already been completed. Unfortunately, while limiting the amount of information shared with the LLM can produce faster answers and help rein in costs, these savings must be balanced against the need to deliver a satisfactory service experience.
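One common way to strike that balance is to trim the conversation history sent with each request while always preserving the opening message that frames the task. A minimal sketch, where the character budget stands in for a real token budget:

```python
def trim_history(messages, max_chars=4000, keep_first=1):
    """Keep the opening message(s) (the task framing) plus as many of the
    most recent turns as fit in the budget, so the model retains the
    interaction's context without paying for the full transcript."""
    head = messages[:keep_first]
    budget = max_chars - sum(len(m) for m in head)
    tail = []
    for m in reversed(messages[keep_first:]):
        if len(m) > budget:
            break
        tail.append(m)
        budget -= len(m)
    return head + list(reversed(tail))
```

Keeping the framing message plus recent turns is a heuristic; a summarization step over the dropped middle turns is a common refinement when context loss still causes repeated steps.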

4. Data Privacy, Security, and Regulatory Compliance

Any LLM interacting directly with customers or processing IP-sensitive data must have robust security measures in place from the outset. Typical best practices like encryption protocols, access controls, and regular security audits help safeguard sensitive data. A number of leading cloud infrastructure providers have begun offering dedicated, secure hosted instances of LLMs where customer data and IP are safeguarded from third parties.

Additionally, obtaining user consent, clearly labeling AI-powered interactions, and complying with laws like the GDPR or CCPA are essential to maintaining customer trust.

On that note, if your users interact directly with an experimental LLM-powered chatbot, be sure to explain the experimental nature of this interaction to all users. Set expectations in advance. This includes carefully managing expectations with both management and end-users. Build in as many safety guardrails as possible. Before deploying, test and refine your LLM as much as possible. Then, once deployed, monitor the LLM's performance and tweak it as needed. This is a relatively new space, and LLMs have been known to say some wildly off-topic and even inappropriate things to users.

The regulatory environment around AI solutions is rapidly changing and will continue to evolve as generative AI emerges into the mainstream market. While it is always a best practice to maintain transparency with your users, many state regulators are also considering laws related to data sourcing, ethics, and more. Pay attention to emerging regulations, and remember that regulations will vary substantially from region to region.

5. Building Trust and Overcoming Resistance

Introducing LLMs into customer service may encounter resistance from employees and customers who perceive AI as a threat to human jobs or fear impersonal interactions. Building trust and overcoming these concerns require open communication and transparency. Educating employees and customers about the benefits of AI, and emphasizing how LLMs enhance rather than replace human interactions, can alleviate fears and foster acceptance. When communicating with internal stakeholders, remember to focus on the value, not the technology. While "LLM" may sound foreign and AI may sound scary to some, nearly everyone welcomes helpful service that gets them answers faster.

To conclude, LLMs can indeed deliver significant value to customer service operations, but they also present challenges that must be addressed. By proactively addressing these challenges, service leaders can harness the power of LLMs to deliver exceptional customer experiences, increase efficiency, and stay competitive in a rapidly evolving market. Embracing this technology with a strategic mindset and a customer-centric approach will pave the way for a brighter future where LLMs and human agents work together harmoniously, creating unparalleled service experiences.

Jon Burg, Head of Strategy


Jon Burg led product marketing for Wibiya and Conduit, bringing new engagement solutions to digital publishers, and launched Protect360, the first big-data powered mobile fraud solution. After 15 years of delivering value for several other technology brands, Jon joined TechSee to lead its product marketing strategy.