
Customer Support: from Findability to Promptability

GPT-3 based customer support agent

Bridging the gap between Generative AI and the support experience

ChatGPT is causing waves in the enterprise world, just like it’s doing in practically every other field. It’s promising nothing short of a revolution! Generative AI (like ChatGPT) and Large Language Models (LLMs) more broadly have demonstrated impressive applications across fields, from writing code to creative writing.

In the customer support world we’re already seeing applications of LLMs, mainly for answering questions about technically simple products, where answers drawn from internet-wide knowledge can be good enough and users are more forgiving of inaccuracies in the response.

In a recent poll we ran, 78% of customer support leaders believe that AI will replace support agents solving issues on complex systems within 5 years, and 41% say it will happen in under 2 years! In other words, these leaders believe that ChatGPT-like systems will be able to generate precise enough responses to solve customer problems totally autonomously in this time period.

GPT Poll

This could be great news for enterprises, both because better knowledge capabilities are vital for their success, and because this can potentially enable cheaper and more accurate customer support. Knowledge-based organizations are constantly looking for ways, tools, and platforms that can give their support teams the edge over the competition, deliver the best CX, and remove friction from finding answers and resolving issues to the utmost extent.

Yet, for customer support of technically complex systems, where deep knowledge of the system and possible solutions is required, the revolution doesn’t seem to have arrived just yet. 

Here’s what is missing to make the anticipated transformation a reality, and how you can plug that gap. 

What are large language models?

First, let’s cut through the hype and define our terms. A language model is a set of probabilistic weights that estimate how likely certain terms, phrases, and wordings are to appear in a text, given the terms around them. Large language models are exceptional for two reasons:

  • They are trained on enormous datasets
  • In addition to retrieving or classifying existing texts, they are able to generate new ones, hence the name generative AI
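
To make the “probabilistic weights” idea concrete, here is a minimal sketch that prints the most likely next tokens for a short prompt. It uses the openly available GPT-2 model via the Hugging Face transformers library, which is purely an illustrative assumption; ChatGPT’s own model is not publicly available.

```python
# Minimal sketch: a causal language model assigns probabilities to the next token.
# GPT-2 is used here only as an openly available stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The customer reported that the login page"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# Probability distribution over the next token, given everything so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={float(prob):.3f}")
```

Generation is then just repeated sampling from this distribution, appending each chosen token and predicting the next one.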

ChatGPT’s astounding capabilities are possible because it was trained on data from across the public internet, exposing it to much of the information humankind has produced to this point. But the true secret of ChatGPT lies in how that dataset was built: the model was trained on the relationship between a vast number of given contexts (prompts) and appropriate response texts.

With this understanding, we can start to examine the potential that LLMs offer to transform the fundamental idea of enterprise knowledge, and specifically knowledge retrieval, as well as the barriers that still stand in its way.

The Limitations of LLMs

Though LLMs are incredibly powerful human-like text generators, they are still limited to operating in the type of contexts they were trained on. 

An LLM is only as good as its training data, something many people don’t seem to realize. An LLM can only produce relevant text - whether that’s a song, an answer to a customer support question, or a line of code - if it has been trained on a dataset that includes examples of that type of text and those topics.

In addition, the generative side of LLMs has a serious trustworthiness issue. As explained above, a large part of the strength of LLMs lies in their ability to draw on enormous training datasets for answers to questions. This is great for producing readable text, or even generic code (assuming a human is there to ensure its quality), but not so great for consistently generating correct, focused answers to domain-specific issues based on domain-specific knowledge, especially in complex environments.

Generative LLMs are trained to produce text the way a human would, regardless of its factual correctness. They can even make up answers that are highly convincing but, critically, entirely wrong, otherwise known as hallucination.

What can LLMs bring to the customer and employee experience?

We tend to think of enterprise knowledge as the sum of information items stored in various systems in the company, such as articles, emails, messages, and customer tickets. Until now, interaction with enterprise knowledge has been based entirely on information retrieval, or being able to quickly and efficiently find these relevant knowledge items. We term this “findability.”

The recent arrival of generative LLMs has triggered a transformation of this paradigm. Rather than serving solely as a source for retrieval, human-developed enterprise knowledge can become a platform for new, machine-generated, need-specific knowledge. This change has many implications for the roles of humans versus machines: the machine is no longer just a retrieval mechanism but also a knowledge curator, no longer just a librarian but a ghostwriter.

In terms of work processes, this has the potential of revolutionizing everything from information sharing in and between teams, to the customer self-service experience. Specifically in customer support, it’s clear that the days of reactive case solving in support are numbered. Whether it takes two years or ten, the writing is on the wall. LLMs like ChatGPT will be better at enterprise support than humans, and it’s going to happen faster than we expect. 

But the LLM revolution cannot happen without two critical elements.

How to develop ‘Enterprise Grade’ LLMs

To overcome the trustworthiness issues of LLMs, there are two elements that are absolutely necessary:

  1. Training LLMs on domain specific complexities

Any application of LLMs in complex environments requires ‘fine-tuning’ the language model so it can deal with company-specific information and data types. This means ‘feeding’ the LLM training data that teaches it both the domain-specific information and the relations between its different elements.
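
As a rough illustration, here is a minimal sketch of that kind of fine-tuning, assuming an open causal model (GPT-2) and the Hugging Face transformers and datasets libraries; the ticket texts are hypothetical stand-ins for cleaned, domain-specific training data, not xFind’s actual pipeline.

```python
# Sketch of domain-specific fine-tuning on prompt/response pairs distilled
# from (hypothetical) past support cases.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical domain-specific examples: question plus the known-good resolution.
tickets = [
    {"text": "Q: Sync fails with error E-1042 after upgrade.\n"
             "A: Re-issue the API token; E-1042 indicates an expired credential."},
    {"text": "Q: Dashboard widgets load but show no data.\n"
             "A: Check that the retention policy has not archived the source tables."},
]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = Dataset.from_list(tickets).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the tuned weights now reflect the domain's vocabulary and relations
```

In practice the training set would contain many thousands of such cleaned examples, which is exactly where the quality of the underlying dataset matters most.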

  2. Focusing the sources of text generation

Even when fine-tuned, LLMs are still ‘black boxes’ that can make up information very confidently. It is therefore critical that, when generating text for a given need (whether answering questions or developing knowledge), the generated text is bound to relevant human-developed knowledge, so it doesn’t ‘run wild’.

This can be done with a two-phase process: the first step selects the knowledge items that will form the context (using classic knowledge retrieval methods), and the second provides that context to the LLM as a prompt for text generation.
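
Here is a minimal sketch of that two-phase process. The knowledge items, the TF-IDF retriever, and the prompt wording are illustrative assumptions; they stand in for whatever retrieval engine and knowledge base an enterprise actually uses.

```python
# Phase 1: classic retrieval bounds the context. Phase 2: that context becomes
# the prompt handed to the LLM, so generation stays anchored to known knowledge.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_items = [
    "Error E-1042 means the API token has expired; re-issue it from the admin console.",
    "Empty dashboard widgets are usually caused by archived source tables.",
    "SSO login loops are resolved by clearing the IdP session cookie.",
]

question = "Why does syncing fail with error E-1042 after the upgrade?"

# Phase 1: score each knowledge item against the question and keep the best matches.
vectorizer = TfidfVectorizer().fit(knowledge_items + [question])
doc_vectors = vectorizer.transform(knowledge_items)
query_vector = vectorizer.transform([question])
scores = cosine_similarity(query_vector, doc_vectors)[0]
top_items = [knowledge_items[i] for i in scores.argsort()[::-1][:2]]

# Phase 2: bind generation to the retrieved items via the prompt.
prompt = (
    "Answer the customer question using ONLY the knowledge items below. "
    "If the answer is not in them, say you don't know.\n\n"
    "Knowledge items:\n- " + "\n- ".join(top_items) +
    f"\n\nQuestion: {question}\nAnswer:"
)
print(prompt)  # this prompt is then sent to the (fine-tuned) LLM for generation
```

The instruction to answer only from the supplied items is what keeps the generated text from ‘running wild’: if retrieval returns nothing relevant, the model is told to say so rather than invent an answer.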

The future of customer support and CX with LLMs and xFind

xFind brings two critical capabilities that enable the Enterprise Grade LLM experience described above:

  1. xFind is specifically adept at developing clean datasets from very complex knowledge sources (such as past cases) that can be used continuously for LLM training and fine-tuning.
  2. xFind’s AI retrieval models set the stage for the LLM to work with the precise data it needs, by automatically engineering the prompt based on a cleaned-up version of customer interactions, and relevant retrieved knowledge items.

By combining the power of xFind and LLMs, you can tap into all the benefits of a company-specific, precision-trained LLM that enables you to: 

  • Deliver precise proactive support for your customers to self-serve their issues, by providing specific answers to questions, rather than entire knowledge items.
  • Develop new, contextually-relevant responses on the fly, by summarizing past cases that solved similar problems.
  • Continuously generate relevant customer-facing knowledge from existing tickets that do not, on their own, constitute a full response, filling gaps where knowledge is missing.

Remember: the cleaner your training dataset and the better your initial retrieval, the better your generated text will be. Learn more about how xFind brings that secret sauce to the table to power effective generative AI for enterprise customer support. Speak to us today.