Building AI into Customer Service Chat

AI and Chat for MSPs

People can’t stop talking about AI right now. It’s easy to see why, with large language models (LLMs) producing such amazing results. Need some ideas for a birthday present? Want a best-man speech? Want a summary of a film? Need some code written? With a simple prompt, you can get mind-bogglingly good responses.

The implications for Customer Service software are equally incredible. At POPX, we are exploring the power of the latest LLMs, and integrating them deeply into the POPX MSP Platform, powered by ServiceNow. So, I want to share with you a little of our implementation journey over a few blog posts. We are starting at the beginning with this first one!

Enhancing Virtual Agent with AI

Most LLMs are optimised for chat. So when we decided to learn about them, the natural place to start was integrating one into ServiceNow Virtual Agent! Perhaps we could build a bot that users of the portal can talk to, and achieve some case deflection?

Virtual Agent supports running custom scripts, so it was straightforward to create a new topic that calls a script. That script handles the outbound communication to the LLM API. We chose OpenAI for its excellent results, competitive pricing and the ways it lets you fine-tune responses. More on that later!
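
To make this concrete, here is a minimal sketch of what such a script might look like, using ServiceNow’s outbound REST API (sn_ws.RESTMessageV2). The function name, system property and model shown here are illustrative rather than our exact implementation:

```javascript
// Minimal sketch of the outbound call to OpenAI's Chat Completions API
// from a ServiceNow Script Include. The function name, system property
// and model are illustrative, not our exact implementation.
function callOpenAI(messages) {
    var request = new sn_ws.RESTMessageV2();
    request.setEndpoint('https://api.openai.com/v1/chat/completions');
    request.setHttpMethod('POST');
    request.setRequestHeader('Content-Type', 'application/json');
    request.setRequestHeader('Authorization', 'Bearer ' +
        gs.getProperty('x_popx.openai.api_key')); // hypothetical property

    request.setRequestBody(JSON.stringify({
        model: 'gpt-3.5-turbo',
        messages: messages // the conversation so far (more on this below)
    }));

    var response = JSON.parse(request.execute().getBody());
    return response.choices[0].message; // the assistant's reply
}
```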

While the Virtual Agent block was quick to create, we immediately realised it needed a way to keep state. With OpenAI’s API at least, you must send the whole conversation with every request, otherwise each message is treated as a brand-new conversation. Prompts like “Can you summarise that last response please?” were met with the digital equivalent of a confused stare. While Virtual Agent is stateless, you can store variables in the session, so storing the history was actually straightforward, though it is a little annoying that you need to rebuild the object on every turn.
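
If you are curious, the state handling looks something like this sketch, assuming a Virtual Agent script action where vaVars holds topic variables and vaInputs carries the user’s utterance; the variable names are our own:

```javascript
// Sketch of keeping state across turns. Virtual Agent variables hold
// strings, so the history lives as JSON and a fresh array is rebuilt
// on every turn. "chat_history" and "user_message" are our own names.
var history = vaVars.chat_history ? JSON.parse(vaVars.chat_history) : [];

// Add the user's latest utterance, then send the *whole* conversation.
history.push({ role: 'user', content: vaInputs.user_message });
var reply = callOpenAI(history); // from the earlier sketch

// Store the assistant's answer too, so follow-ups like "summarise that
// last response" have the context they need.
history.push(reply);
vaVars.chat_history = JSON.stringify(history);
```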

Using context for meaningful customer interactions

Next on the list was a way to give the LLM some relevant, useful information. While it was wonderful chatting about what the moon was made of (not cheese, but rock and metal), we wanted it to act a little more like an MSP technician. How would it know about recent cases or the contents of our knowledge base?

There are a few ways you can teach an LLM, and the methods are improving all the time. Fine-tuning is one: you upload training data ahead of time, teaching the model what good responses look like. However, this is tedious and time-consuming. A simpler way is prompt improvement: each time you converse with the LLM, you include a bundle of useful information in the prompt. For example, you could send the contents of the knowledge base with each interaction, which would provide the basis for the answers.
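
As a sketch of that approach, a system message built from knowledge base content might look like the following; kb_knowledge and its fields are standard ServiceNow, while the prompt wording and the limit are illustrative:

```javascript
// Sketch of prompt improvement: prepend knowledge base content as a
// system message on every call. kb_knowledge and its fields are
// standard ServiceNow; the prompt wording and limit are illustrative.
function buildSystemPrompt() {
    var articles = [];
    var kb = new GlideRecord('kb_knowledge');
    kb.addQuery('workflow_state', 'published');
    kb.setLimit(5); // keep the prompt, the cost and the latency down
    kb.query();
    while (kb.next()) {
        articles.push(kb.getValue('short_description') + '\n' +
                      kb.getValue('text'));
    }
    return {
        role: 'system',
        content: 'You are a service desk assistant for an MSP. Base your ' +
                 'answers on this knowledge base content where relevant:\n\n' +
                 articles.join('\n---\n')
    };
}
```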

This is not as ridiculous as it sounds. The latest models from OpenAI allow for roughly 30,000 words, about the length of George Orwell’s Animal Farm. Some models allow for even more information: Claude, by Anthropic, accepts more than double that. However, including lots of information in each prompt increases both the cost and the time it takes to process.

Using Function Calling to send relevant data

However, OpenAI has developed an amazing solution: function calling. In short, function calling tells the model that it has access to some data sources, and that it just needs to ask to use them. We created a few very simple functions in our script: one looks up a particular case given a case number, and another searches for knowledge base articles. On each call we tell the model about these functions and, if it needs to use one, it sends a message back asking for it to be run, providing appropriate parameters.
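
For illustration, the definitions advertised to the model look roughly like this; the format follows OpenAI’s function-calling schema, while the names and descriptions are simplified versions of ours:

```javascript
// Sketch of the function definitions advertised to the model on each
// call, following OpenAI's function-calling schema. The names and
// descriptions are simplified versions of ours.
var functions = [
    {
        name: 'lookup_case',
        description: 'Look up a customer service case by its case number',
        parameters: {
            type: 'object',
            properties: {
                case_number: { type: 'string', description: 'e.g. CS0001234' }
            },
            required: ['case_number']
        }
    },
    {
        name: 'search_kb',
        description: 'Search the knowledge base for relevant articles',
        parameters: {
            type: 'object',
            properties: {
                query: { type: 'string', description: 'Search terms' }
            },
            required: ['query']
        }
    }
];
// These go out alongside the conversation:
// { model: ..., messages: history, functions: functions }
```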

If this happens, the script provides the data and the model produces its final response. Of course, this back-and-forth between the ServiceNow instance and OpenAI does increase the response time, but it allows for some incredibly useful, natural conversations. Someone who asks our chatbot about internet connectivity issues will be given information sourced from our knowledge base, meaning it is more likely to be useful and accurate.
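
Putting it together, the round trip looks something like this sketch, where runLocalFunction is a hypothetical helper wrapping GlideRecord lookups against the case and knowledge tables:

```javascript
// Sketch of the round trip. If the model asks for a function, run it
// locally, append the result, and ask the model again for its final
// answer. runLocalFunction is a hypothetical helper that wraps
// GlideRecord lookups against the case and knowledge tables.
var reply = callOpenAI(history); // now also sending the functions array

while (reply.function_call) {
    var name = reply.function_call.name;
    var args = JSON.parse(reply.function_call.arguments);

    history.push(reply); // keep the model's request in the transcript
    history.push({
        role: 'function',
        name: name,
        content: JSON.stringify(runLocalFunction(name, args))
    });
    reply = callOpenAI(history); // the model now answers with data in hand
}
// reply.content is the final, user-facing answer.
```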

See it for yourself

Do you want to try our chatbot out? Get in touch, and we’ll give you a demo. In the meantime, you can take a look at a video here:

In my next article, I want to talk about how we are developing an AI co-pilot. Can we help agents be more efficient by giving them information and tools automatically, whilst they are helping with a case?

The POPX MSP Platform is powered by ServiceNow
