GPT & Generative AI Q&A with Observe.AI’s CEO and Chief Scientist

Observe.AI CEO Swapnil Jain and Chief Scientist Jithendra Vepa Answer Your Generative AI Questions

By now, Generative AI and GPT likely need no introduction. We’ve written about it extensively, as has just about the rest of the world. 

However, sometimes it can seem like there are more questions than answers.

In fact, in April, we invited contact center leaders from our customers to our headquarters for a day-long event to answer questions and discuss Generative AI. Get a full recap of GPT day here.

One thing is clear: Generative AI for the contact center is not going away, and contact center leaders are eager to embrace it.

So, a few weeks later we held an interactive Q&A session with our CEO Swapnil Jain and Chief Scientist Jithendra Vepa.

We’ve rounded up the top 8 questions they answered:

Q: What is Generative AI and why does it matter for contact centers?

Jithendra Vepa, Chief Scientist at Observe.AI: “Generative AI is a set of algorithms that fall under the broad category of machine learning. In the simplest terms, Generative AI is capable of generating new, realistic content, such as text, audio, images or videos. 

This technology is reshaping the way we think about AI like nothing else I’ve seen in my 24 years of working in this field.”

Swapnil Jain, CEO of Observe.AI: “This is an iPhone moment. Generative AI is here to stay and it’s going to change the way we do everything. 

Now you can generate content. You can generate your customer service response or an entire website or code for engineers. I just used ChatGPT to generate an itinerary for an upcoming vacation, and it gave me superb answers.

I want to empower our customers and enterprises to get the latest and best of Generative AI in the contact center.”

Q: Can you explain the technology Generative AI is built on?

Vepa: “When you start typing into Gmail or Google and it starts suggesting the next set of words, these are all powered by language models. They do the task of predicting what word comes next. 

They’ve existed for a long time, but now we’re seeing these language models becoming huge. We call them large language models (LLMs).” 
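To make the “predict the next word” idea concrete, here is a minimal sketch that asks a small public model (GPT-2) to continue a sentence via the Hugging Face transformers library. It is purely illustrative and is not the model Observe.AI runs in production.

```python
# A minimal next-word-prediction sketch using a small public model (GPT-2).
# Purely illustrative of the "predict the next word" idea; not Observe.AI's model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Thank you for calling, how can I"
# Ask the model to continue the prompt with a handful of additional tokens.
result = generator(prompt, max_new_tokens=5, num_return_sequences=1)
print(result[0]["generated_text"])
```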

Jain: “I see large language models like a genius kid. They might learn from school or from Wikipedia and they can tell you everything that they’ve learned. But now imagine you teach this kid everything that’s on the Internet. Every Wikipedia page, every article or blog, every email. They will eventually know so much they can always predict the next word or provide an answer to any question.” 

Q: What should contact center leaders know about the existing LLMs out there?

Jain: “The existing language models are trained on entire data sets, the entire Internet corpus. They’re really good at general purpose use cases. Very good at language capabilities. They’re fluent and coherent in nature. 

The problem is you don’t have control over general purpose language models. You can’t give it feedback or control the outcome. This makes generic LLMs not a good solution for enterprises. You need domain-specific models.”

Vepa: “Hallucinations are the main limitation at this point. If these generic LLMs generate incorrect facts, they will really impact the trust of users. For enterprise use cases, trust and accuracy are obviously really important. You need to be able to customize parts of the model to meet customer requirements. This is the current challenge with black box models.”

Q: What is the benefit of domain-specific models for contact centers?

Vepa: “Domain-specific LLMs are trained by feeding the model domain-specific data. For contact centers, for example, this would be contact center data or conversations. Bloomberg just announced their GPT model BloombergGPT, trained specifically on financial data.”
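As a rough picture of what “feeding the model domain-specific data” can look like, the sketch below fine-tunes a small public language model on a text file of contact center transcripts. The model choice, file name, and hyperparameters are illustrative assumptions, not a description of Observe.AI’s pipeline or of BloombergGPT’s training setup.

```python
# Hypothetical sketch of "feeding the model domain-specific data": fine-tune a
# small public language model on a text file of contact center transcripts.
# Model choice, file name, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "transcripts.txt" stands in for de-identified contact center conversations.
dataset = load_dataset("text", data_files={"train": "transcripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False means standard next-word (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```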

Jain: “Think about a Tesla. If you’re only training Tesla’s autopilot system in the suburbs, but then throw it into the middle of New York City, it’s not going to perform the same.

When we talk about domain-specific training, we limit the domain to a particular area to make it more effective.

With a generic application, you get a response and you live with it. It’s a very black box response. If you cannot give feedback to something and you cannot control it, how do you trust it? If you deploy something like this in your enterprise, imagine the damage it can cause. You need a system that you can improve upon, a system that you can give feedback to, and once you have that, you can trust it. And that’s why domain-specific, smaller LLMs are the solution.”

Q: How do you think about Generative AI contact center use cases?

Jain: “We see use cases segmented into three buckets: Pre-Interaction, During Interaction, and Post-Interaction.

  • Pre-Interaction is before a customer even interacts with a human. This could be chat bots or voice bots.
  • During Interaction is the next generation of agent assist, a true co-pilot that is less rule-based.
  • Post-Interaction would be the ability to take all of these conversations and ask the system a question like “What is the main call driver for calls in the last 10 days?” or “Can you tell me why my CSAT is going up?”

This is so relevant for contact centers, more than any other part of an enterprise. Contact centers are all about conversations, which is exactly what Generative AI is capable of analyzing and impacting. Contact centers are going to be the first place to see the biggest impact from GPT in the weeks to come.”
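To make the Post-Interaction bucket more tangible, here is a hypothetical sketch of asking a general-purpose LLM a question like the one above over a handful of transcripts. The API client, model choice, prompt, and in-memory transcript list are assumptions for illustration only, not how Observe.AI implements this.

```python
# Hypothetical sketch of a post-interaction question asked over transcripts,
# using OpenAI's chat API (openai>=1.0). The model, prompt, and the tiny
# in-memory transcript list are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcripts = [
    "Customer called about a duplicate charge on their latest invoice...",
    "Customer asked how to reset the password on their account...",
    "Customer reported the mobile app crashing at login...",
]

question = "What is the main call driver for calls in the last 10 days?"

prompt = (
    "You are analyzing contact center transcripts.\n\n"
    + "\n\n".join(transcripts)
    + f"\n\nQuestion: {question}\nAnswer with the top call drivers."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```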

Q: What is Observe.AI doing to leverage Generative AI for contact centers? 

Jain: “We have a launch planned mid-June where we will be announcing some of these things in all three of these buckets: Pre-interaction, during interaction, and post interaction. It will really challenge our thinking because we fundamentally believe all parts of the conversation are going to change.

In everything we do, we overemphasize this concept of calibration that we’ve talked about previously. 

When we launch a new product or feature, we have a full framework that allows the customer to calibrate the machine and make sure the outputs are in line with what they expect. You can give feedback to the machine and the machine learns and improves, which builds trust. We don’t let AI just go out there and start doing things on its own.”

Q: How should contact center leaders be communicating about Generative AI?

Jain: “Generative AI is the talk of every boardroom. Every CEO is expecting their CIO or CTO to come up with their own AI strategy, LLM strategy, and GPT strategy because of its relevance to the contact center. Every VP of operations, contact centers, or customer service is expected to put together this strategy.

My recommendation is to embrace this. This is the new world. This is here to stay.

The beauty of this is that it’s not a binary shift. You can try it out and get your hands dirty with this technology and you can always reverse it if it doesn’t work out. 

You want to be the company whose legacy is that you accepted and adopted it. Either you will do it and go to your CEO and say you are doing it, or your CEO will come to you and ask you for it.”

Q: How can contact center leaders get started with Generative AI?

Jain: “Pick a simple use case, like note summarization, NLP-driven insights, or a basic knowledge base use case. All of these solutions are available within Observe.AI today. Reach out to us, and we’ll demo some of those capabilities with you. Pick one, don’t pick more than one, even though we would love for you to use all of our product capabilities. Start small, prove it, get your hands dirty, and expand it.”
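To give a feel for the note-summarization starting point, here is a minimal sketch that runs an off-the-shelf summarization model over an invented transcript. It stands in for the use case only; it is not Observe.AI’s product.

```python
# Minimal sketch of the note-summarization use case with an off-the-shelf
# summarization model. The transcript is invented; this illustrates the idea,
# not Observe.AI's product.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = (
    "Agent: Thanks for calling, how can I help? "
    "Customer: My last invoice charged me twice for the same plan. "
    "Agent: I can see the duplicate charge. I have issued a refund that "
    "should post within five business days. "
    "Customer: Great, thank you."
)

# Produce a short agent-style call note from the transcript.
summary = summarizer(transcript, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```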

Interested in learning more about Generative AI for Contact Centers?

We have resources and answers for you.

Michael Lowe
Head of Content
May 3, 2023