Mission: Generate

Chatting with Bots

In today's episode we're going to talk about chatbots, a use case for Gen AI that everyone thinks they already know but few truly understand. Chatting with a large language model is how the world was first introduced to the potential of this technology, and it's still the preferred interface for many solutions. But we're going to examine the potential for this style of interaction and show why there are opportunities for a wide range of businesses, but also many potential risks.


Show Notes:

AWS’s blog on new Lex features for building chatbots: https://aws.amazon.com/blogs/machine-learning/elevate-your-self-service-assistants-with-new-generative-ai-features-in-amazon-lex/ 

Amazon Q, a generative AI assistant: https://aws.amazon.com/q/ 

Andrej Karpathy on how hallucinations are a feature, not a bug, and not going away: https://twitter.com/karpathy/status/1733299213503787018 

 

Official Transcript:

Ryan Ries:

Welcome to Mission Generate, the podcast where we explore the power of AWS and the cutting-edge techniques we use with real customers to build generative AI solutions for them. Join us each episode as we dig into the technical details, cut through the hype, and uncover the business possibilities.

I'm Ryan Ries, our chief data science strategist and your host.

In today's episode we're going to talk about chatbots, a use case for Gen AI that everyone thinks they already know but few truly understand. Chatting with a large language model is how the world was first introduced to the potential of this technology, and it's still the preferred interface for many solutions. But we're going to examine the potential for this style of interaction and show why there are opportunities for a wide range of businesses, but also many potential risks.

Let's dive in.

As we enter 2024, we start as always with a disclaimer that this is not the actual doctor Ryan Ries. For one thing, I never call myself doctor, okay? My ego is not that big. Yes, hard as that may seem to believe, this is a synthetic voice generated via natural language processing, reading from a script written by our senior product marketing manager and my co-host, Casey Samulski. If you're just tuning in, welcome and enjoy.

Okay, back to the show.

Let's start with the basics. Chatbots are really just a textual interface for interacting with a large language model. You enter text into a UI, the model predicts the response you're looking for, and it replies in kind. In a sense, you are now conversing with your tools.
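To make that concrete, here's a minimal sketch of the loop behind any chatbot UI. The model call is stubbed out with a canned reply; in a real system it would be a request to an LLM endpoint (for example, Amazon Bedrock), passing the full conversation history so the model has context.

```python
def call_model(history):
    """Stand-in for a real LLM call; echoes a canned reply.

    In production this would send `history` to a hosted model and
    return the generated text.
    """
    last_user_message = history[-1]["content"]
    return f"You asked about: {last_user_message}"

def chat_turn(history, user_text):
    """Append the user's message, get the model's reply, and record it."""
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(chat_turn(history, "What can this tool do?"))
```

The key design point is that the "conversation" is just an accumulating list of messages: every turn re-sends the whole history, which is what lets the model appear to remember what you said.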

Let's explore this perspective. Imagine it's your first time working with a new business analytics software your company has just purchased. You know it's powerful. You know that, in theory, it can retrieve the data you're looking for. But the interface is unfamiliar and forbidding.

In the old world, this type of skills gap would be supplemented by things like tutorials, an in-app tour flagging what different UI elements are for and how they work, a helpdesk, and perhaps a live chat. But what if you could converse with the product itself? What if you could ask it to explain itself, how it works, what its capabilities and limitations are, or maybe even suggest to it what workflow you're trying to accomplish and have it go and do that work on your behalf?

The possibilities here are quite broad. And so are the opportunities for disruption.

Let's go through a few current customer examples we're working on. A real estate platform is using chatbots to help customers search and sift through listings. A property maintenance company is reducing service calls by having a chatbot troubleshoot and triage issues. A research firm is offering chatbots to let their customers ask questions of the data collected by their research projects. A cybersecurity company is augmenting their Tier 1 customer support with a chatbot to scrub sensitive customer data from interactions and protect secrets.

So as you can see, there are many industries that can benefit from these types of solutions, and the possible applications are very diverse. What this also means is that a familiar interface or interaction style is no longer the moat it once was, and companies that don't seize this opportunity may soon find themselves outperformed by interfaces leveraging AI interactions.

But that doesn't mean it's all sunshine and rainbows either. What happens when your chatbot answers a customer support query with a hallucination, an answer that sounds correct but is useless, or may even damage some part of their system? What happens when you ask a chatbot a complex business intelligence question and it simply makes up believable-sounding statistics to support its answer? And what if you made important business decisions on the basis of that answer?

Every chat interface has to contend with this problem. The risk of hallucination, in some environments, can be downright catastrophic. And without the right guardrails, you may also be at risk from adversarial users, who through prompt injection or other techniques will try to get your models to say or do something that compromises your business's integrity.
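One common first line of defense is screening prompts before they ever reach the model. The sketch below uses a handful of illustrative regex patterns; these patterns are my own examples, and a production guardrail would rely on trained classifiers or a managed service such as Amazon Bedrock Guardrails rather than pattern matching alone.

```python
import re

# Illustrative-only patterns for known prompt-injection phrasings.
# Real guardrails combine classifiers, allow/deny policies, and
# output filtering; regexes alone are easy to evade.
INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal (your|the) (system )?prompt",
]

def screen_prompt(prompt):
    """Return True if the prompt looks safe to forward to the model."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A screened-out prompt would typically get a polite refusal from the application layer, so the model itself never sees the adversarial text.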

In reality, we can divide chatbot solutions into two camps: internal-facing tooling, which is lower risk, and external-facing products, which are higher risk. This is a useful way to decide, as a business, how well trained and controlled your solution needs to be.

Now, there are ways to mitigate these types of risks.

At Mission, we've come up with an architectural approach that greatly increases the accuracy and flexibility of a model when answering questions like this. We call it Ragnarock, which stands for a combination of retrieval-augmented generation, neoteric agent, and Amazon Bedrock. We'll talk about Ragnarock more in a future episode, but here's why we think it's a great fit for chatbot solutions.

The Ragnarock approach takes common queries you'd expect your chatbot to handle and multiplies them by using generative AI to create a large number of phrasing variants of that question. Think of it like this: one question, asked fifty different ways. We then embed those variants into the model itself as part of the vector database powering the retrieval-augmented generation. So in addition to leveraging whatever data or information you'd like your chatbot to reference during conversations, we also help the model by teaching it how to rephrase a user's prompts to distill their semantics and use that to generate results more accurately and quickly.

This is the kind of optimization you'll have to start thinking about if you want to empower your product with a chatbot. You need to give it the information it needs to make informed answers to a user's question—but you also need to give it the guideposts that help it identify what the prompt is really asking of it. You need it to notice when it lacks the information required to answer. And you may need it to disengage when a user may be trying to get answers they shouldn't have or misuse the bot for their own purposes. And that's what Ragnarock helps with.

That's it for this episode. But as always, let's end with a pitch for my team.

You may be at a place where you've realized your company could benefit from incorporating a chatbot, but you don't know where to start. Or perhaps you've tried building one only to get back lackluster or misleading answers. If that's you, you've still got a great opportunity here.

My team enjoys using the Ragnarock approach to take chatbots and make them viable for all types of businesses and we'll give you an hour of our time, free of charge, just to talk through your ideas and suggest how we might go about designing a solution.

So why not drop us a line? Visit us at Mission Cloud dot com. We'd love to hear from you and find out what you're working on.

Okay, that's it. Thanks for listening. As always, good luck out there and happy building!
