Integrate Twilio ConversationRelay with Twilio Flex for Contextual Escalations
Time to read: 5 minutes
Businesses are increasingly turning to virtual assistants to handle routine inquiries, but there are times when human touch is necessary. This is where the integration of Twilio ConversationRelay with Twilio Flex comes into play. By enabling contextual escalations from a virtual assistant to a live agent, businesses can ensure that complex customer issues are handled efficiently and effectively.
This is a follow-up to the blog post Voice AI: Building Voice Bots with Twilio's ConversationRelay, which explains what ConversationRelay is and walks through a sample implementation with an interactive application. Once you have that application built, this post expands on its functionality.
In this blog post, we'll explore how you can leverage Twilio's powerful tools—ConversationRelay, Functions, and our contact center Flex—to create a seamless transition from automated to human interaction, enhancing the overall customer experience.
Prerequisites
- A working environment running the application from the blog post mentioned above.
- Access to a Twilio Account in order to host Flex and a serverless function.
- Access to Twilio Flex and the ability to deploy plugins.
You can find a working example of the modified ConversationRelay sample application, a Flex plugin and serverless Function on this branch: https://github.com/midshipman/owl-shoes/tree/rbangueses/flex-integration
Building Blocks
Integrating Twilio ConversationRelay with Flex is relatively quick and straightforward. In this approach, we will configure the LLM so that it executes a function every time it detects that the user wants to escalate the conversation to a human being.
Then, our application will detect that the tool (function) has been used and will instruct ConversationRelay to complete the session. ConversationRelay will then make a callback to a serverless function that will return TwiML, the Twilio Markup Language.
This TwiML will instruct Twilio to enqueue the call to the right TaskRouter Workflow with additional data as task attributes, such as the call summary and sentiment. This results in the call being queued and subsequently routed to the right agent.
In summary:
- Twilio ConversationRelay will make a callback to the action URL with call information once the bot hands the conversation off.
- A Twilio Serverless Function will be used as the target of the action from ConversationRelay. This Function will receive the callback and enqueue the call to Flex with the context as task attributes (for example, summary, sentiment, and reason).
- We will configure the AI so it knows when to hand off. In our example we use an OpenAI GPT model, so we will create a GPT tool (function) to handle escalations. The tool definition also specifies what data to generate (call summary, sentiment, etc.) and pass along on the callback.
- There is logic in our application to end the ConversationRelay session. This will result in ConversationRelay making a callback to the configured action URL.
- A Twilio Flex Plugin will be required to present these new task attributes to the agent, so the agent has the context of the escalation and other potential details.
In the next sections we will provide samples for each of these building blocks.
ConversationRelay sample implementation
In our scenario, we use the ConversationRelay sample application to return TwiML when a call comes into the configured Twilio number. This TwiML will connect the call to ConversationRelay and set the preferred parameters as well as the action parameter.
This means that when ConversationRelay receives a type: end message on the WebSocket, it will make a callback to the URL defined in the action parameter.
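As a rough sketch, the inbound-call route could look like the following. This assumes an Express-style route like the one in the reference app; the route path, domains, greeting, and the serverless action URL are placeholders to replace with your own values.

```js
// Sketch of the inbound-call route in the ConversationRelay sample app.
// Route path, domains, and the action URL are placeholders, not the exact
// values from the reference repository.
app.post('/incoming', (req, res) => {
  const twiml = `
    <Response>
      <Connect action="https://YOUR-SERVERLESS-DOMAIN.twil.io/connect-to-flex">
        <ConversationRelay
          url="wss://${process.env.SERVER}/sockets"
          welcomeGreeting="Hi! How can I help you today?" />
      </Connect>
    </Response>`;
  res.type('text/xml');
  res.send(twiml.trim());
});
```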
Serverless target function
The action parameter needs to point to a function that will receive the callback from ConversationRelay, add the data as task attributes, and enqueue the call to Flex. One way of doing this is by enqueueing to a specific Flex Workflow SID.
The next step is to create a serverless function. Below, we see an example where we will receive 3 parameters from the ConversationRelay callback, namely reason, callSummary, and sentiment.
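Here is a minimal sketch of such a Function. It assumes the hand-off data arrives as a JSON string in the HandoffData parameter of the action callback (ConversationRelay passes along whatever the application sent in its end message), and that the Flex Workflow SID is stored in a FLEX_WORKFLOW_SID environment variable.

```js
// Twilio Serverless Function sketch: enqueue an escalated call to Flex.
// FLEX_WORKFLOW_SID is an assumed environment variable holding the
// TaskRouter Workflow SID used by Flex.
exports.handler = function (context, event, callback) {
  const twiml = new Twilio.twiml.VoiceResponse();

  // ConversationRelay forwards the data sent in the "end" message
  // as a JSON string in the HandoffData parameter.
  let handoffData = {};
  try {
    handoffData = JSON.parse(event.HandoffData || '{}');
  } catch (err) {
    console.error('Could not parse HandoffData:', err);
  }
  const { reason, callSummary, sentiment } = handoffData;

  // Enqueue the call to the Flex Workflow, passing the context as task attributes.
  const enqueue = twiml.enqueue({ workflowSid: context.FLEX_WORKFLOW_SID });
  enqueue.task(JSON.stringify({ reason, callSummary, sentiment }));

  return callback(null, twiml);
};
```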
LLM configuration
In our scenario, we use OpenAI GPT as our Large Language Model. One way of getting it to escalate to a human being is by using a tool to identify the escalation intent. As an example, here's a dummy function that will be executed when the AI understands the intent to escalate to a human:
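The sketch below shows what such a placeholder could look like. The file and function name escalateToAgent are illustrative, and the argument and return shapes should follow the other tool functions in the reference repository.

```js
// functions/escalateToAgent.js (illustrative name): placeholder tool handler.
// The real hand-off is triggered in app.js when this tool call is detected,
// so this function only acknowledges the escalation back to the model.
module.exports = async function escalateToAgent({ callSid, callSummary, sentiment }) {
  console.log(`Escalation requested for ${callSid} (${sentiment}): ${callSummary}`);
  return { status: 'escalation-in-progress' };
};
```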
This function needs to be added to the function manifest, which can be found at https://github.com/midshipman/owl-shoes/tree/main/functions. Note that this is where we instruct the AI on which parameters are required, as well as their definitions.
Here's a sample definition where we instruct the AI to capture the callSid, callSummary, and sentiment based on the descriptions we provide. The LLM will generate the value for each of those fields based on the information it has from the prompt and conversation history.
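As a sketch, the manifest entry could follow the OpenAI tool definition format shown below. The tool name must match the handler above; the descriptions are illustrative, and the exact shape should follow the existing entries in the reference manifest.

```js
// function-manifest.js sketch: OpenAI tool definition for the escalation function.
module.exports = [
  {
    type: 'function',
    function: {
      name: 'escalateToAgent',
      description:
        'Escalate the call to a human agent when the caller asks for one or the assistant cannot resolve the issue.',
      parameters: {
        type: 'object',
        properties: {
          callSid: {
            type: 'string',
            description: 'The Twilio Call SID of the current call.',
          },
          callSummary: {
            type: 'string',
            description: 'A short summary of the conversation so far.',
          },
          sentiment: {
            type: 'string',
            enum: ['positive', 'neutral', 'negative'],
            description: 'The overall sentiment of the caller.',
          },
        },
        required: ['callSid', 'callSummary', 'sentiment'],
      },
    },
  },
  // ...other tools already defined in the manifest
];
```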
Implement HandOff Trigger
At this point, we have ConversationRelay pointing to a function that will enqueue the call to Flex and a function that will understand the intent to escalate. Now, we need to create the trigger that will tell ConversationRelay to make a callback and close the session. In our reference app, we can do this in text-service.js:
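A minimal version of that trigger could look like the following method. It assumes text-service.js keeps a reference to the ConversationRelay WebSocket; sending an end message closes the session, and the handoffData string is what ConversationRelay later passes to the action URL.

```js
// text-service.js sketch: method added to the text service class that wraps
// the ConversationRelay WebSocket (`this.ws` is an assumption about the app).
handOff(handoffData) {
  // Sending an "end" message closes the ConversationRelay session; the
  // handoffData string is passed to the action URL as the HandoffData parameter.
  this.ws.send(
    JSON.stringify({
      type: 'end',
      handoffData: JSON.stringify(handoffData),
    })
  );
}
```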
Implement HandOff logic
Now the only thing missing is the execution of the handOff method in the event of an escalation. We capture the escalation intent as a GPT tool (function), so we can trigger the handOff based on the identified tool. In our reference application, we can edit app.js so that it runs the handOff method previously defined:
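A sketch of that wiring is shown below. The toolCall event name and its payload are assumptions, so adapt them to however your gpt-service surfaces tool calls.

```js
// app.js sketch: trigger the hand-off when the escalation tool is called.
// The 'toolCall' event and its payload are assumptions about the reference app.
gptService.on('toolCall', (toolName, toolArgs) => {
  if (toolName === 'escalateToAgent') {
    // toolArgs were generated by the LLM according to the tool definition.
    textService.handOff({
      reason: 'live-agent-handoff',
      callSummary: toolArgs.callSummary,
      sentiment: toolArgs.sentiment,
    });
  }
});
```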
Flex Plugin
At this stage, any call that gets escalated will create a task in Flex with the call information within the task attributes. To surface this information on the agent desktop, we need to create a plugin that presents these attributes to the agent. You can find a sample of a Flex plugin that implements this here: https://github.com/rbangueses/flex-cr-demo
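As a rough idea of what the plugin does, here is a sketch that adds the escalation context to the task info panel. It assumes Flex UI 2.x, that the task attributes use the same keys as above, and that components added to TaskInfoPanel receive the task prop.

```js
// src/EscalationContextPlugin.js sketch (Flex UI 2.x assumed).
import React from 'react';
import { FlexPlugin } from '@twilio/flex-plugin';

// Displays the escalation context stored in the task attributes.
const EscalationContext = ({ task }) => {
  if (!task) return null;
  const { reason, callSummary, sentiment } = task.attributes;
  return (
    <div style={{ padding: 12 }}>
      <p><strong>Reason:</strong> {reason}</p>
      <p><strong>Summary:</strong> {callSummary}</p>
      <p><strong>Sentiment:</strong> {sentiment}</p>
    </div>
  );
};

export default class EscalationContextPlugin extends FlexPlugin {
  constructor() {
    super('EscalationContextPlugin');
  }

  async init(flex, manager) {
    // Show the escalation context at the top of the task info panel.
    flex.TaskInfoPanel.Content.add(<EscalationContext key="escalation-context" />, {
      sortOrder: -1,
    });
  }
}
```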
Wrapping Up
In summary, this blog post walked us through the process of handing interactions over from ConversationRelay to Twilio Flex. It also showed how to pass the agent the context they need about why an escalation happened.
In the event you do not use Twilio Flex as your contact center platform, you can still follow the same logic for escalation, but the target function will need to be adapted to work with the software you use.
Whether you're enhancing your current systems or building new solutions, Twilio offers the adaptability to meet your unique needs. Consider how integrating Twilio's capabilities can elevate your customer engagement strategy. Reach out to learn more about how Twilio can support your goals.
Ricardo Bangueses is a Principal Solution Engineer at Twilio with a strong background in the Customer Engagement and Contact Center industry. He can be reached at rbangueses [at] twilio.com or on LinkedIn.