How to Build a Wikipedia AI Assistant on WhatsApp with Python, LangChain, OpenAI, and Twilio
Time to read: 11 minutes
LangChain is a powerful framework that allows developers to build applications powered by language models like GPT. By using LangChain with OpenAI, developers can leverage the capabilities of OpenAI’s cutting-edge language models to create intelligent and engaging AI assistants.
One of the key benefits of using LangChain with OpenAI is the ability to create data-aware and agentic applications. This means that the AI assistant can connect with other data sources and interact with its environment effectively. For example, by connecting OpenAI’s language models with Wikipedia, the AI assistant can provide real-time answers to users’ questions based on up-to-date information from Wikipedia.
In this tutorial, you'll learn how to build an AI chatbot that leverages Wikipedia data using the LangChain framework with the OpenAI API wrapper. The bot can engage with customers on WhatsApp.
You'll start by setting up the backend using FastAPI and SQLAlchemy to create a PostgreSQL database to store your customers' data. Then, you'll integrate Twilio's WhatsApp Messaging API, allowing customers to initiate conversations with your WhatsApp chatbot.
With Pyngrok, you'll expose your local FastAPI server to the internet, making it accessible for the Twilio API to communicate with.
Finally, the core of this AI chatbot will be built using LangChain Agents with an OpenAI LLM, one of LangChain's model modules.
Prerequisites
To follow this tutorial, you will need the following prerequisites:
- Python 3.7+ installed on your machine.
- PostgreSQL installed on your machine.
- A Twilio account set up. If you don't have one, you can create a free account here.
- An OpenAI API key.
- A smartphone with WhatsApp installed to test your AI chatbot.
- A basic understanding of FastAPI, a modern, high-performance web framework for building APIs with Python.
- A basic understanding of what an ORM is. If you are not familiar with ORM, we recommend you read this wiki page to get an idea of what it is and how it works.
Setting up your development environment
Before building the chatbot, you need to set up your development environment. Start by creating a project directory and a new virtual environment:
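A minimal set of commands for this step might look like the following (the environment name `venv` is a choice, not a requirement):

```shell
# Create the project directory and move into it
mkdir wikipedia_ai_assistant
cd wikipedia_ai_assistant

# Create and activate a virtual environment, then upgrade pip
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
```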
Here, you create the wikipedia_ai_assistant directory and navigate into it. Then you create a new Python virtual environment using `venv`. Finally, you activate the environment and then upgrade `pip`, the Python package manager.
Next, create a requirements.txt file that includes the following:
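Based on the breakdown that follows, the file contains the package names below (versions are omitted here; pin them if you need reproducible builds):

```
fastapi
uvicorn
langchain
openai
twilio
wikipedia
python-decouple
psycopg2-binary
python-multipart
pyngrok
```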
Here is a breakdown of these dependencies:
- `fastapi`: A package for FastAPI, a modern web framework for building APIs with Python 3.7+ based on standard Python type hints. It's designed to be easy to use, fast, and to provide automatic validation of request and response data.
- `uvicorn`: A package for Uvicorn, a fast ASGI server implementation that uses the websockets library for WebSocket connections and is based on uvloop and httptools.
- `langchain`: An innovative framework designed to develop applications powered by language models like GPT. The LangChain framework includes SQLAlchemy as a dependency, which is a Python SQL toolkit and Object-Relational Mapping (ORM) library. It provides a set of high-level APIs for connecting to relational databases, executing SQL queries, and mapping database tables to Python classes. So you don’t need to include SQLAlchemy in your requirements file, because it’s already installed under the hood.
- `openai`: A Python client for OpenAI, the research company that focuses on developing and advancing artificial intelligence. OpenAI offers various AI models, including the text-davinci-003 model, which is used in this tutorial to power the chatbot.
- `twilio`: A Python helper library for Twilio, the cloud communications platform that allows software developers to programmatically make and receive phone calls, send and receive text messages, and perform other communication functions using its web service APIs.
- `wikipedia`: A Python library for the online encyclopedia that provides free access to a vast collection of knowledge on a wide range of topics. LangChain uses this library to parse Wikipedia data.
- `python-decouple`: A library for separating the settings of your Python application from the source code. It allows you to store your settings in an environment file instead of hardcoding them into your code.
- `psycopg2-binary`: A Python package that provides a PostgreSQL database adapter for Python.
- `python-multipart`: A library for parsing multipart form data in Python, which is commonly used to handle form submissions that contain files such as images or videos. In this tutorial, it handles the form data Twilio sends with each incoming WhatsApp message.
- `pyngrok`: A Python wrapper for ngrok, a tool that exposes a web server running on your local machine to the internet. You'll use it to test your Twilio webhook while you send WhatsApp messages.
Now, you can install these dependencies:
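With the virtual environment activated, that is simply:

```shell
pip install -r requirements.txt
```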
Configuring your database
You can use your own PostgreSQL database or set up a new database with the `createdb` PostgreSQL utility command:
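For example, assuming you call the database wikipedia_ai_assistant (any name works, as long as it matches what you configure in models.py):

```shell
createdb wikipedia_ai_assistant
```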
In this tutorial, you will use SQLAlchemy to access the PostgreSQL database. Put the following into a new models.py file:
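Based on the breakdown that follows, a sketch of models.py might look like this; the host, port, and database name are assumptions you should adapt to your own setup:

```python
# models.py — a sketch of the database layer; host, port, and database
# name are assumptions, adjust them to match your PostgreSQL setup
from decouple import config
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.engine import URL
from sqlalchemy.orm import declarative_base, sessionmaker

# Build the connection URL from credentials stored in the .env file
url = URL.create(
    drivername="postgresql",
    username=config("DB_USER"),
    password=config("DB_PASSWORD"),
    host="localhost",
    database="wikipedia_ai_assistant",
    port=5432,
)

engine = create_engine(url)
SessionLocal = sessionmaker(bind=engine)
Base = declarative_base()


class Conversation(Base):
    """One row per WhatsApp message and its AI-generated response."""

    __tablename__ = "conversations"

    id = Column(Integer, primary_key=True)
    sender = Column(String)    # phone number the message came from
    message = Column(String)   # the user's message text
    response = Column(String)  # the response generated via OpenAI


# Create the conversations table if it does not exist yet
Base.metadata.create_all(engine)
```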
This code sets up a connection to a PostgreSQL database using SQLAlchemy and creates a table named conversations. Here's a breakdown of what each part does:
- `URL.create` builds the URL object that is passed to the `create_engine` function. Here, it specifies the `drivername`, `username`, `password`, `host`, `database`, and `port` of the database.
- The `create_engine` function creates an engine object that manages connections to the database using a URL that contains the connection information for the database.
- `sessionmaker` is a factory for creating Session objects that are used to interact with the database.
- `declarative_base` is a factory function that returns a base class that can be subclassed to define mapped classes for the ORM.
- The `Conversation` class is a mapped class that inherits from `Base` and maps to the `conversations` table. It has four columns: `id`, `sender`, `message`, and `response`. `id` is the primary key column, `sender` is a string column that holds the phone number the message is sent from, `message` is a string column that holds the message text, and `response` is a string column that holds the response message that will come back from OpenAI.
- `Base.metadata.create_all` creates all tables in the database (in this case, the conversations table) if they do not exist.
So the goal of this simple model is to store conversations for your app.
Note: here, you've used `decouple.config` to access the environment variables for your database: `DB_USER` and `DB_PASSWORD`. You should now create a .env file that stores these credentials with their associated values. Something like the following, but replacing the placeholder text with your actual values:
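For example:

```
DB_USER=<your-postgresql-username>
DB_PASSWORD=<your-postgresql-password>
```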
Building your first LangChain Agent
LangChain Agents are a great way to build AI chatbots that can access and process information from the real world using a Large Language Model (LLM). The LLM helps these agents figure out which actions to take and in what order.
Agents can be very powerful when they are used the right way. The goal of this section is to show you how to use agents quickly by using the simplest API with just one tool (a Wikipedia tool).
You need to understand the following concepts before you implement LangChain agents:
- Tool: A function that does a certain job. This can be a Google Search, a lookup in a database, the Python REPL, or another tool. In our case, we will use the Wikipedia tool so that we can search Wikipedia. At the moment, a tool's interface is a function that takes a string as input and returns a string as output.
- LLM: The language model that powers the agent. In our case, we will use the OpenAI LLM with the default model, which is `text-davinci-003`.
- Agent: The agent to use. This should be a string referencing a supported agent class. Because this tutorial focuses on the basic, highest-level API, we will only cover a standard supported agent.
By understanding these ideas and using LangChain Agents properly, you can make an AI Wikipedia chatbot that is both powerful and easy to use.
Create a new file called agent.py and fill it with the following code to better understand LangChain agents:
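A sketch of agent.py, matching the walkthrough below and written against the classic LangChain API (`load_tools` and `initialize_agent`); if you're on a newer LangChain release, the import paths may differ:

```python
# agent.py — a sketch of the Wikipedia agent described below,
# based on the classic LangChain API available when this was written
from decouple import config
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

# temperature=0 makes the model's answers as deterministic as possible
llm = OpenAI(openai_api_key=config("OPENAI_API_KEY"), temperature=0)

# The only tool we give the agent is the Wikipedia tool
tools = load_tools(["wikipedia"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,  # print the intermediate reasoning steps
)

agent.run("What are the current trending topics related to chatbots?")
```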
Before you run this code, make sure to have the OpenAI API key in your .env file:
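Add a line such as:

```
OPENAI_API_KEY=<your-openai-api-key>
```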
You can run the code now using `python agent.py` in the terminal. You’ll see something like the following output in the terminal:
As you can see, the bot has scraped data from Wikipedia pages about different chatbot topics that are currently trending.
Let’s break down what this code is doing.
After importing the necessary libraries and modules, we initialize the agent using the `initialize_agent` function from the `langchain.agents` module. The agent is initialized with the `wikipedia` tool and an `OpenAI` language model (LLM) with a `temperature` of `0`. The agent is of type `ZERO_SHOT_REACT_DESCRIPTION` and is set to be `verbose` so that you can see the steps the bot takes. You can set it to `False` to retrieve just the final result, or omit it entirely, because `False` is the default value.
The last line of the code runs the agent with the input `"What are the current trending topics related to chatbots?"`. This will cause the agent to use its tools and LLM to determine the best action to take in response to this input and return the result. In this case, the agent will use the `wikipedia` tool (because it’s the only tool we passed to the agent) to search for information on current trending topics related to chatbots and return the result.
Now, let’s start building the chatbot and integrate the above into WhatsApp.
Creating your chatbot
Now that you have set up your environment and created the database, it's time to build the chatbot. In this section, you will write the code for a basic chatbot using OpenAI and Twilio.
Configuring your Twilio Sandbox for WhatsApp
To use Twilio's Messaging API to enable the chatbot to communicate with WhatsApp users, you need to configure the Twilio Sandbox for WhatsApp. Here's how to do it:
- Assuming you've already set up a new Twilio account, go to the Twilio Console and choose the Messaging tab on the left panel.
- Under Try it out, click on Send a WhatsApp message. You'll land on the Sandbox tab by default and you'll see a phone number "+14155238886" with a code to join next to it on the left and a QR code on the right.
- To enable the Twilio testing environment, send a WhatsApp message with this code's text to the displayed phone number. You can click on the hyperlink to direct you to the WhatsApp chat if you are using the web version. Otherwise, you can scan the QR code on your phone.
Now, the Twilio sandbox is set up, and it's configured so that you can try out your application after setting up the backend.
Before leaving the Twilio Console, you should take note of your Twilio credentials and edit the .env file as follows:
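For example, with placeholder values (the exact variable names are your choice, as long as utils.py reads the same ones; the number shown is the sandbox number from the previous step):

```
TWILIO_ACCOUNT_SID=<your-twilio-account-sid>
TWILIO_AUTH_TOKEN=<your-twilio-auth-token>
TWILIO_NUMBER=+14155238886
```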
Setting up your Twilio WhatsApp API snippet
Before setting up the FastAPI endpoint to send a POST request to WhatsApp, let's build a utility script first to set up sending a WhatsApp message through the Twilio Messaging API. You’ll also add the LangChain logic.
Create a new file called utils.py and fill it with the following code:
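A sketch of utils.py consistent with the walkthrough below; the environment variable names are assumptions matching the .env entries shown earlier:

```python
# utils.py — a sketch of the utility script described below; the env
# variable names are assumptions matching the .env file shown earlier
import logging

from decouple import config
from twilio.rest import Client
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

# Twilio credentials and the WhatsApp sandbox number from the .env file
account_sid = config("TWILIO_ACCOUNT_SID")
auth_token = config("TWILIO_AUTH_TOKEN")
twilio_number = config("TWILIO_NUMBER")
client = Client(account_sid, auth_token)

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def send_message(to_number: str, body_text: str) -> None:
    """Send a WhatsApp message through the Twilio Messaging API."""
    try:
        message = client.messages.create(
            from_=f"whatsapp:{twilio_number}",
            body=body_text,
            to=f"whatsapp:{to_number}",
        )
        logger.info(f"Message sent to {to_number}: {message.body}")
    except Exception as e:
        logger.error(f"Error sending message to {to_number}: {e}")


def search_wikipedia(query: str) -> str:
    """Run the LangChain Wikipedia agent on the user's query."""
    llm = OpenAI(openai_api_key=config("OPENAI_API_KEY"), temperature=0)
    tools = load_tools(["wikipedia"], llm=llm)
    # Same agent as in agent.py, but without verbose output
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
    return agent.run(query)
```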
In the code above, first, the necessary libraries are imported, which include the `logging` library, the Twilio REST client, the `decouple` library used to read private credentials from the .env file, and the LangChain imports you used earlier in the agent code.
Next, the Twilio Account SID, Auth Token, and phone number are retrieved from the .env file using the `decouple` library. The Account SID and Auth Token are required to authenticate your account with Twilio, while the phone number is the Twilio WhatsApp sandbox number.
Then, a logging configuration is set up for the function to log any info or errors related to sending messages. If you want more advanced logging to use as a boilerplate, check this out.
The meat of this utility script is the `send_message` function, which takes two parameters, `to_number` and `body_text`: the recipient's WhatsApp number and the message body text, respectively.
The function tries to send the message using the `client.messages.create` method, which takes the Twilio phone number as the sender (`from_`), the message body text (`body`), and the recipient's WhatsApp number (`to`). If the message is successfully sent, the function logs an info message with the recipient's number and the message body. If there is an error sending the message, the function logs the error instead.
The `search_wikipedia` function takes a `query` as input and uses LangChain to search Wikipedia for information related to the query. The logic inside is the same as in the agent code, except that it no longer needs to be verbose, and it is generalized so that the user can supply any input. This function is an abstraction: you will use it in your FastAPI application, replacing the `query` input with the user's reply on WhatsApp.
Setting up your FastAPI backend
To set up the FastAPI backend for the chatbot, navigate to the project directory and create a new file called main.py. Inside that file, you will set up a basic FastAPI application that will handle a single incoming request:
To run the app, run the following command:
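Using Uvicorn, which serves the app on port 8000 by default:

```shell
uvicorn main:app --reload
```

The `--reload` flag restarts the server automatically whenever you edit the code, which is convenient during development.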
Open your browser to `http://localhost:8000`. The result you should see is a JSON response of `{"msg": "up & running"}`.
However, since Twilio needs to send messages to your backend, you need to host your app on a public server. An easy way to do that is to use Ngrok.
If you're new to Ngrok, you can consult this blog post and create a new account.
Leave the FastAPI app running on port 8000, and run this ngrok command:
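The command simply forwards your local port:

```shell
ngrok http 8000
```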
The above command sets up a connection between your local server running on port `8000` and a public domain created on the ngrok.io website. Once you have the Ngrok forwarding URL, any requests from a client to that URL will be automatically directed to your FastAPI backend.
If you click on the forwarding URL, Ngrok will redirect you to your FastAPI app's index endpoint. It's recommended to use the `https` prefix when accessing the URL.
Configuring the Twilio webhook
You must set up a Twilio-approved webhook to be able to receive a response when you message the Twilio WhatsApp sandbox number.
To do that, head over to the Twilio Console and choose the Messaging tab on the left panel. Under the Try it out tab, click on Send a WhatsApp message. Next to the Sandbox tab, choose the Sandbox settings tab.
Copy the ngrok.io forwarding URL and append `/message` to it. Paste the result into the box next to WHEN A MESSAGE COMES IN. The complete URL should look like this: `https://d8c1-197-36-101-223.ngrok.io/message`.
The endpoint you will configure in the FastAPI application is `/message`, as noted. The chatbot logic will live on this endpoint.
When done, press the Save button.
Sending your message with OpenAI API
Now, it's time to create the logic for sending the WhatsApp message to the OpenAI API so that you'll get a response from the AI chatbot.
Update the main.py script to the following:
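A sketch of the updated main.py, consistent with the walkthrough below; it reuses the `Conversation` model and `SessionLocal` from models.py and the helpers from utils.py:

```python
# main.py — a sketch of the /message webhook described below
from fastapi import Depends, FastAPI, Form, Request
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import Session

from models import Conversation, SessionLocal
from utils import logger, search_wikipedia, send_message

app = FastAPI()


@app.get("/")
async def index():
    return {"msg": "up & running"}


def get_db():
    # Create a fresh database session per request and close it afterwards
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@app.post("/message")
async def reply(request: Request, Body: str = Form(...), db: Session = Depends(get_db)):
    # Extract the sender's WhatsApp number from the incoming form data
    form_data = await request.form()
    whatsapp_number = form_data["From"].split("whatsapp:")[-1]
    print(f"Sending the LangChain response to this number: {whatsapp_number}")

    # Ask the LangChain Wikipedia agent for an answer to the message
    langchain_response = search_wikipedia(Body)

    # Store the conversation in the database, rolling back on failure
    try:
        conversation = Conversation(
            sender=whatsapp_number,
            message=Body,
            response=langchain_response,
        )
        db.add(conversation)
        db.commit()
    except SQLAlchemyError as e:
        db.rollback()
        logger.error(f"Error storing conversation in database: {e}")

    # Reply to the sender over WhatsApp
    send_message(whatsapp_number, langchain_response)
    return ""
```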
Here, you've set up the `/message` endpoint so that the app listens for incoming POST requests to that endpoint and generates a response using the OpenAI API and the text-davinci-003 model.
The code imports several third-party libraries, including `openai`, FastAPI, `decouple`, and SQLAlchemy. It also imports two modules defined in the same directory: `models.py` and `utils.py`.
A `get_db()` function is defined as a dependency using the `Depends` function from FastAPI. This function creates a new database session using the `SessionLocal` factory from `models.py` and yields it to the calling function. Once the calling function completes, the database session is closed in the `finally` block.
The main function of the code is the `reply()` function, which is decorated with `@app.post('/message')`. This function takes in a message body as a parameter and a database session object obtained from the `get_db()` dependency.
The function extracts the phone number of the sender from the incoming request by accessing the `'From'` key of the form data and splitting it on the `"whatsapp:"` string. It then prints a message indicating that it is sending a response to this phone number.
The function then calls the `search_wikipedia` function with the `Body` argument as input and stores the result in the `langchain_response` variable. This causes the LangChain agent to search Wikipedia for information related to the query contained in the `Body` argument and return the result.
The function then attempts to store the conversation in the database by creating an instance of the `Conversation` class with the sender, message, and response as arguments. It adds this instance to the database session using the `add` method and commits the changes using the `commit` method. If an error occurs while storing the conversation, such as a `SQLAlchemyError` being raised, the changes are rolled back using the `rollback` method and an error message is logged.
Finally, the function calls the `send_message` function with the phone number and the LangChain response as arguments to send the reply to the sender. The function then returns an empty string.
Testing your AI chatbot
Now, you're ready to send a WhatsApp message and wait for a response from your AI Wikipedia assistant. Try asking the AI chatbot anything you can ask when you search on the web to get Wikipedia results.
The example below shows a couple of questions and their responses:
Now, your AI chatbot is functioning well on WhatsApp. Perhaps your next step is to deploy it to production on a VPS instead of running it locally. I hope you enjoyed this tutorial and see you in the next one.
Ezz is a data platform engineer with expertise in building AI-powered chatbots. He has helped clients across a range of industries, including nutrition, to develop customized software solutions. Check out his website for more.