Create an AI Summarizer Bot with Ollama, LangChain, and Twilio
Time to read: 4 minutes
Drowning in information but starving for knowledge? Imagine having your own personal AI summarizer that distills lengthy texts into clear, concise summaries - running directly on your hardware, respecting your privacy, and accessible anywhere through a simple text message.
In this tutorial, I'll walk you through creating a powerful AI summarization tool that leverages Ollama's local language model capabilities, LangChain's flexible orchestration, and Twilio's seamless communication infrastructure to bring this vision to life over SMS.
Prerequisites
To follow along with today's project, you'll need:
- Python 3 installed on your machine
- A free Twilio account with an SMS-capable phone number
- A computer capable of running local LLMs with Ollama
The power of local LLMs
Yes, you can run Meta's Llama, Google's Gemma, Microsoft's Phi, and DeepSeek-R1 for free on your own hardware. We do that with an application called Ollama.
Ollama
Ollama allows you to run open-source large language models locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.
You can install Ollama with a single command on Linux:
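The official install script handles the setup in one step:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```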
Windows and macOS versions are also available on the downloads page.
Once you have it installed and running, you can pull a model by running the following command.
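For example, to pull Meta's Llama 3.2 (swap in any model name from the library):

```shell
ollama pull llama3.2
```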
For a complete list of supported models and model variants, see the Ollama model library.


Running Gemma
To pull a Gemma model by Google, use the following command.
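This pulls the default Gemma 3 variant; append a tag such as `gemma3:1b` if you want a specific size:

```shell
ollama pull gemma3
```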
If you would like to test the model, you can interact with it in two different ways:

In the terminal:
- Run `ollama run gemma3` to start interacting directly via the command line.

Via the API:
- All of your local models are automatically served on `localhost:11434`.
- Send an `application/json` request to Ollama's API endpoint to interact. By default, the answer streams back to the user.
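As a quick sanity check of the API route, you can send a request with `curl` (here with streaming disabled so the reply arrives as a single JSON object):

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```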
Integrate with LangChain
To use our locally running Ollama model, we'll use LangChain's Python library. Let's get started by creating a directory for our project.
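For instance (the directory name here is just a suggestion):

```shell
mkdir ai-summarizer && cd ai-summarizer
```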
Set up our environment
First, let's create a virtual environment and install the necessary packages:
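A typical setup looks like the following; the exact package list is an assumption based on the imports the code uses:

```shell
# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# LangChain core, the community loaders, the Ollama integration,
# Twilio's helper library, and supporting packages
pip install langchain langchain-community langchain-ollama twilio python-dotenv beautifulsoup4
```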
Create our summarizer bot
Now, let's write the code for our summarizer bot. Create a file called `summarize.py`:
Let's break down what our `summarize.py` code is doing:
Setting up connections: First, we load our environment variables and initialize our Twilio client for sending SMS messages. This method can also be used to send messages over WhatsApp or RCS. We also initialize our local Ollama model, specifically the Gemma model we pulled earlier.
Web content loading: We've added a `load_text()` function that uses LangChain's `WebBaseLoader` to fetch and process content directly from URLs. This means our summarizer can now work with articles and blog posts from the web.
Creating the prompt template: We define a prompt template that instructs the AI to act as an expert summarizer. This template includes placeholders for the text to be summarized and guidelines for keeping the summary concise (5-6 sentences).
Building the LangChain chain: We create a LangChain `load_summarize_chain` with the "stuff" chain type, which places the entire document into a single prompt. This works well for content that fits within the model's context window and gives us a reusable pipeline for text summarization.

Creating utility functions:
- `summarize_text()`: takes a text input, passes it through our summarization chain, and returns a cleaned summary.
- `send_summary()`: sends the generated summary to a specified phone number using Twilio's SMS capabilities.
The power of this approach lies in its simplicity and modularity. LangChain allows us to swap out different models, modify our prompts, or add additional processing steps without rewriting the entire application. Meanwhile, running the model locally with Ollama means your data never leaves your machine during the summarization process.
Set up Twilio
To use Twilio for sending summaries, you'll need to:
1. Create a Twilio account.
2. Note your Account SID and Auth Token from the Twilio Console.
3. Create a `.env` file in your project directory:
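The file holds your Twilio credentials and the Twilio number to send from; the variable names below are a common convention, so match whatever names your code reads:

```
TWILIO_ACCOUNT_SID=your_account_sid_here
TWILIO_AUTH_TOKEN=your_auth_token_here
TWILIO_PHONE_NUMBER=your_twilio_phone_number_here
```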
Create a Simple Web Interface
Let's create a simple application using Streamlit to provide a web interface for our summarizer. Create a new file and name it `app.py` (avoid calling it `streamlit.py`, since a file with that name would shadow the `streamlit` package when imported):
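A minimal sketch of the interface follows; the widget labels and layout are assumptions, but it reuses the helpers from our summarizer module:

```python
# app.py -- Streamlit front end for the summarizer
import streamlit as st

# Reuse the helpers from our summarizer module
from summarize import load_text, send_summary, summarize_text

st.title("AI Summarizer Bot")

# Sidebar form: a URL plus an optional phone number (masked for privacy)
with st.sidebar.form("summarize_form"):
    url = st.text_input("Article URL")
    phone = st.text_input("Phone number (optional)", type="password")
    submitted = st.form_submit_button("Summarize")

if submitted and url:
    with st.spinner("Fetching and summarizing..."):
        docs = load_text(url)
        summary = summarize_text(docs)
    st.subheader("Summary")
    st.write(summary)
    if phone:
        send_summary(summary, phone)
        st.success("Summary sent via SMS!")
```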
I like using Streamlit because it provides an interactive, user-friendly interface with minimal Python code. The file above not only creates the interface for our app but also imports the functions from `summarize.py`. Before running the application, make sure the streamlit package is installed. In your terminal, run `pip install streamlit`.
After installing Streamlit, you can start the application from your terminal:
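Assuming the script was saved as `app.py` (use whatever filename you chose, as long as it doesn't shadow the `streamlit` package), launch it with:

```shell
streamlit run app.py
```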


The sidebar contains a form where users can input a URL and (optionally) their phone number (hidden for privacy). When a URL is provided, the app automatically fetches the content, summarizes it, and displays the summary. If a phone number is provided, it also sends the summary via SMS.
You can test it out by entering the URL of an article or blog post you want summarized, along with the number where the summary should be sent.


Take it further
Here are some ways to enhance your summarizer bot:
- Add support for different summarization styles (bullet points, executive summary, etc.)
- Implement a webhook to receive URLs via Twilio and return summaries automatically
- Create a scheduled service that summarizes news from your favorite sources daily
- Fine-tune your local model to improve summarization quality for specific domains
Conclusion
By combining Ollama, LangChain, and Twilio, we've created a powerful, privacy-focused AI summarization tool that runs entirely on your own hardware. With our enhanced version that can process web content directly, you can quickly get summaries of articles, blog posts, and other web content with just a URL.
The best part? This is just the beginning. As open-source models continue to improve, your local summarizer bot will only get better with time. Happy summarizing!

If you are interested in learning more about Ollama, LangChain, or other AI-related technologies, check out my YouTube channel, Rishab in Cloud.