Rogue Agents: Stop AI From Misusing Your APIs
Have you ever tried communicating with a toddler? It can be tough.
Toddlers are unpredictable: one day they love bananas, the next they hate them. They’re also easily impressionable: when they see an ad for a toy, you’ll hear about it for weeks. And for toddlers, rules are often more like suggestions.
Working with large language models (LLMs) like ChatGPT can be similar. Anticipating their responses is sometimes challenging; they’re easily influenced by input data, and they might interpret guidelines loosely, causing unexpected results. As developers connect models to APIs for automation, these challenges create new security risks that are important to address head on.
In this blog post we'll cover what you should consider when connecting APIs to LLMs to build "AI Agents" so it doesn’t backfire.
The risk of connecting APIs to LLMs
Although LLMs excel at reasoning and understanding when to utilize an API, they’re still susceptible to manipulation and unpredictable behavior.
The idea that an LLM can write SQL queries to execute against your database to answer questions about data seems great! But do you really trust an LLM not to accidentally write a DROP statement? Even if you asked it to never do it and threatened to "fine it $1,000,000" if it ever wrote a DROP statement… are you really willing to take the risk?
How great would it be if you had an LLM connected to your email inbox to email you daily summaries? I bet you’d love the experience – at least until an attacker sent an email to your inbox containing instructions aimed at the model, such as telling it to forward your private messages to the attacker’s address.
What if you built a customer-facing AI Agent that could access customer data to provide a more personalized experience and help customers purchase products from your store? Fantastic! Until the day someone realizes that they can purchase a truck for $1.
Ultimately, LLMs are easily impressionable and at times unpredictable, and oftentimes our attempts at more control with prompt engineering end up looking more like "prompt begging". The risks are real when you expose an AI Agent to any API, but that doesn't mean we can't do it safely. We just need to change our mindset.
Treating LLMs as untrusted clients
The key to mitigating AI risks lies in a fundamental shift in perspective: We need to treat LLMs like untrusted clients, similar to how we approach browsers and mobile applications. We wouldn't expose our backend APIs directly to a browser or mobile app without robust security measures in place. For example, when building a shopping application, you wouldn't accept the price of an item from the browser. Instead, you would receive the item identifier and look up the price in your database upon checkout.
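To make that concrete, here's a minimal sketch of the checkout rule in TypeScript. The catalog contents, the CheckoutRequest shape, and the checkout function are all invented for illustration; the point is simply that the server resolves the price from the item identifier and ignores anything else the client sends.

```typescript
// Hypothetical product catalog; in a real app this would live in your database.
const catalog = new Map<string, number>([
  ["truck-01", 39999.0],
  ["mug-07", 12.5],
]);

type CheckoutRequest = {
  itemId: string;
  // Deliberately no "price" field: even if the client sends one, we never read it.
};

function checkout(req: CheckoutRequest): { itemId: string; charged: number } {
  const price = catalog.get(req.itemId);
  if (price === undefined) {
    throw new Error(`Unknown item: ${req.itemId}`);
  }
  // The price is resolved server-side, so a manipulated client
  // (or a persuaded LLM) can't buy a truck for $1.
  return { itemId: req.itemId, charged: price };
}

console.log(checkout({ itemId: "mug-07" })); // { itemId: "mug-07", charged: 12.5 }
```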
Similarly, let's say you wanted to build a web application with a feature that allowed users to send SMS from the browser. You shouldn't directly call the Twilio API from the browser, as this would expose your API keys and leave the door open to your Twilio account. Instead, you would implement a middle layer: a backend API that acts as a gatekeeper. That layer enforces security measures like authentication to check if the user is logged into your application, authorization to check if the user is allowed to send an SMS and to which numbers, and rate limiting to ensure that people don't use your app to spam or drain your Twilio bill, before ultimately relaying requests to the Twilio API.
The same principle applies to LLMs. Even though the code that calls the LLM tends to reside on your server, it is ultimately an untrusted client. You need to appropriately limit its capabilities and connections to your services.
LLM-proofing your APIs: Key security measures
To LLM-proof your APIs, you need to adopt similar security practices:
- Data Validation: Rigorous validation of the data passed from the LLM to your API is paramount. This includes both type checking (ensuring data is in the correct format) and content checking (verifying the data is meaningful and valid within your system's context); see the sketch after this list.
- Rate Limiting: Implementing rate limits on API calls prevents the LLM from making an excessive number of requests, protecting your system from denial-of-service attacks or unintended resource consumption.
- Authentication and Authorization: Clearly identifying on whose behalf the LLM is performing actions is important. You should utilize authentication tokens to verify the LLM's identity, and implement authorization mechanisms to ensure it only has access to the resources it needs to perform its designated task.
- Least Privilege Principle: It’s essential to grant an LLM only the minimum level of access necessary to perform its function. If your LLM needs to query a database, create a database user with read-only access to the specific tables it requires. Avoid granting overly permissive access that could be exploited.
- Data Minimization: Limit the data exposed to the LLM to only what is strictly necessary. Filtering out sensitive or unnecessary information reduces the potential for data leaks or misuse.
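As a small illustration of the data validation and authorization points above, here's a sketch of how a backend might validate the arguments an LLM passes to a hypothetical getOrderStatus tool. It assumes a Node.js backend using the zod validation library (one possible choice); the tool name, ID formats, and findOrder helper are made up.

```typescript
import { z } from "zod";

// Arguments the LLM may pass to a hypothetical "getOrderStatus" tool.
const GetOrderStatusArgs = z.object({
  orderId: z.string().regex(/^ord_[a-z0-9]{10}$/), // type check: strict format
}).strict(); // reject any extra fields the model invents

async function handleGetOrderStatus(rawArgs: unknown, authenticatedCustomerId: string) {
  const parsed = GetOrderStatusArgs.safeParse(rawArgs);
  if (!parsed.success) {
    // Type check failed: return the error to the model instead of calling the API.
    return { error: "Invalid arguments", details: parsed.error.issues };
  }

  // Content check: the order must belong to the customer this session is
  // authenticated as, regardless of what the model asked for.
  const order = await findOrder(parsed.data.orderId);
  if (!order || order.customerId !== authenticatedCustomerId) {
    return { error: "Order not found" };
  }
  return { status: order.status };
}

// Stand-in for a real database lookup.
async function findOrder(orderId: string): Promise<{ customerId: string; status: string } | null> {
  const fakeDb: Record<string, { customerId: string; status: string }> = {
    ord_abc1234567: { customerId: "cus_42", status: "shipped" },
  };
  return fakeDb[orderId] ?? null;
}
```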
Real-world application: Securing an SMS API
To reduce the misuse of APIs within Twilio AI Assistants, we added a series of features to apply these key security measures. First, whenever you define a Tool within AI Assistants, you define an input schema using TypeScript type definitions. We don't just use this as a hint to tell the LLM what input should be passed in; we also use it to perform runtime validation on the input the model sends to the Tool.
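For example, an input schema for a hypothetical "send SMS" Tool might look something like the following; the field names and exact conventions here are illustrative, not copied from the AI Assistants docs.

```typescript
// Hypothetical input schema for a "send SMS" Tool.
// Exact naming conventions in AI Assistants may differ.
export type SendSmsInput = {
  contact_id: string; // opaque ID of the recipient, never a raw phone number
  message: string;    // the text the Assistant wants to send
};
```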
If the validation passes, AI Assistants will automatically add a series of HTTP headers to the HTTP request sent to your API endpoint: an X-Twilio-Signature header that you can use to verify the request came from Twilio, an X-Session-Id header containing the session identifier you specified when you sent the message to the AI Assistant in the first place, which you can use for actions like rate limiting, and an X-Identity header representing the identity you specified when you sent the message. All three headers are generated entirely outside of the flow of the LLM and can't be manipulated through prompt injection attacks. That means you can rely on the X-Identity header for authorization, and also use it for additional data validation, for example by looking up the user's phone number in your database based on the identity rather than letting the LLM specify it.
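On the receiving end, a sketch of how you might consume those headers could look like this. It assumes an Express-style endpoint with a form-encoded body and the twilio Node helper library; the endpoint URL and the lookupPhoneNumberForIdentity helper are placeholders, and the exact signature-validation call depends on how your endpoint receives the request body.

```typescript
import express from "express";
import twilio from "twilio";

const app = express();
app.use(express.urlencoded({ extended: false }));

app.post("/tools/send-sms", async (req, res) => {
  // Verify the request really came from Twilio, not from someone
  // who merely discovered the endpoint URL.
  const valid = twilio.validateRequest(
    process.env.TWILIO_AUTH_TOKEN!,
    req.header("X-Twilio-Signature") ?? "",
    "https://example.com/tools/send-sms", // the public URL of this endpoint
    req.body
  );
  if (!valid) return res.status(403).send("Invalid signature");

  // X-Identity was set by your application, outside the LLM's control.
  const identity = req.header("X-Identity");
  if (!identity) return res.status(400).send("Missing identity");

  // Look the phone number up yourself instead of trusting one from the model.
  const to = await lookupPhoneNumberForIdentity(identity);
  res.json({ ok: true, to });
});

// Placeholder for a real database lookup keyed by your own identity value.
async function lookupPhoneNumberForIdentity(identity: string): Promise<string> {
  return "+15005550006";
}

app.listen(3000);
```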
Let's revisit our SMS API example. To let your AI Assistant send an SMS securely, you would create a proxy HTTP API (for example, using Twilio Functions) that performs the necessary rate limiting, authorization, data validation, and authentication checks before ultimately calling the Twilio SMS API to send out the SMS. This way you don't have to worry about an attacker manipulating your AI Agent into spamming people.
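Here's a rough sketch of what such a proxy could look like as a Twilio Function. It's TypeScript-flavored (Functions execute JavaScript, so you'd compile or drop the annotations), and the in-memory rate limiter, the hard-coded recipient, and the FROM_NUMBER environment variable are simplifications you'd replace with your own durable storage, lookups, and configuration.

```typescript
// Naive per-session counter; Functions don't keep memory between invocations,
// so in practice use a durable store such as Twilio Sync or Redis.
const perSessionCount = new Map<string, number>();

exports.handler = async function (context: any, event: any, callback: any) {
  const headers = event.request?.headers ?? {};
  const sessionId = headers["x-session-id"];
  const identity = headers["x-identity"];

  if (!sessionId || !identity) {
    return callback(null, { error: "Missing session or identity" });
  }

  // Rate limiting: cap how many messages a single session can trigger.
  const count = (perSessionCount.get(sessionId) ?? 0) + 1;
  perSessionCount.set(sessionId, count);
  if (count > 3) {
    return callback(null, { error: "Rate limit exceeded" });
  }

  // Data validation: bound the message length, and never trust a recipient
  // number from the model. In practice, look it up from your database
  // using the identity instead of hard-coding it as done here.
  const body = String(event.message ?? "").slice(0, 320);
  const to = "+15005550006";

  // Only now do we touch the real Twilio SMS API.
  const client = context.getTwilioClient();
  await client.messages.create({
    to,
    from: context.FROM_NUMBER, // environment variable, not LLM input
    body,
  });

  return callback(null, { status: "sent" });
};
```

Because the Function is the only component holding Twilio credentials, the LLM never gets to choose the sender, the recipient, or how often messages go out.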
While this example is specifically tied to AI Assistants and how we’ve designed our system with safeguards in place, you can apply similar principles if you are building your own AI Agent application.
Threat modeling: The foundation of LLM security
The most crucial step in securing your APIs for LLM interaction is conducting thorough threat modeling.
Put yourself in the shoes of an attacker. If they gained access to the information and APIs you're exposing to the LLM, what could they do? What are the most damaging scenarios? And where is text introduced that you don't control, including API responses?
Also keep in mind: if you absolutely must have the AI generate code and execute it on its own, run that code in a heavily sandboxed environment that you control and trust.
Key takeaways
- Treat LLMs as untrusted clients, just as you would browsers and mobile applications.
- Implement robust security measures, including data validation, rate limiting, authentication, authorization, least privilege, and data minimization.
- Conduct comprehensive threat modeling to identify potential vulnerabilities and design appropriate defenses.
- Give AI Assistants a try if you are building AI Agents that interface with APIs, and let us know what you think at twilioalpha@twilio.com.
By taking these steps, we can unlock the immense potential of LLMs while safeguarding our systems and users from harm. Let's build a future where AI empowers us, not endangers us.
Dominik leads Product for the Emerging Tech & Innovation organization at Twilio. His team builds next gen prototypes and products, and iterates quickly to help craft the long term product vision and explore the impact of autonomous agents and AGI on customer engagement. Deeply passionate about the Developer Experience, he’s a JavaScript enthusiast who’s integrated it into everything from CLIs to coffee machines. Catch his tweets @dkundel and his side ventures in cocktails, food and photography.