AI in 2035: Securing the Future of Customer Engagement
Introduction
Artificial Intelligence has a long history, spanning back to the 1940s – with foundations laid even earlier. And today, with the advent of generative AI (GenAI) powered agents, the landscape of AI is transforming more rapidly than ever, promising new advances and equally formidable challenges. By 2035, AI will not merely augment our existing technologies; it will redefine them. And, critically, it will redefine our understanding of – and approach to – security.
Twilio believes in the value of security in creating consumer trust, and we’re excited to work with technology partners like Lakera, a GenAI Security company, to build for a future of trusted, secure AI agents that unlock innovation while protecting users and sensitive data. In this post, we’ll draw on the real-world learnings that Twilio and Lakera benefit from. We’ll explore the current state of AI security, draw lessons from real incidents, assess industry responses, and look into the future—a future that hinges on our ability to protect and trust the AI systems shaping our world.
The state of AI Security today: A paradigm shift
The agentic era is here. AI agents are already helping power customer service, streamlining workflows, and even making critical decisions.
They interact with customers and access sensitive data on behalf of businesses, bringing both productivity gains and new categories of risk that traditional security frameworks weren't designed to address.
Two key aspects make AI security fundamentally different from traditional software.
First, AI systems are non-deterministic—they don’t follow fixed rules, and cannot be proven secure in the same way as programmatic systems. Instead, they operate dynamically, interpreting vast amounts of input in unpredictable ways.
Second, these systems have drastically expanded attack surfaces. Unlike traditional software vulnerabilities that require technical expertise to exploit, anyone capable of crafting a well-worded prompt can potentially manipulate an AI agent. With the democratization of LLMs has come the democratization of hacking. Security must evolve beyond code-level protections to encompass natural language interactions.
Together, these factors demand a new approach to security, one that can adapt as rapidly as AI systems themselves.
Key risks in AI security
Attacks on AI agents often resemble social engineering more than traditional hacking. The OWASP Top 10 for LLMs outlines the most pressing vulnerabilities in generative AI systems today, for example:
- Prompt Injection: Malicious inputs designed to bypass AI system safeguards, tricking models into executing unauthorized actions or revealing sensitive information.
- Sensitive Information Disclosure: AI systems may inadvertently reveal sensitive data, such as private user inputs, credentials, or internal configurations, due to insufficient safeguards.
- Improper Output Handling: Generative AI systems fail to moderate or sanitize outputs effectively, leading to harmful or inappropriate content being generated and shared.
- System Prompt Leakage: Internal configurations, system prompts, or other sensitive instructions are exposed, providing attackers with insights to craft targeted exploits.
- Training Data Poisoning: Attackers inject malicious data into training datasets, embedding biases or weaknesses that can be exploited after deployment.
- Excessive Agency: Systems with overly broad permissions or autonomy may execute unauthorized or unintended actions, creating unpredictable outcomes.
These vulnerabilities highlight the challenges posed by AI’s open-ended functionality, non-deterministic nature, and ability to accept natural language, audio, or visual inputs, which significantly broaden the attack surface.
Real-world failures: Lessons from recent incidents
The risks outlined in the OWASP Top 10 are not hypothetical—they are playing out in high-profile cases across industries. These real-world incidents provide vivid examples of how vulnerabilities in AI systems can lead to serious consequences:
- Prompt Injection Attack via “Imprompter”: In October 2024, researchers demonstrated a new prompt injection technique called “Imprompter,” which covertly manipulated AI chatbots to extract and transmit personal information, such as names, ID numbers, and payment details, to malicious actors. Tested on platforms like LeChat and ChatGLM, the attack revealed critical vulnerabilities in AI systems and underscored the urgent need for robust defenses against such exploits.
- AI Chatbot Blamed for Workplace Training Gaffe: In August 2024, an AI chatbot was used to generate a fictional case study for a psychosocial hazard training course at Bunbury prison. Unbeknownst to the trainer, the chatbot incorporated the name and details of a real former employee and alleged sexual harassment victim. This failure to moderate and sanitize generated content caused significant distress and underscored the risks of using AI in a sensitive context.
- AI-Generated Misinformation During Elections: A report by the Alan Turing Institute revealed that AI-generated content amplified conspiracy theories and harmful narratives during major global elections in 2024. This misuse of AI highlights the need for effective content moderation to prevent the spread of misinformation.
These examples illustrate how AI systems can fail, not only due to malicious intent but also as a result of inadequate safeguards and unpredictable behavior.
Looking ahead
The current state of AI security reveals a landscape full of promise but fraught with complexity. Protecting these systems is no longer about patching vulnerabilities after they’re found—it’s about anticipating new kinds of threats and building systems resilient enough to withstand them.
As we move toward a future of increasingly autonomous and interconnected AI agents, understanding these risks is the first step. In the next section, we’ll explore what AI security might look like in 2035 and the measures we must take to stay ahead.
Predicting the future: AI Security in 2035
The year 2035 is not just a distant marker but a rapidly approaching milestone in the evolution of artificial intelligence.
By then, we posit that AI will have matured into deeply interconnected systems of AI agents, powering everything from autonomous supply chains and predictive healthcare to self-learning financial advisors. However, this unprecedented integration brings equally unprecedented challenges: how do you secure a world dominated by autonomous agents and generative AI applications?
As these systems evolve into an Internet of Agents (IoA), key principles like trustworthiness, observability, and proactive risk management will be essential for navigating the security landscape of 2035. These principles provide the foundation for ensuring that interconnected autonomous agents operate safely, ethically, and effectively within increasingly complex ecosystems.
Recognizing these risks, governments and defense agencies are prioritizing AI security as a matter of national and global significance. Unlike earlier phases of cybersecurity, the stakes are far greater, with implications for critical infrastructure, geopolitical stability, and societal cohesion. Nations are investing heavily in AI security frameworks, while collaborations between the public and private sectors are laying the groundwork for standards that will define this new era of digital resilience.
Additionally, as AI becomes integral to society, the importance of regulations and ethical standards is escalating. Key legislative initiatives, such as the EU AI Act and the Blueprint for an AI Bill of Rights (US), aim to set clear guidelines for the development and deployment of AI technologies. These frameworks prioritize principles like transparency, safety, and accountability, directly influencing how AI systems are governed. By 2035, these regulations and collaborative efforts will likely define the operational boundaries for AI agents, helping ensure they remain aligned with societal values and ethical principles.
The Internet of Agents: A vision and a warning
By 2035, it is likely that AI will operate less like isolated tools and more like a distributed network of autonomous agents.
These agents—self-directed AI entities capable of interacting with humans, other agents, and systems—could underpin industries and society itself. For instance, agents might negotiate complex supply chain agreements, autonomously run customer service operations, or even manage city infrastructure.
However, this vision is fraught with risks:
- Expanded Attack Surfaces: Each agent represents a new entry point for exploitation. A single compromised agent could propagate vulnerabilities across the network, much like malware spreads across connected systems today.
- Agent Manipulation: Prompt injection and adversarial inputs will remain significant threats, allowing attackers to subvert agent behavior. By 2035, attackers may use autonomous attack agents to target these vulnerabilities, creating self-perpetuating feedback loops of exploitation.
- Trust and Accountability: In a world where AI agents act with autonomy, determining liability for failures will become increasingly complex. For instance, if an agent managing municipal power grids misinterprets an input and causes a citywide blackout, how do we allocate responsibility?
These risks demand a new paradigm for AI security, one that integrates adaptive, real-time protections.
Three key pillars for securing AI in 2035
Key pillars will define the future of AI security, with anticipated security threats woven into their foundation:
1. Trustworthiness by Design
AI systems must be auditable, explainable, and transparent, especially as agents become more autonomous. Ensuring trustworthiness will be critical in managing interactions where agents operate independently, make decisions, and affect real-world outcomes.
- Example : Autonomous healthcare agents must provide interpretable reasoning behind treatment recommendations to build trust among patients and providers.
- Anticipated Threat : Manipulated training data could lead to biased or unsafe outputs, undermining the agent’s alignment with its intended objectives.
2. Proactive Risk Management
Continuous monitoring and adaptive threat detection will be essential for addressing risks associated with agent interactions. The interconnected nature of the Internet of Agents (IoA) introduces new vulnerabilities where compromised agents could spread malicious behavior across networks.
- Example : An autonomous financial advisor detecting rogue trading agents before they manipulate stock markets.
- Anticipated Threat : Agents communicating and misaligning objectives in unforeseen ways, leading to cascading failures or unintended consequences.
3. Scalable Observability
The IoA will require systems capable of monitoring billions of interactions in real time, ensuring oversight and alignment. Observability will be pivotal in ensuring that agent oversight mechanisms detect and mitigate misaligned or rogue behaviors.
- Example : Urban traffic agents working together to optimize mobility must be observable to allow for human interception when necessary to prevent manipulation or systemic disruptions.
- Anticipated Threat : Rogue agents leveraging hidden instructions or exploiting systemic blind spots to operate undetected.
Twilio Alpha: Exploring what’s next in customer engagement, and building consumer trust today
The Evolution of AI Assistants
By 2035, AI agents will be a foundational part of customer engagement. Customer-facing use-cases are already rapidly expanding and we expect they will continue to do so. In this paradigm, consumer trust is increasingly pivotal to adoption and success. Twilio is preparing its customers for this future through products like AI Assistants, a platform for building Conversational AI Agents, with security and reliability built in from the outset.
Twilio AI Assistants was developed by Twilio Alpha, a program for sharing Twilio’s research and innovation in AI and emerging technologies. This initiative helps our customers stay ahead of the curve as AI agents become an increasingly large part of customer interactions.
Secure Communications
Security in AI is critical to building user trust, and it is foundational to the ways we innovate at Twilio. The three pillars of trustworthiness by design, proactive risk management, and scalable observability are embedded into our processes to ensure that AI systems and communication technologies are secure, thus safeguarding user interactions.
Trustworthiness by Design
Embedding security into every stage of product development is a crucial aspect of our operations. Through our " security by design" approach, we not only anticipate potential vulnerabilities but actively fortify our systems against them from the onset. A cornerstone of this approach is robust access control, ensuring that sensitive data is only accessible to authorized personnel, thus reducing the risk of data breaches.
Proactive Risk Management
The capabilities of LLMs are immense, yet they come with inherent risks. To mitigate these, we apply risk management strategies similar to those used with untrusted web clients, including prompt injection detection, continuous monitoring, and comprehensive security evaluations. These strategies allow us to leverage LLMs' potential while ensuring data safety and system stability.
Scalable Observability
Our team builds with observability as a baseline. We work with technology partners, tools, and vendors in the industry to ensure that we’re creating the foundations for easy and scalable insights as our Conversational AI platforms grow.
Data Privacy and AI Transparency
In addition to those three core pillars, we see data privacy as a non-negotiable – it is a core principle. We implement rigorously enforced data protection measures to safeguard our users’ information. Transparency in AI operations is equally imperative. Twilio Alpha’s AI Nutrition Facts project provides more transparency into how we build AI into our products. By providing detailed insights into how AI models function, we empower users to understand and responsibly interact with AI technologies.
The future of customer engagement with AI
The integration of AI in the customer journey changes how users engage with businesses – AI Assistants are always available, trained on proprietary data, and able to personalize at scale. But none of that is possible without a foundation of trust with the customer. As customer-facing AI evolves, Twilio’s commitment to security and innovation ensures these systems are not only effective but also align with customer values and privacy expectations. This evolution will redefine AI's role in future customer engagement, as secure interactions become a standard expectation, fostering a new era of AI-driven communication.
Each component of our security strategy integrates into a cohesive framework designed to protect our users and technology. In collaboration with technology partners like Lakera, we're exploring secure solutions that redefine what is possible in the realm of AI-driven communication.
Lakera’s vision and mission: Securing today’s GenAI interactions, while preparing for tomorrow
Lakera’s mission is to safeguard the evolving landscape of AI, ensuring that today’s generative AI applications are secure, trustworthy, and adaptable, while laying the foundation for a secure future shaped by autonomous AI agents.
As enterprises deploy conversational agents at scale, they face unprecedented security challenges that traditional methods were never designed to address. These agents need guardrails to ensure they are not manipulated and perform as intended. Compliance with internal, regulatory, and customer risk management guidelines is also required.
Security teams need visibility into GenAI risks now to be ready for broad agent deployments within their organization. Managing risk for the AI agents workforce requires precise controls that don’t slow down developers or applications.
Lakera Guard: Setting the standard for GenAI security
The Lakera AI Security Platform provides real-time guardrails to ensure agents behave as intended. It enables organizations to protect sensitive data, govern interactions, and validate compliance with leading security standards such as the OWASP Top 10. Leading enterprises and fast-growth SaaS companies use Lakera Guard to secure their GenAI applications.
Drawing from Lakera's extensive threat intelligence network—which analyzes over 100,000 daily attacks through real-world GenAI interactions and the Gandalf platform—Lakera Guard identifies and blocks emerging threats and malicious actors in real-time while maintaining the low latency needed for fluid AI interactions.
Equally important is Lakera's ability to provide visibility into enterprise agent interactions. As the saying goes, you can only protect what you see! Lakera delivers real-time insights into GenAI behavior and threats across applications. It monitors every interaction to ensure AI agents behave as intended, helping organizations optimize the user experience while maintaining appropriate guardrails—a critical balance as AI takes on more customer-facing roles.
Looking ahead to the emerging Internet of Agents, Guard is architected to secure complex multi-agent systems with dynamic oversight of tool use and agent-to-agent interactions, so organizations can be ready for broad agent deployments within their systems.
Gandalf: The world’s largest AI Red Team
Lakera’s AI hacking Gandalf game is the world’s largest virtual AI red team and educational platform. Players attempt to hack an AI Gandalf to get him to reveal a secret password he’s been strictly instructed not to disclose.
As GenAI threats are quickly evolving, Gandalf provides a real-time stream of attack data, contributing to Lakera’s proprietary threat data and industry-leading protection. Initiatives such as Gandalf enable organizations to learn how easily agents can be hacked and stay ahead of an ever-evolving threat landscape.
Conclusion
While these security challenges may seem daunting, they also point the way forward. Twilio's AI Assistants platform and Lakera's security solutions demonstrate how trusted AI customer engagement can be achieved through thoughtful design and robust protection. The key lies in building security and trust into the foundation of AI systems, rather than treating them as afterthoughts.
As we look towards AI in 2035 and its increasingly embedded role in customer engagement, it is important to keep a few key things in mind: the critical importance of trustworthiness and transparency in AI to ensure secure and reliable interactions, and the need for proactive risk management that evolves with AI's dynamic nature. These pillars are essential as AI systems become integral to our daily lives, transforming how we engage with technology.
With agentic customer engagement platforms like Twilio AI Assistants and security solutions like Lakera Guard, businesses can not only prepare for the challenges of today but also lay a robust foundation for the future. Lay your own foundation by getting started with Twilio AI Assistants and Lakera Guard today.
Emily Shenfield (eshenfield [at] twilio.com) brings her background in software engineering, teaching, and theater to her role as a Technical Marketing Engineer at Twilio on the Emerging Tech and Innovation team. She's excited about exploring the future of customer engagement and how it impacts developers. Outside of work, she enjoys yelling the answers at the TV during Jeopardy and eating cookies.
Sam Watts leads Product Management at Lakera, the leading GenAI security platform. He has spent the last decade building software and companies at the intersection of security and compliance with deep tech.
Related Posts
Related Resources
Twilio Docs
From APIs to SDKs to sample apps
API reference documentation, SDKs, helper libraries, quickstarts, and tutorials for your language and platform.
Resource Center
The latest ebooks, industry reports, and webinars
Learn from customer engagement experts to improve your own communication.
Ahoy
Twilio's developer community hub
Best practices, code samples, and inspiration to build communications and digital engagement experiences.