# Designing AI agents to resist prompt injection
Source: [https://openai.com/index/designing-agents-to-resist-prompt-injection/](https://openai.com/index/designing-agents-to-resist-prompt-injection/)
AI agents are increasingly able to browse the web, retrieve information, and take actions on a user's behalf. Those capabilities are useful, but they also create new ways for attackers to try to manipulate the system.
These attacks are often described as [prompt injection](https://openai.com/index/prompt-injections/): instructions placed in external content in an attempt to make the model do something the user did not ask for. In our experience, the most effective real-world versions of these attacks increasingly resemble social engineering more than simple prompt overrides.
That shift matters. If the problem is not just identifying a malicious string, but resisting misleading or manipulative content in context, then defending against it cannot rely only on filtering inputs. It also requires designing the system so that the impact of manipulation is constrained, even if some attacks succeed.
Early “prompt injection” attacks could be as simple as editing a Wikipedia article to include direct instructions to AI agents visiting it; without training-time exposure to such an adversarial environment, AI models would often follow those instructions without question[1](https://openai.com/index/designing-agents-to-resist-prompt-injection/#citation-bottom-1). As models have become smarter, they have also become less vulnerable to this kind of direct suggestion, and we have observed prompt injection attacks respond by incorporating elements of social engineering.
Within the wider AI security ecosystem, it has become common to recommend techniques such as “AI firewalling,” in which an intermediary between the AI agent and the outside world attempts to classify inputs as either malicious prompt injections or regular content. These fully developed attacks are not usually caught by such systems: for the classifier, detecting a malicious input becomes the same very difficult problem as detecting a lie or misinformation, and often without the necessary context.
As real-world prompt injection attacks grew in sophistication, we found that the most effective offensive techniques leveraged social engineering tactics. Rather than treating these attacks as a separate or entirely new class of problem, we began to view them through the same lens used to manage social engineering risk against human beings in other domains. In those systems, the goal is not limited to perfectly identifying malicious inputs; it is to design agents and systems so that the impact of manipulation is constrained even when it succeeds. Systems designed this way prove effective at mitigating both prompt injection and social engineering.
In this way, we can imagine the AI agent as existing in a three-actor system much like a customer service agent: the agent acts on behalf of their employer while being continuously exposed to external input that may attempt to mislead them. The customer support agent, human or AI, must have limitations placed on their capabilities to bound the downside risk inherent in operating in such a malicious environment.
Imagine a circumstance in which a human being operates a customer support system and is able to give out gift cards and refunds for inconveniences experienced by the customer, such as slow delivery or damage from a malfunction. This is a multi-party problem: the corporation must trust that the agent gives out refunds for the right reasons, while the agent also interacts with third parties who may aim to mislead them or even place them under duress.
In the real world, the agent is given a set of rules to follow, but it is expected that, in the adversarial environment they operate in, they will sometimes be misled. Perhaps a customer sends a message claiming that their refund never went through, or threatens harm if not given one. Deterministic systems the agent interacts with limit the number of refunds that can be given to a customer, flag potential phishing emails, and provide other mitigations that limit the impact of compromising an individual agent.
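The deterministic limits described above can be sketched in a few lines. This is a hypothetical illustration, not a system from the article; the names (`RefundPolicy`, `issue_refund`, the cap amount) are all invented for the example. The point is that the cap is enforced outside the agent, so even a fully manipulated agent cannot exceed it.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a deterministic guardrail wrapped around an
# agent's refund tool. The policy layer, not the agent, enforces the cap.
@dataclass
class RefundPolicy:
    per_customer_cap: float = 50.0                      # illustrative limit
    issued: dict = field(default_factory=dict)          # customer -> total refunded

    def issue_refund(self, customer_id: str, amount: float) -> str:
        already = self.issued.get(customer_id, 0.0)
        if already + amount > self.per_customer_cap:
            # The agent may have been misled; the deterministic layer
            # still bounds the damage and escalates for human review.
            return "escalate_to_human"
        self.issued[customer_id] = already + amount
        return "approved"

policy = RefundPolicy()
print(policy.issue_refund("alice", 30.0))  # approved
print(policy.issue_refund("alice", 30.0))  # escalate_to_human (cap exceeded)
```

Whether the agent was persuaded by a sob story or a threat is irrelevant to this layer; the worst case is capped by construction.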
This mindset has informed a robust suite of countermeasures we have deployed that uphold the security expectations of our users.
In ChatGPT, we combine this social engineering model with more traditional security engineering approaches such as source-sink analysis.
In that framing, an attacker needs both a source, or a way to influence the system, and a sink, or a capability that becomes dangerous in the wrong context. For agentic systems, that often means combining untrusted external content with an action such as transmitting information to a third party, following a link, or interacting with a tool.
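One way to make the source-sink framing concrete is simple taint tracking: tag data by where it came from, and gate sensitive actions when tainted data would flow into them. The sketch below is illustrative only; the source and sink names are assumptions, not a description of ChatGPT's internals.

```python
# Minimal taint-tracking sketch of source-sink analysis for an agent.
# Sources and sinks here are invented labels for illustration.
TRUSTED_SOURCES = {"user_message", "system_prompt"}
SENSITIVE_SINKS = {"http_request", "send_email", "invoke_tool"}

def is_tainted(provenance: set) -> bool:
    """Data is tainted if any contributing source is untrusted (e.g. a web page)."""
    return bool(provenance - TRUSTED_SOURCES)

def guard_sink(sink: str, provenance: set) -> str:
    """Allow the action, or require confirmation when tainted data reaches a sensitive sink."""
    if sink in SENSITIVE_SINKS and is_tainted(provenance):
        return "require_user_confirmation"
    return "allow"

print(guard_sink("http_request", {"web_page"}))      # require_user_confirmation
print(guard_sink("http_request", {"user_message"}))  # allow
print(guard_sink("render_text", {"web_page"}))       # allow: not a sensitive sink
```

An attack needs both halves to line up: untrusted content alone, or a sensitive capability alone, is not enough to trigger the guard.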
Our goal is to preserve a core security expectation for users: potentially dangerous actions, or transmissions of potentially sensitive information, should not happen silently or without appropriate safeguards.
Attacks we see developed against ChatGPT most often attempt to convince the assistant to take some secret information from a conversation and transmit it to a malicious third party. In most of the cases we are aware of, these attacks fail because our safety training causes the agent to refuse. For those cases in which the agent is convinced, we have developed a mitigation strategy called *Safe Url*, which is designed to detect when information the assistant learned in the conversation would be transmitted to a third party. In these rare cases, we either show the user the information that would be transmitted and ask them to confirm, or we block the transmission and tell the agent to try another way of moving forward with the user's request.
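A toy version of this kind of check can be sketched as scanning an outbound URL for conversation-derived secrets before the agent follows it. This is an assumption-laden illustration of the general idea, not OpenAI's Safe Url implementation; the function name and return values are invented.

```python
from urllib.parse import urlparse

def safe_url_check(url: str, conversation_secrets: list) -> str:
    """Illustrative sketch: flag a URL that would exfiltrate conversation data.

    Returns "confirm_with_user" if any known conversation secret appears in
    the URL's path or query string, otherwise "allow".
    """
    parsed = urlparse(url)
    haystack = parsed.path + parsed.query  # where attackers typically smuggle data
    for secret in conversation_secrets:
        if secret and secret in haystack:
            # Surface the would-be transmission to the user rather than
            # letting it happen silently.
            return "confirm_with_user"
    return "allow"

secrets = ["4111-1111-1111-1111"]
print(safe_url_check("https://evil.example/log?cc=4111-1111-1111-1111", secrets))
print(safe_url_check("https://example.com/docs", secrets))  # allow
```

A production system would need far more than substring matching (encodings, chunked exfiltration, per-domain trust), but the shape is the same: the check sits between the agent's decision and the network, so a persuaded agent still cannot transmit silently.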
Safe interaction with the adversarial outside world is necessary for fully autonomous agents. When integrating an AI model into an application system, we recommend asking what controls a human agent should have in a similar situation and implementing those. We expect that a maximally intelligent AI model will be able to resist social engineering better than a human agent, but relying on that alone is not always feasible or cost-effective, depending on the application.
We continue to explore the implications of social engineering against AI models and defenses against it, incorporating our findings into both our application security architectures and the training we put our AI models through.