Cybersecurity

The impact of prompt injection in LLM agents – Help Net Security

Prompt injection is, thus far, an unresolved challenge that poses a significant threat to Language Model (LLM) integrity. This risk is particularly alarming when LLMs are turned into agents that interact directly with the external world, utilizing tools to fetch data or execute actions. Malicious actors can leverage prompt injection techniques to generate unintended and potentially harmful outcomes by distorting the reality in which the LLM operates.

LLM-powered agents

This is why safeguarding the integrity of these systems and the agents they power demands meticulous attention to confidentiality levels, sensitivity, and access controls associated with the tools and data accessed by LLMs.

LLMs have gotten widespread attention due to their unprecedented ability to comprehend natural language, generate coherent text, and undertake various complex tasks such as summarization, rephrasing, sentiment analysis, and translation.

What really differentiates LLMs is their capacity to exhibit “emergent abilities”, going beyond pre-programmed responses by drawing insights from the extensive datasets they are trained on, and the nuanced ways they interact with users. They can increasingly approximate some aspects of human reasoning and answer questions accordingly.

Building upon these emergent abilities is the potential development of LLM-powered agents that can actively engage with the external world, as laid out in two groundbreaking research papers – Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (CoT) and ReAct – Synergizing Reasoning and Acting in Language Models.

CoT introduces a technique to enhance the reasoning capabilities of LLMs by prompting them to think in intermediate steps. ReAct goes further by allowing LLMs access to “tools” for interaction with the external world. These frameworks provide a blueprint for crafting powerful agents capable of seamlessly interfacing with diverse external systems for complex tasks. The LangChain open-source project has popularized and made these techniques accessible to the developer community.

However, the road to implementing LLM agents, particularly those interfacing with external tools and systems, is not without challenges. Despite their remarkable capabilities, these agents may grapple with utilizing tools appropriately and adhering to specified policies, which would make – for the time being – their practical adoption in production environments unfeasible.

It’s quite reasonable to assume that these challenges will be overcome, and using LLM-powered agents for production will become much more effective and reliable. This is where a convergence of opportunity and danger will become more evident.

Opportunities and dangers of LLM adoption in production

As organizations move closer to adopting and integrating LLM-powered agents into real-world scenarios, there is the threat of attackers being able to transform an agent into a confused deputy via prompt injection and “jailbreak” techniques.

Prompt injection is a concept analogous to injection attacks in traditional systems, with SQL injection being a notable example. In the case of LLMs, prompt injection occurs when attackers craft inputs to manipulate LLM responses, aligning them with their objectives rather than the intended system or user intent.

The impact of prompt injection varies based on the deployment context. In isolated environments with limited external access, the effects may be negligible. However, even minor prompt injections can lead to significant consequences when integrated into broader systems with tool access.

Imagine the scenario of an LLM agent that acts as an order assistant on an e-commerce website. Users can chat with the agent and ask for information about their orders; moreover, the agent is given tools that allow it to issue a refund for an order under certain conditions. Prompt injection can be used in this scenario to force the agent to misuse its tools, for example, revealing orders belonging to different users or even issuing a refund for non-eligible orders.

A multi-faceted approach to reduce the potential fallout from prompt injections

Addressing prompt injection in LLMs presents a distinct set of challenges compared to traditional vulnerabilities like SQL injections. In these types of scenarios, the structured nature of the language allows for parsing and interpretation into a syntax tree, making it possible to differentiate between the core query (code) and user-provided data, and enabling solutions like parameterized queries to handle user input safely.

In contrast, LLMs operate on natural language, where everything is essentially user input with no parsing into syntax trees or clear separation of instructions from data. This absence of a structured format makes LLMs inherently susceptible to injection, as they cannot easily discern between legitimate prompts and malicious inputs.

Any defensive and mitigation strategies should be created with the assumption that attackers will eventually be able to inject prompts successfully.

Firstly, enforcing stringent privilege controls ensures LLMs can access only the essentials, minimizing potential breach points. We should also incorporate human oversight for critical operations to add a layer of validation to safeguard against unintended LLM actions (typically referred to as human-in-the-loop).

In addition, we should adopt solutions such as OpenAI Chat Markup Language (ChatML) that attempt to segregate genuine user prompts from other content. While not perfect, they lessen the influence of external or manipulated inputs.

By setting clear trust boundaries, we treat the LLMs as untrusted, always maintaining external control in decision-making and being vigilant of potentially untrustworthy LLM responses.

Enforcing stringent trust boundaries is essential when LLMs are given access to tools. It is essential to ensure that the tools accessed by LLMs align with the same or lower confidentiality level and that the users of these systems possess the required access rights to any information the LLM might be able to access.

In practice, this requires restricting and carefully defining the scope of external tools and data sources that an LLM can access. Tools should be designed to minimize trust in LLMs’ input, validate their data rigorously, and limit the degree of freedom they provide to the agent.

As we continue to tread the exciting path of LLM-powered agents, ensuring their security and reliability becomes paramount. The key lies not just in amplifying the abilities of these agents but in defining boundaries and checks that safeguard their operations against manipulation, which is currently inevitable. The future of LLMs is promising, but only if approached with a balance of enthusiasm and caution.