RCE with prompt injection
Remote code execution (RCE) through prompt injection is a significant security concern, particularly in environments where LLMs are integrated with other systems. This risk manifests in two primary ways:
- Client-side rendering of insecure output: One attack vector is client-side rendering of insecure output, which can produce injection flaws such as cross-site scripting (XSS) via attacker-controlled JavaScript. If the output from the LLM is not escaped correctly, malicious code can be executed in the user's browser. This vulnerability often arises when LLM-generated content is rendered directly in a user interface without sufficient sanitization (see the first sketch after this list).
- Integration vulnerabilities: The second major attack surface is the integration of LLMs with downstream services and plugins. If these components evaluate model-generated code without proper validation, they become vulnerable to RCE. For instance, in the case of the LangChain vulnerabilities, the `llm_math` chain used Python's `eval` and `exec`, enabling simple RCE through the Python interpreter (see the second sketch after this list). The chain acted as an intermediary...