Summary
In this chapter, we explored how adversarial inputs can be used in direct and indirect prompt injection attacks to jailbreak models, and how RAG pipelines can be exploited as a vector for indirect prompt injection.
We covered the implications for LLM-integrated systems, as well as the risks of data exfiltration, privilege escalation, and remote code execution (RCE).
Finally, we discussed how to address these risks and safeguard LLM applications using a comprehensive defense-in-depth strategy.
The next chapter will revisit model poisoning and examine how it changes in the context of LLMs.