Summary
This chapter covered in detail poisoning attacks on typical LLM applications in which we have no control over the model. We focused on RAG embeddings and fine-tuning as the two attack vectors for poisoning in LLM applications, regardless of where the model is hosted.
In the next chapter, we will look at poisoning as part of the supply-chain challenges facing LLMs, along with other advanced LLM adversarial attacks.