Advanced Generative AI Scenarios
In the previous chapter, we examined in detail how large language models (LLMs) change the attack vectors for poisoning, driven by the paradigm shift toward externally hosted models accessed via APIs. That landscape is changing, however, as open source and open-access models become increasingly viable options. This chapter explores the supply-chain risks that third-party LLMs bring, especially with regard to model poisoning and tampering. New fine-tuning techniques, including model merges and model adapters, make these advanced scenarios we need to understand.
Similarly, the LLM shift has redefined privacy adversarial attacks such as model inversion, inference, and model extraction, turning them into advanced attack scenarios, too. We will complete our exploration of advanced generative AI (GenAI) scenarios by walking through privacy attacks on LLMs. We will cover the following topics:
- Supply-chain attacks with open-access models
- Privacy...