Supply Chain Attacks and Adversarial AI
In the previous chapter, we examined adversarial AI poisoning attacks, which tamper with training data to compromise a model's output at inference time. We saw how an attacker could mislabel samples, inject perturbations that create backdoors triggerable at inference time, or introduce subtle perturbations without changing labels or being detected.
So far, we have assumed these attacks happen within our own data science environment. In an increasingly interconnected digital landscape, however, they can originate well beyond it.
Supply chain risks are a critical concern for staging poisoning attacks and for adversarial AI in general. While supply chain vulnerabilities in software development have long been recognized, the rise of AI introduces a new dimension of risk, mainly through its reliance on live data and pre-trained models. This chapter explores the complex relationship between these...