By now, you will have acquired a fair understanding of adversarial machine learning and how to attack machine learning models. It's time to dive deeper into the technical details and learn how to bypass machine learning-based intrusion detection systems with Python. You will also learn how to defend against these attacks.
In this demonstration, you are going to learn how to attack a model with a poisoning attack. As discussed previously, we are going to inject malicious data so that we can influence the learning outcome of the model. The following diagram illustrates how the poisoning attack will occur, and a minimal code sketch of the idea follows it:
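Before turning to the attack technique itself, here is a minimal, hypothetical sketch of the poisoning idea: the attacker appends deliberately mislabelled samples to the training set, and the retrained model becomes worse at detecting the class being hidden. The dataset, model, and numbers below are illustrative assumptions, not the setup used later in this chapter.

```python
# A minimal, hypothetical label-flipping poisoning sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an intrusion detection dataset:
# label 1 = attack traffic, label 0 = normal traffic.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Clean accuracy:   ", clean_model.score(X_test, y_test))

# Poisoning step: copy a slice of "attack" samples, flip their labels to
# "normal", and inject them back into the training set.
poison_idx = np.where(y_train == 1)[0][:300]
X_poison = X_train[poison_idx]
y_poison = np.zeros(len(poison_idx), dtype=int)

X_train_poisoned = np.vstack([X_train, X_poison])
y_train_poisoned = np.concatenate([y_train, y_poison])

# Retraining on the poisoned set shifts the decision boundary and degrades
# detection of the attack class.
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train_poisoned, y_train_poisoned)
print("Poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Running this, the poisoned model typically scores noticeably lower than the clean one, which is exactly the effect the attacker is after: the learning outcome has been influenced without touching the model itself.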
In this attack, we are going to use the Jacobian-based Saliency Map Attack (JSMA). JSMA searches for adversarial examples by modifying only a limited number of input features (pixels, in the case of images). In our case, the adversarial samples crafted this way serve as the malicious data points that get injected into the training set.
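To make the mechanics concrete, the following is a rough sketch of running JSMA with the Adversarial Robustness Toolbox (ART); only the `SaliencyMapMethod` and `TensorFlowV2Classifier` names and their parameters come from ART, while the network architecture, the 41-feature input shape, and the random placeholder data are assumptions standing in for a real IDS dataset.

```python
# A minimal JSMA sketch using ART (illustrative assumptions throughout).
import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import SaliencyMapMethod

# Assumed model: a small feed-forward network over pre-scaled feature vectors
# (e.g. 41 NSL-KDD-style features) with two classes: normal / attack.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(41,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Placeholder data; replace with the real training and test sets.
x_train = np.random.rand(1000, 41).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, 1000), 2)
x_test = np.random.rand(10, 41).astype(np.float32)

model.fit(x_train, y_train, epochs=5, batch_size=64, verbose=0)

# Wrap the model so ART can compute class gradients, then run JSMA.
# theta is the per-feature perturbation step; gamma caps the fraction of
# features the attack may modify, which is what keeps the changes sparse.
classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=2,
    input_shape=(41,),
    loss_object=tf.keras.losses.CategoricalCrossentropy(),
    clip_values=(0.0, 1.0),
)
jsma = SaliencyMapMethod(classifier=classifier, theta=0.1, gamma=0.1)
x_adv = jsma.generate(x=x_test)

# Verify sparsity: only a handful of features per sample should change.
changed = np.sum(np.abs(x_adv - x_test) > 1e-6, axis=1)
print("Mean number of modified features per sample:", changed.mean())
```

The key point of the sparsity check at the end is that JSMA perturbs only the most salient features, so the crafted samples stay close to legitimate traffic while still pushing the model toward the wrong decision.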
Let's...