Evaluating and certifying adversarial robustness
In any engineering endeavor, it's necessary to test your systems to see how vulnerable they are to attacks or accidental failures. Security, however, is a domain where you must stress-test your system to determine what level of attack is needed to break it down beyond an acceptable threshold. It's equally useful to know what level of defense is needed to curtail an attack.
Comparing model robustness with attack strength
We now have two classifiers that we can compare against attacks of equal strength, and we can try different attack strengths to see how each classifier fares across the range. We will use FGSM (Fast Gradient Sign Method) because it's fast, but you could use any attack method!
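As a minimal sketch of that sweep, the following assumes the two models are wrapped as Adversarial Robustness Toolbox (ART) estimators under the hypothetical names base_classifier and robust_classifier, with X_test and y_test (integer-encoded labels) already in scope; adjust the names and the epsilon values to your own setup.

```python
import numpy as np
from art.attacks.evasion import FastGradientMethod

# Assumed names: base_classifier and robust_classifier are ART-wrapped
# estimators; X_test / y_test (integer-encoded labels) are the test data.
epsilons = [0.05, 0.1, 0.2, 0.3]  # attack strengths to sweep

def adv_accuracy(classifier, eps):
    """Accuracy of classifier on FGSM examples crafted against it at strength eps."""
    attack = FastGradientMethod(estimator=classifier, eps=eps)
    X_adv = attack.generate(x=X_test)
    y_pred = np.argmax(classifier.predict(X_adv), axis=1)
    return np.mean(y_pred == y_test)

for eps in epsilons:
    print(f"eps={eps:.2f}  "
          f"base={adv_accuracy(base_classifier, eps):.3f}  "
          f"robust={adv_accuracy(robust_classifier, eps):.3f}")
```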
The first attack strength we can assess is no attack at all. In other words, what is the classification accuracy on the test dataset when no attack is applied? We have already stored the predicted labels for both the base (y_test_pred) and robust...
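That baseline check might look like the following sketch; y_test_robust_pred is a hypothetical name for the robust model's stored predictions and should be replaced with whatever variable was saved earlier.

```python
from sklearn.metrics import accuracy_score

# y_test_pred holds the base model's predictions on the clean test set;
# y_test_robust_pred is a hypothetical name for the robust model's predictions.
print(f"No attack: base accuracy   = {accuracy_score(y_test, y_test_pred):.3f}")
print(f"No attack: robust accuracy = {accuracy_score(y_test, y_test_robust_pred):.3f}")
```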