The study of adversarial attacks on neural networks has revealed a surprising sensitivity to adversarial perturbations. Even highly accurate neural networks, when left undefended, have been shown to be vulnerable to single-pixel attacks and to noise that is invisible to the human eye. Fortunately, recent advances in the field offer ways to harden neural networks against adversarial attacks of all sorts. One such solution is a neural network design called Analysis by Synthesis (ABS). The main idea behind the model is that it is Bayesian: rather than directly predicting the label from the input, the model learns class-conditional distributions over the samples using variational autoencoders (VAEs), and then classifies an input according to which class-conditional model explains it best. More information can be found at https://arxiv.org/abs/1805.09190.
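As a rough illustration (not the authors' code), the following PyTorch sketch shows the classification idea: train one small VAE per class, then label a sample with the class whose VAE assigns it the highest evidence lower bound. The actual ABS model optimizes the latent code per class at inference time; here the encoder-based bound stands in for that step, and all layer sizes and the `latent_dim` value are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassVAE(nn.Module):
    """A tiny VAE modeling p(x | class) for 28x28 grayscale images in [0, 1]."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def elbo(vae, x):
    """Evidence lower bound of x under one class-conditional VAE."""
    recon, mu, logvar = vae(x)
    rec = -F.binary_cross_entropy(recon, x.flatten(1), reduction="none").sum(1)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(1)
    return rec - kl

def abs_predict(vaes, x):
    """Predicted label = class whose VAE explains x best (highest lower bound)."""
    scores = torch.stack([elbo(vae, x) for vae in vaes], dim=1)
    return scores.argmax(dim=1)
```

Each `ClassVAE` would be trained only on samples of its own class; at test time, `abs_predict` compares the per-class bounds, which is what makes the decision robust to small perturbations that fool a purely discriminative classifier.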
In this recipe, you will load...