Further reading
- Polyakov, A., 2019, Aug 6, How to Attack Machine Learning (Evasion, Poisoning, Inference, Trojans, Backdoors) [blog post]: https://towardsdatascience.com/how-to-attack-machine-learning-evasion-poisoning-inference-trojans-backdoors-a7cb5832595c
- Carlini, N., & Wagner, D., 2017, Towards Evaluating the Robustness of Neural Networks. 2017 IEEE Symposium on Security and Privacy (SP), 39–57: https://arxiv.org/abs/1608.04644
- Brown, T., Mané, D., Roy, A., Abadi, M., & Gilmer, J., 2017, Adversarial Patch. arXiv preprint: https://arxiv.org/abs/1712.09665
Learn more on Discord
To join the Discord community for this book – where you can share feedback, ask the author questions, and learn about new releases – scan the QR code or visit the link below:
https://packt.link/inml