NeurIPS 2018: Deep learning experts discuss how to build adversarially robust machine learning models

  • 5 min read
  • 04 Dec 2018


The NeurIPS 2018 conference, being held in Montreal, Canada this week from 2nd to 8th December, will feature a series of tutorials, releases, and announcements. The conference, previously known as NIPS, was re-branded (after much debate) because some members of the community found the acronym sexist and offensive towards women.

“Adversarial Robustness: Theory and Practice” is a tutorial that was presented at NeurIPS 2018 yesterday. It was delivered by J. Zico Kolter, a professor at Carnegie Mellon University and Chief Scientist of the Bosch AI Center, and Aleksander Madry from MIT.
In this tutorial, they explored the importance of building adversarially robust machine learning models, as well as the challenges one encounters while deploying them. Adversarial Robustness is also the name of a library dedicated to adversarial machine learning, which allows rapid crafting and analysis of attacks and defense methods for machine learning models.

Aleksander opened the talk by highlighting some of the challenges faced while deploying machine learning in the real world. Even though machine learning has been a success story so far, is ML truly ready for real-world deployment? And can we truly rely on it? These questions arise because developers don’t fully understand how machine learning interacts with other parts of the system, which opens the door to plenty of adversaries. Safety is still very much an issue while deploying ML.

The tutorial tackles questions related to adversarial robustness and gives plenty of examples for developers to understand the concept and deploy ML models that are more adversarially robust.

The standard measure of machine learning performance is the fraction of mistakes made during the testing phase of the algorithm. However, Aleksander explains that in reality, the distributions we apply machine learning to are NOT the ones we train it on, so these assumptions can be misleading. The key implication is that machine learning predictions are accurate most of the time, but they can also turn out to be brittle. For example, the slightest noise can alter an output and produce a wrong prediction, which accounts for the brittleness of ML algorithms. Even simple rotations and translations can fool state-of-the-art vision models.
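
To make this brittleness concrete, here is a minimal sketch (not from the tutorial itself) of the classic fast gradient sign method, which nudges an input by a tiny amount in the direction that increases the loss and often flips the model's prediction. The model and data below are random placeholders, invented purely for illustration.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x perturbed by one signed-gradient step of size epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a randomly initialised linear classifier on fake 28x28 "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_perturb(model, x, y)
print((model(x).argmax(1) != model(x_adv).argmax(1)).sum().item(),
      "of 8 predictions changed under a tiny perturbation")
```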

Brittleness and other issues in Machine Learning


Brittleness hampers the following domains in machine learning:

Security: When a machine learning system has loopholes, a hacker can manipulate it, leading to a system or data breach. An example would be adding external entities to manipulate an object recognition system.
Safety and Reliability: Aleksander gives the example of Tesla's self-driving cars, where the AI sometimes drives the car over a divider and the driver has to take over. In addition, the system does not report this as an error.

ML alignment: Developers need to understand the “failure modes” of machine learning models in order to understand how they work and when they succeed.

Adversarial issues are not limited to inference; the training phase also involves a risk called ‘data poisoning’. The goal of data poisoning is to maintain training accuracy while hampering generalization. Machine learning always needs a huge amount of data to function and train on, and to fulfill this need the system sometimes works on data that cannot be trusted. This occurs mostly in classic machine learning scenarios and less in deep learning.

In deep learning, data poisoning preserves training accuracy but hampers the classification of specific inputs. It can also plant an undetectable backdoor in the system that gives the attacker almost total control over the model. The final issue arises in deployment. During the deployment stage, a user is given only restricted access; for example, mere access to a model's inputs and outputs can still enable black-box attacks.
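
As a rough illustration of the backdoor-style poisoning described above, the sketch below stamps a small trigger patch onto a fraction of training images and relabels them to an attacker-chosen class; a model trained on such data can keep its normal accuracy while misclassifying any input that carries the trigger. The dataset, trigger shape, and poisoning fraction are made up for illustration and are not from the tutorial.

```python
import torch

def poison(images, labels, target_class=0, poison_frac=0.05):
    """Return poisoned copies of (images, labels) with a backdoor trigger."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_frac * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    # Stamp a bright 3x3 trigger into the bottom-right corner of chosen images
    # and flip their labels to the attacker's target class.
    images[idx, :, -3:, -3:] = 1.0
    labels[idx] = target_class
    return images, labels

# Fake training set of 1000 28x28 "images" with 10 classes.
clean_x = torch.rand(1000, 1, 28, 28)
clean_y = torch.randint(0, 10, (1000,))
poisoned_x, poisoned_y = poison(clean_x, clean_y)
```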

Aleksander’s Commandments of Secure/Safe ML


1. Do not train on data you do not trust
2. Do not let anyone use the model or observe its outputs unless you completely trust them
3. Do not fully trust the predictions of your model (because of adversarial examples)

Developers need to rethink the tools they use in machine learning and ask whether they are robust enough to stress test the system.

For Aleksander, we need to treat training as an optimization problem: the aim is to find parameters that minimize the loss on the training samples.
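
Adversarially robust training turns this into a min-max problem: an inner maximization searches for the worst-case perturbation inside a small ball around each input, and the outer minimization updates the parameters against it. The sketch below approximates the inner step with a few projected gradient (PGD) iterations; the model, data, and hyperparameters are placeholders, not the tutorial's own code.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.01, steps=5):
    """Inner maximization: find a perturbation in the epsilon-ball that raises the loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-ascent step on the loss, then project back into the epsilon-ball.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv

# Placeholder model and data: a linear classifier on fake 28x28 "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))

# One robust-training step: inner maximization (attack), then outer minimization (SGD).
x_adv = pgd_attack(model, x, y)
opt.zero_grad()
nn.functional.cross_entropy(model(x_adv), y).backward()
opt.step()
```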

Zico then builds on the principles put forward by Aleksander, showing a number of different adversarial examples and defenses in action. This includes something called convex relaxations, which can be used to bound a model's worst-case behaviour and to train provably robust models on a given training set.
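
As a very coarse illustration of the certification idea behind such relaxations, the sketch below propagates interval (box) bounds, the loosest of these relaxations, through a tiny two-layer network and checks whether the correct class provably stays on top for every input within the epsilon-ball. The network, weights, and epsilon are invented for illustration; the relaxations covered in the tutorial are considerably tighter.

```python
import numpy as np

def interval_linear(lower, upper, W, b):
    """Propagate elementwise bounds [lower, upper] through x -> Wx + b."""
    centre, radius = (upper + lower) / 2, (upper - lower) / 2
    out_centre = W @ centre + b
    out_radius = np.abs(W) @ radius
    return out_centre - out_radius, out_centre + out_radius

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 64)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(10, 32)) * 0.1, np.zeros(10)

x, epsilon, true_class = rng.uniform(size=64), 0.01, 3
low, up = x - epsilon, x + epsilon
low, up = interval_linear(low, up, W1, b1)
low, up = np.maximum(low, 0), np.maximum(up, 0)   # ReLU is monotone, so bounds stay valid
low, up = interval_linear(low, up, W2, b2)

# Certified robust if the true logit's lower bound beats every other logit's upper bound.
others = np.delete(up, true_class)
print("certified robust:", bool(low[true_class] > others.max()))
```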


Takeaways from the Tutorial


After understanding how to implement adversarially robust ML models, developers can now ask themselves how adversarially robust ML differs from standard ML. That said, adversarial robustness comes at a cost: optimization during training is harder, models need to be larger, more training data might be required, and we might have to give up some standard measures of performance. However, adversarial robustness helps machine learning models become more semantically meaningful.

Head over to the NeurIPS Facebook page for the entire tutorial and other sessions happening at the conference this week.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!

Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!

Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019