Mastering Machine Learning for Penetration Testing

Develop an extensive skill set to break self-learning systems using Python

Product type: Paperback
Published: Jun 2018
Publisher: Packt
ISBN-13: 9781788997409
Length: 276 pages
Edition: 1st Edition
Author: Chiheb Chebbi
Table of Contents (13)

Preface
1. Introduction to Machine Learning in Pentesting
2. Phishing Domain Detection
3. Malware Detection with API Calls and PE Headers
4. Malware Detection with Deep Learning
5. Botnet Detection with Machine Learning
6. Machine Learning in Anomaly Detection Systems
7. Detecting Advanced Persistent Threats
8. Evading Intrusion Detection Systems
9. Bypassing Machine Learning Malware Detectors
10. Best Practices for Machine Learning and Feature Engineering
11. Assessments
12. Other Books You May Enjoy

Chapter 8 – Evading Intrusion Detection Systems with Adversarial Machine Learning

  1. Can you briefly explain why overtraining a machine learning model is not a good idea?

Overtraining means fitting a model too closely to its training data, which negatively impacts the model's performance on new data. This is also referred to as overfitting.
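
As a minimal sketch of the effect (assuming scikit-learn is installed; the synthetic dataset and DecisionTreeClassifier are illustrative choices, not an example from the book), an unconstrained tree memorizes the training set almost perfectly yet scores noticeably worse on held-out data:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data with some label noise (flip_y)
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# max_depth=None lets the tree grow until it fits the training set (almost) perfectly
overtrained = DecisionTreeClassifier(max_depth=None, random_state=42)
overtrained.fit(X_train, y_train)

print("Training accuracy:", overtrained.score(X_train, y_train))  # close to 1.0
print("Test accuracy:    ", overtrained.score(X_test, y_test))    # noticeably lower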

  2. What is the difference between overfitting and underfitting?

Overfitting refers to a model that fits the training data too closely and fails to generalize, while underfitting refers to a model that can neither model the training data nor generalize to new data.
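
A short sketch contrasting the two failure modes (again assuming scikit-learn; the sine-curve data and polynomial degrees are illustrative assumptions): a degree-1 polynomial is too simple and underfits, while a degree-15 polynomial typically chases the noise in the training points and overfits:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X_train = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y_train = np.sin(2 * np.pi * X_train).ravel() + rng.normal(0, 0.1, 30)
X_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

# degree=1 cannot capture the curve (underfitting);
# degree=15 typically fits the noisy training points well but generalizes poorly (overfitting)
for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree={degree:2d}  train R^2={model.score(X_train, y_train):.3f}"
          f"  test R^2={model.score(X_test, y_test):.3f}")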

  3. What is the difference between an evasion attack and a poisoning attack?

In an evasion attack, the attacker submits many different samples at inference time to identify the patterns the model has learned and craft inputs that bypass it, while in a poisoning attack, the attacker corrupts the model during its training phase.
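
A minimal, hypothetical sketch of the evasion side (the linear model, feature space, and gradient-style perturbation are assumptions for illustration, not the chapter's exact technique): the attacker keeps nudging a flagged sample in the direction that lowers the classifier's "malicious" score until it is misclassified:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a sample the trained model currently flags as class 1 ("malicious")
idx = np.where(clf.predict(X) == 1)[0][0]
sample = X[idx].copy()

# For a linear model, moving against the weight vector lowers the class-1 score
direction = clf.coef_[0] / np.linalg.norm(clf.coef_[0])
step = 0.1
for i in range(200):
    if clf.predict(sample.reshape(1, -1))[0] == 0:
        print(f"Evaded after {i} steps; perturbation size "
              f"{np.linalg.norm(sample - X[idx]):.2f}")
        break
    sample -= step * direction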

  4. How does adversarial clustering...