Linear support vector machines


Let's consider a dataset of n feature vectors we want to classify:

$X = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n\}$, where $\bar{x}_i \in \mathbb{R}^m$
For simplicity, we assume a binary classification problem (in all other cases, the one-versus-all strategy can be applied automatically), and we set our class labels as -1 and 1:

$y_i \in \{-1, 1\} \quad \forall i \in (1, n)$
Our goal is to find the best separating hyperplane, whose equation is:

$\bar{w}^T \bar{x} + b = 0$
In the following figure, there's a bidimensional representation of such a hyperplane:

[Figure: a separating hyperplane in two dimensions]
In this way, our classifier can be written as:

$\tilde{y} = \operatorname{sign}(\bar{w}^T \bar{x} + b)$
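As a minimal sketch of this decision rule (the weight vector and bias below are hypothetical values chosen for illustration, not taken from the book):

import numpy as np

# Hypothetical, already-trained hyperplane parameters (assumed values)
w = np.array([0.5, -1.2])   # weight vector
b = 0.3                     # bias term

def classify(X):
    # Assign each sample to class -1 or 1 according to sign(w^T x + b);
    # points falling exactly on the hyperplane are mapped to 1 by convention
    scores = X @ w + b
    return np.where(scores >= 0, 1, -1)

samples = np.array([[1.0, 0.2], [-0.5, 1.5]])
print(classify(samples))    # prints [ 1 -1]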
In a realistic scenario, the two classes are normally separated by a margin with two boundaries on which a few elements lie. Those elements are called support vectors. For a more generic mathematical expression, it's preferable to renormalize our dataset so that the support vectors lie on the two hyperplanes with equations:

$\bar{w}^T \bar{x} + b = -1$ and $\bar{w}^T \bar{x} + b = 1$
In the following figure, there's an example with two support vectors. The dashed line is the original separating hyperplane:

[Figure: margin boundaries with two support vectors; the dashed line is the original separating hyperplane]
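To make the notion of support vectors concrete, here's a hedged sketch using scikit-learn (the toy dataset and the large C value, which approximates the hard-margin case described here, are my own choices for illustration):

import numpy as np
from sklearn.svm import SVC

# Toy linearly separable dataset (invented for illustration)
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],
              [4.0, 4.0], [4.5, 5.0], [5.0, 4.5]])
y = np.array([-1, -1, -1, 1, 1, 1])

# A large C approximates the hard-margin linear SVM discussed here
svm = SVC(kernel='linear', C=1e6)
svm.fit(X, y)

print(svm.support_vectors_)        # the elements lying on the margin boundaries
print(svm.coef_, svm.intercept_)   # w and b of the separating hyperplane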
Our goal is to maximize the distance between these two boundary hyperplanes...
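In the standard hard-margin formulation (a sketch of the well-known derivation, stated here for reference), the two boundary conditions combine into a single constraint:

$y_i(\bar{w}^T \bar{x}_i + b) \geq 1 \quad \forall i \in (1, n)$

The distance between the hyperplanes $\bar{w}^T \bar{x} + b = 1$ and $\bar{w}^T \bar{x} + b = -1$ is $2 / \|\bar{w}\|$, so maximizing the margin is equivalent to solving:

$\min_{\bar{w}, b} \; \frac{1}{2}\|\bar{w}\|^2 \quad \text{subject to} \quad y_i(\bar{w}^T \bar{x}_i + b) \geq 1 \;\; \forall i \in (1, n)$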
