Let's consider a dataset of feature vectors we want to classify:
$$X = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n\} \quad \text{where} \quad \bar{x}_i \in \mathbb{R}^m$$
For simplicity, we assume we are working with a bipolar classification problem (in all other cases, the one-versus-all strategy can be applied automatically), and we set our class labels to -1 and 1:
$$y_i \in \{-1, 1\}$$
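As a minimal sketch (the use of scikit-learn's `make_blobs` and all variable names below are illustrative assumptions, not taken from the text), such a dataset with bipolar labels could be generated as follows:

```python
import numpy as np
from sklearn.datasets import make_blobs

# Two Gaussian blobs -> a small, linearly separable toy dataset with 2 features
X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1000)

# make_blobs returns labels in {0, 1}; remap them to the bipolar convention {-1, 1}
y = 2 * y - 1

print(X.shape)         # (200, 2)
print(np.unique(y))    # [-1  1]
```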
Our goal is to find the best separating hyperplane, for which the equation is as follows:
$$\bar{w}^T \bar{x} + b = 0$$
The following graph shows a two-dimensional representation of such a hyperplane:
![](https://static.packt-cdn.com/products/9781789347999/graphics/assets/ed231c40-e404-4ae3-942e-6a15939b4491.png)
In this way, our classifier can be written as follows:
$$\tilde{y} = \operatorname{sign}\left(\bar{w}^T \bar{x} + b\right)$$
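A minimal NumPy sketch of this decision rule is shown below; the weight vector `w` and bias `b` are placeholder values, since the procedure for learning them hasn't been discussed yet:

```python
import numpy as np

def classify(X, w, b):
    """Assign -1 or 1 to each sample according to the side of the hyperplane it falls on."""
    return np.sign(X @ w + b)

# Placeholder parameters (in practice, they are learned during training)
w = np.array([0.5, -1.0])
b = 0.25

X_new = np.array([[1.0, 2.0], [3.0, -1.0]])
print(classify(X_new, w, b))   # [-1.  1.]
```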
In a realistic scenario, the two classes are normally separated by a margin with two boundaries on which a few elements lie. Those elements are called support vectors, and the algorithm's name derives from their peculiar role. To obtain a more general mathematical expression, it's preferable to renormalize our dataset...
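To make the role of support vectors concrete, here is a minimal sketch that fits a linear SVM with scikit-learn's `SVC` on the toy dataset from the earlier snippet and inspects the elements lying on the margin boundaries (the dataset and parameters are illustrative assumptions, not part of the original text):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Rebuild the toy bipolar dataset from the earlier sketch
X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=1000)
y = 2 * y - 1

# Linear SVM: the learned hyperplane is w^T x + b = 0
svc = SVC(kernel='linear')
svc.fit(X, y)

# The few elements lying on the margin boundaries: the support vectors
print(svc.support_vectors_.shape)   # (n_support_vectors, 2)
print(svc.coef_, svc.intercept_)    # learned w and b of the separating hyperplane
```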