Let's consider a dataset of feature vectors we want to classify:

$$X = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n\}, \quad \bar{x}_i \in \mathbb{R}^m$$

For simplicity, we assume we are working with a bipolar classification (in all other cases, it's possible to apply the one-versus-all strategy automatically) and we set our class labels as -1 and 1:

$$y_i \in \{-1, 1\} \quad \forall i \in \{1, \ldots, n\}$$

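As a minimal sketch of such a setup (assuming scikit-learn is available; the dataset parameters and the names X and Y below are illustrative, not taken from the text), a toy bipolar dataset can be built like this:

```python
import numpy as np
from sklearn.datasets import make_classification

# Build a toy two-class dataset; all parameters here are illustrative
X, Y = make_classification(n_samples=100, n_features=2,
                           n_informative=2, n_redundant=0,
                           n_clusters_per_class=1, random_state=1)

# make_classification() labels the classes 0 and 1;
# remap them to the bipolar convention {-1, 1}
Y = np.where(Y == 0, -1, 1)
```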
Our goal is to find the best separating hyperplane, whose equation is as follows:

$$\bar{w}^T \bar{x} + b = 0$$

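For example, with the purely illustrative choice $\bar{w} = (1, 1)^T$ and $b = -2$, the hyperplane in two dimensions reduces to a straight line:

$$x_1 + x_2 - 2 = 0$$

A point such as $(3, 1)$ yields $\bar{w}^T \bar{x} + b = 2 > 0$, while $(0, 0)$ yields $-2 < 0$, so the two points lie on opposite sides of the line.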
The following graph shows a two-dimensional representation of such a hyperplane:

[Figure: two-dimensional plot of the separating hyperplane (a line), with the samples of the two classes lying on opposite sides]

In this way, our classifier can be written as follows:

$$\tilde{y} = f(\bar{x}) = \operatorname{sign}(\bar{w}^T \bar{x} + b)$$

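As a sketch of this decision rule (assuming the weight vector and bias have already been determined; the values below reuse the illustrative 2D example given earlier):

```python
import numpy as np

def classify(X, w, b):
    """Assign each row of X to class -1 or 1, according to the side
    of the hyperplane w^T x + b = 0 on which it falls (a point lying
    exactly on the hyperplane would return 0)."""
    return np.sign(X.dot(w) + b)

# Illustrative values only: the 2D hyperplane x1 + x2 - 2 = 0
w = np.array([1.0, 1.0])
b = -2.0
X_test = np.array([[3.0, 1.0], [0.0, 0.0]])
print(classify(X_test, w, b))  # [ 1. -1.]
```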
In a realistic scenario, the two classes are normally separated by a margin with two boundaries on which a few elements lie. Those elements are called support vectors, and the algorithm's name derives from their peculiar role. For a more general mathematical expression, it's preferable to renormalize our dataset...
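As an illustration of this idea (a sketch assuming scikit-learn and the toy dataset X, Y built earlier), a trained linear SVM exposes its support vectors directly:

```python
from sklearn.svm import SVC

# Fit a linear SVM on the toy dataset (X, Y) defined above
svc = SVC(kernel='linear')
svc.fit(X, Y)

# The support vectors: the few elements lying on the margin boundaries
print(svc.support_vectors_)
print(svc.support_vectors_.shape)  # usually far fewer rows than X
```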