Understanding linear SVMs

Learning optimal decision boundaries

To understand how SVMs work, we have to think about decision boundaries. When we used linear classifiers or decision trees in earlier chapters, our goal was always to minimize the classification error, which we assessed with a metric such as classification accuracy (or, in the regression setting, the mean squared error). An SVM tries to achieve low classification errors too, but it does so only implicitly. An SVM's explicit objective is to maximize the margin between the data points of one class and those of the other class.
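To make this objective concrete, here is the standard hard-margin formulation as a point of reference (textbook background, not a derivation taken from this chapter): for training samples $(\mathbf{x}_i, y_i)$ with labels $y_i \in \{-1, +1\}$, a linear SVM solves

$$
\min_{\mathbf{w},\, b} \ \frac{1}{2}\lVert \mathbf{w} \rVert^2
\quad \text{subject to} \quad
y_i\,(\mathbf{w}^\top \mathbf{x}_i + b) \ge 1 \ \text{for all } i,
$$

which makes the geometric margin between the two classes, $2 / \lVert \mathbf{w} \rVert$, as large as possible.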
Let's look at a simple example. Consider some training samples with only two features (x and y values) and a corresponding target label (positive (+) or negative (-)). Since the labels are categorical, we know that this is a classification problem.
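As a minimal sketch of such a setup (this code is illustrative rather than the chapter's own example; the toy dataset, the make_blobs parameters, and the choice of scikit-learn's SVC are assumptions), we can generate two-feature data with binary labels, fit a linear SVM, and read off the learned boundary and margin:

```python
# Illustrative sketch: fit a linear SVM on toy two-feature, two-class data.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters stand in for the (+) and (-) samples.
X, y = make_blobs(n_samples=40, centers=2, n_features=2, random_state=6)

# A linear kernel with a large C approximates the hard-margin case.
clf = SVC(kernel="linear", C=1000)
clf.fit(X, y)

# The decision boundary is w . x + b = 0; the margin width is 2 / ||w||.
w, b = clf.coef_[0], clf.intercept_[0]
margin = 2.0 / np.linalg.norm(w)
print(f"weights: {w}, bias: {b:.3f}, margin width: {margin:.3f}")
print("support vectors:\n", clf.support_vectors_)
```

With a linear kernel, coef_ holds the weight vector w of the decision boundary w . x + b = 0, so 2 / ||w|| gives the margin width, which is exactly the quantity the SVM tries to maximize.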