Summary
The examples in this chapter illustrated some of the advantages of SVR. The algorithm lets us adjust hyperparameters to address underfitting or overfitting, and we can do so without increasing the number of features. SVR is also less sensitive to outliers than methods such as linear regression, because its loss grows only linearly (rather than quadratically) with errors outside the epsilon tube.
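As a minimal sketch of this idea (not the chapter's exact code; the synthetic data and parameter values are illustrative), here is how scikit-learn's SVR hyperparameter C controls the underfitting/overfitting trade-off on the same feature set:

```python
# Illustrative sketch: varying C changes model complexity without adding features.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)

# Small C regularizes heavily (risk of underfitting);
# large C fits the training data more closely (risk of overfitting).
loose = SVR(kernel="rbf", C=0.01, epsilon=0.1).fit(X, y)
tight = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X, y)

print(f"train R^2 with C=0.01: {loose.score(X, y):.3f}")
print(f"train R^2 with C=100 : {tight.score(X, y):.3f}")
```

In practice we would pick C (and epsilon) by cross-validated search rather than by eye, but the contrast shows the knob we are turning.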
When we can build a good model with linear SVR, it is a perfectly reasonable choice: it trains much faster than a nonlinear model. Often, however, a nonlinear SVR improves performance, as we saw in the last section of this chapter.
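The trade-off can be seen in a small, hedged example (synthetic data, not the chapter's dataset): when the underlying relationship is nonlinear, LinearSVR plateaus while an RBF-kernel SVR captures the curvature.

```python
# Illustrative comparison of LinearSVR vs. RBF-kernel SVR on a nonlinear target.
import numpy as np
from sklearn.svm import SVR, LinearSVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))
y = X.ravel() ** 2 + rng.normal(0, 0.2, 200)  # quadratic relationship

linear = LinearSVR(C=1.0, max_iter=10000).fit(X, y)
rbf = SVR(kernel="rbf", C=1.0).fit(X, y)

# A straight line cannot track x^2 over a symmetric range, so LinearSVR's
# R^2 stays near zero, while the RBF model scores much higher.
print(f"LinearSVR R^2: {linear.score(X, y):.3f}")
print(f"RBF SVR   R^2: {rbf.score(X, y):.3f}")
```

On data that really is close to linear, LinearSVR would score comparably while training far faster, which is why it remains the first thing to try.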
This discussion leads naturally to the next chapter, where we will look at two popular non-parametric regression algorithms: k-nearest neighbors and decision tree regression. These two algorithms make almost no assumptions about the distribution of our features and targets. Similar to SVR, they can capture complicated relationships in the data without increasing the feature space.