Support vector regression
As mentioned before, support vector machines can also be used for regression. In regression, we use a hyperplane not to separate points, but to fit them. A learning curve is a way of visualizing the behavior of a learning algorithm: it is a plot of the training and test scores for a range of training set sizes. Creating a learning curve forces us to train the estimator multiple times and is therefore slow. We can compensate for this by running multiple estimator jobs concurrently. Support vector regression is one of the algorithms that may require scaling of the input features. After scaling, we get the following top scores:
Max test score Rain: 0.0161004084576
Max test score Boston: 0.662188537037
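As a minimal sketch of this procedure, assuming scikit-learn is installed, the following combines scaling and support vector regression in a pipeline and computes a learning curve; the synthetic dataset here stands in for the rain and Boston data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import learning_curve
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in data (the original text uses rain and Boston datasets).
X, y = make_regression(n_samples=200, n_features=5, noise=10.0,
                       random_state=0)

# Scaling matters for SVR, so wrap it in a pipeline with StandardScaler.
estimator = make_pipeline(StandardScaler(), SVR())

# learning_curve retrains the estimator once per training set size,
# which is why building a learning curve is slow on aggregate.
train_sizes, train_scores, test_scores = learning_curve(
    estimator, X, y, train_sizes=np.linspace(0.2, 1.0, 5), cv=5)

print("Max test score", test_scores.mean(axis=1).max())
```

Plotting the mean of `train_scores` and `test_scores` against `train_sizes` gives the learning curve itself.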
These results are similar to those obtained with the ElasticNetCV class. Many scikit-learn classes have an n_jobs parameter for running jobs concurrently. As a rule of thumb, we often create as many jobs as there are CPUs in our system. The jobs are created using the standard Python...
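The rule of thumb above can be sketched as follows, assuming scikit-learn; passing n_jobs=-1 asks scikit-learn to use as many jobs as there are CPUs:

```python
import os
from sklearn.datasets import make_regression
from sklearn.model_selection import learning_curve
from sklearn.svm import SVR

# Number of CPUs in this system, the usual default for the job count.
n_cpus = os.cpu_count()
print("CPUs:", n_cpus)

# Synthetic stand-in data for illustration.
X, y = make_regression(n_samples=100, n_features=4, noise=5.0,
                       random_state=0)

# n_jobs=-1 creates one concurrent job per available CPU.
sizes, train_scores, test_scores = learning_curve(
    SVR(), X, y, cv=3, n_jobs=-1)
```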