Regression with ElasticNetCV
Elastic net regularization is a method that reduces the danger of overfitting in the context of regression (see http://en.wikipedia.org/wiki/Elastic_net_regularization). Elastic net regularization linearly combines the penalties of the least absolute shrinkage and selection operator (LASSO) and ridge methods. LASSO penalizes the so-called L1 norm, or Manhattan distance, which measures the sum of the absolute differences between the coordinates of a pair of points. The ridge method instead penalizes the squared L2 norm, or Euclidean distance. For regression problems, the goodness of fit is often determined using the coefficient of determination, also called R squared (see http://en.wikipedia.org/wiki/Coefficient_of_determination). Unfortunately, there are several definitions of R squared. Also, the name is a bit misleading, since negative values are possible. A perfect fit would have a coefficient of determination of 1. Since the definitions allow for a wide range of acceptable values,...
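The combination described above is what scikit-learn's ElasticNetCV class implements: it selects the regularization strength by cross-validation, while the l1_ratio parameter sets the mix of L1 (LASSO) and L2 (ridge) penalties. The following is a minimal sketch; the synthetic dataset and the parameter values chosen are illustrative assumptions, not part of the original text:

```python
# Sketch: elastic net regression with cross-validated regularization,
# assuming scikit-learn is installed. Data is synthetic for illustration.
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import r2_score

# Generate an illustrative regression problem with some noise.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0,
                       random_state=42)

# l1_ratio=0.5 weights the L1 and L2 penalties equally; alpha (the
# overall regularization strength) is chosen by 5-fold cross-validation.
model = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=42)
model.fit(X, y)

pred = model.predict(X)
print("Chosen alpha:", model.alpha_)
print("R squared:", r2_score(y, pred))
```

Note that r2_score here reports the coefficient of determination on the training data; as discussed above, values can range from 1 (perfect fit) down to negative numbers for models worse than predicting the mean.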