Feature scaling
When you are working with features that span a large range of values, the greater the deviation, the harder it is to train a good model on them. The reasons behind this are beyond the scope of this chapter; we'll cover scaling techniques in more depth in the Scaling the data section of Chapter 9, Building a Regression Model with scikit-learn. For now, you should know that you will sometimes come across datasets where someone has already scaled the data for you.
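As a minimal sketch of what scaling looks like in practice, the following uses scikit-learn's StandardScaler on a small made-up feature matrix (the values here are purely illustrative). It also shows a quick heuristic for spotting data that has already been standardized: each column's mean will be close to 0 and its standard deviation close to 1.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Illustrative feature matrix with two columns on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# If the data has already been standardized, each column should have
# a mean near 0 and a standard deviation near 1
print(X_scaled.mean(axis=0))  # approximately [0. 0.]
print(X_scaled.std(axis=0))   # approximately [1. 1.]
```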
You can't always know where a dataset has come from, so you may not have the benefit of understanding why a particular decision was made. The data could come from a colleague, a Kaggle competition, or simply be an example dataset included with scikit-learn, like the one we are using now. This is the same California dataset that was used in Chapter 2, Analyzing Open Source Software, and we'll assume that you already have y_test and y_predict set up. If not, refer back to Chapter 2, Analyzing Open Source Software.
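If you want a quick way to recreate that setup without going back to Chapter 2, the sketch below assumes the dataset in question is scikit-learn's California housing data (fetch_california_housing) and that a simple linear regression is used to produce the predictions; the variable names y_test and y_predict mirror those used in the text, and the model choice here is an assumption, not necessarily the one used earlier.

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Load the California housing data and hold out a test set
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit a simple model so that y_predict is available for later comparison
model = LinearRegression()
model.fit(X_train, y_train)
y_predict = model.predict(X_test)
```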