Pursuing a data-related career requires a tolerance for imperfection. Handling missing values is a step we cannot progress without, so we started this chapter by learning about different data imputation methods. Additionally, data that suits one task may not suit another; that's why we learned about feature encoding and how to convert categorical and ordinal data to fit our machine learning needs. Helping algorithms perform better can require rescaling the numerical features, so we learned about three scaling methods. Finally, data abundance can be a curse on our models, and feature selection, along with regularization, is one prescribed remedy for the curse of dimensionality.
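To make the recap concrete, here is a minimal pure-Python sketch of three of the steps above: mean imputation, ordinal encoding, and min-max scaling. The function names, the toy height values, and the S/M/L size categories are illustrative assumptions, not taken from the chapter.

```python
def impute_mean(values):
    """Mean imputation: replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def encode_ordinal(values, order):
    """Ordinal encoding: map ordered categories to integers by a given ranking."""
    ranks = {category: i for i, category in enumerate(order)}
    return [ranks[v] for v in values]

def scale_min_max(values):
    """Min-max scaling: rescale numbers into the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Toy data (assumed for illustration)
heights = impute_mean([150.0, None, 170.0])               # [150.0, 160.0, 170.0]
sizes = encode_ordinal(["S", "L", "M"], ["S", "M", "L"])  # [0, 2, 1]
scaled = scale_min_max(heights)                           # [0.0, 0.5, 1.0]
```

In practice, a library such as scikit-learn provides fitted transformers for these steps, which also remember the training-set statistics so the same transformation can be applied to unseen data.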
One main theme ran through this entire chapter: the trade-off between simple, quick methods and more informed, computationally expensive methods that may result in overfitting. Knowing which methods to use requires an...