Summary
In this chapter, you learned several important model performance improvement techniques. We started with the bias-variance trade-off and understood how it impacts a model's performance. We now know that high bias results in underfitting, whereas high variance results in overfitting, and that reducing one typically comes at the expense of increasing the other. Therefore, in order to build the best models, we need to strike the right balance between bias and variance in machine learning models.
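As a quick illustration of this trade-off, consider the following toy R sketch (synthetic data invented for illustration, not taken from the chapter): a straight line underfits a noisy sine curve (high bias), while a degree-15 polynomial chases the noise (high variance).

set.seed(1)
x <- seq(0, 1, length.out = 50)
y <- sin(2 * pi * x) + rnorm(50, sd = 0.3)
train_idx <- sample(50, 35)
test_idx  <- setdiff(seq_len(50), train_idx)

# High-bias model: a straight line underfits the sine curve
fit_underfit <- lm(y ~ x, subset = train_idx)
# High-variance model: a degree-15 polynomial overfits the noise
fit_overfit  <- lm(y ~ poly(x, 15), subset = train_idx)

rmse <- function(fit, idx) {
  sqrt(mean((y[idx] - predict(fit, newdata = data.frame(x = x[idx])))^2))
}

# The underfit model shows similar, high error on both sets;
# the overfit model shows low training error but much higher test error
c(underfit_train = rmse(fit_underfit, train_idx),
  underfit_test  = rmse(fit_underfit, test_idx),
  overfit_train  = rmse(fit_overfit,  train_idx),
  overfit_test   = rmse(fit_overfit,  test_idx))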
Next, we explored various cross-validation techniques and the ready-to-use functions R provides to implement them. We studied the holdout, k-fold, and leave-one-out approaches to cross-validation and understood how they enable robust assessment of a machine learning model's performance. We then studied hyperparameter tuning and explored grid search optimization, random search optimization, and Bayesian optimization techniques in detail. Hyperparameter tuning helps us find the combination of hyperparameter values that gets the best performance out of a machine learning model.
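As a minimal sketch of how cross-validation and grid search fit together, the caret package (assuming it and the randomForest package are installed; the built-in iris dataset stands in for the chapter's own data) lets us pair a k-fold resampling scheme with a grid of candidate hyperparameter values:

library(caret)   # assumes caret and randomForest are installed
set.seed(42)

# 10-fold cross-validation as the resampling scheme
ctrl <- trainControl(method = "cv", number = 10)

# A small grid of candidate values for the random forest's 'mtry' hyperparameter
grid <- expand.grid(mtry = c(1, 2, 3, 4))

# train() fits a random forest for each candidate mtry value and
# reports the cross-validated accuracy of every candidate
model <- train(Species ~ ., data = iris,
               method = "rf",
               trControl = ctrl,
               tuneGrid = grid)

print(model)   # per-mtry CV results and the selected value

The same scaffolding covers the other approaches we studied: swapping method = "cv" for method = "LOOCV" in trainControl() gives leave-one-out validation, and setting search = "random" in trainControl() together with tuneLength in train() replaces the grid search with a random search over the hyperparameter space.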