Summary
In this chapter, we took a deeper look at the important concept of objective metrics in machine learning. We covered in detail many of the most popular metrics used to evaluate binary classification models, such as precision, recall, F1 score, and ROC AUC. We then moved on to hyperparameter optimization, including some of the key theory in this area, such as the different types of methods that can be used to search for the optimal set of hyperparameter values. This also provided some insight into why manual hyperparameter tuning can be very difficult, or even impossible to perform efficiently, given the large number of trials that can be required.
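As a quick recap, the binary classification metrics discussed in this chapter can all be computed with scikit-learn. The sketch below uses illustrative placeholder labels and scores, not data from the chapter:

```python
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# Illustrative ground-truth labels and model outputs (placeholder data).
y_true = [0, 1, 1, 0, 1, 0, 1, 1]                    # actual classes
y_score = [0.2, 0.8, 0.3, 0.6, 0.9, 0.1, 0.7, 0.4]   # predicted probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]     # threshold at 0.5

print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1 score: ", f1_score(y_true, y_pred))         # harmonic mean of the two
# ROC AUC is threshold-independent, so it uses the raw scores.
print("ROC AUC:  ", roc_auc_score(y_true, y_score))
```

Note that precision, recall, and F1 depend on the chosen decision threshold, while ROC AUC summarizes performance across all thresholds.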
Next, we explored the Google Cloud Vertex AI Vizier service, which can automate the hyperparameter tuning process for us. We then performed hands-on activities in a Jupyter notebook on Vertex AI, where we used Vizier to automatically find the best set of hyperparameters...
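For reference, a Vizier-backed tuning job follows a pattern roughly like the sketch below, which uses the google-cloud-aiplatform SDK. The project, bucket, container image, metric name, and hyperparameter names are illustrative assumptions, not values from the chapter's exercise:

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

# Illustrative project/region/bucket values -- replace with your own.
aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-staging-bucket")

# A custom training job; its container is expected to report the objective
# metric back to Vertex AI (e.g., via the cloudml-hypertune library).
custom_job = aiplatform.CustomJob(
    display_name="train-binary-classifier",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    }],
)

# Vizier searches this parameter space, maximizing the reported metric.
hpt_job = aiplatform.HyperparameterTuningJob(
    display_name="tune-binary-classifier",
    custom_job=custom_job,
    metric_spec={"roc_auc": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
        "num_layers": hpt.IntegerParameterSpec(min=1, max=4, scale="linear"),
    },
    max_trial_count=20,      # total trials Vizier may run
    parallel_trial_count=4,  # trials run concurrently
)

hpt_job.run()
```

This is exactly the kind of search that is impractical to run by hand: even this small two-parameter space can require dozens of trials, which Vizier schedules and prunes automatically.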