Revealing the Secret of Deep Learning Models
So far, we have described how to construct and efficiently train a deep learning (DL) model. However, model training often requires multiple iterations because only rough guidance exists on how to configure training correctly for a given task.
In this chapter, we will introduce hyperparameter tuning, the standard process for finding the right training configuration. As we guide you through the steps of hyperparameter tuning, we will introduce the search algorithms most commonly adopted for the tuning process (grid search, random search, and Bayesian optimization). We will also look into the field of Explainable AI (XAI), which aims to explain how models arrive at their predictions. We will describe the three most common techniques in this domain: Permutation Feature Importance (PFI), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME).
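To give you a feel for the first two search algorithms before we cover them in detail, here is a minimal sketch contrasting grid search and random search over a small hyperparameter space. The train_and_evaluate function is a hypothetical placeholder that stands in for training a model with a given configuration and returning its validation score:

```python
# A minimal sketch of grid search vs. random search over a small
# hyperparameter space. train_and_evaluate is a hypothetical stand-in
# for training a model and returning a validation metric.
import itertools
import random

search_space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [32, 64, 128],
}

def train_and_evaluate(config):
    # Placeholder: in practice, train the model with this configuration
    # and return a validation metric (higher is better here).
    return -abs(config["learning_rate"] - 1e-3)  # dummy score

# Grid search: evaluate every combination in the search space.
grid_configs = [
    dict(zip(search_space, values))
    for values in itertools.product(*search_space.values())
]
best_grid = max(grid_configs, key=train_and_evaluate)

# Random search: evaluate a fixed budget of randomly sampled configurations.
random.seed(0)
random_configs = [
    {name: random.choice(choices) for name, choices in search_space.items()}
    for _ in range(5)
]
best_random = max(random_configs, key=train_and_evaluate)

print("Best configuration (grid search):", best_grid)
print("Best configuration (random search):", best_random)
```

Grid search exhausts the space but grows combinatorially with the number of hyperparameters, whereas random search trades completeness for a fixed evaluation budget; Bayesian optimization, which we cover later in the chapter, instead uses the results of past trials to decide which configuration to try next.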
In this chapter, we’re going to cover the following...