Chapter 12: Monotonic Constraints and Model Tuning for Interpretability
Most model classes have hyperparameters that can be tuned for faster execution, increased predictive performance, and reduced overfitting. One way of reducing overfitting is to introduce regularization into model training. In Chapter 3, Interpretation Challenges, we called regularization a remedial interpretability property: it reduces complexity with a penalty or limitation that forces the model to learn sparser representations of the inputs. Regularized models generalize better, which is why tuning models with this strategy is highly recommended. As a side effect, fewer features and interactions are essential to the regularized model, making it easier to interpret—less noise means a clearer signal!
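To make the sparsity effect concrete, here is a minimal sketch (not from the chapter) using scikit-learn's Lasso, whose L1 penalty drives unhelpful coefficients exactly to zero. The synthetic dataset, the alpha values, and the n_active helper are illustrative assumptions chosen for the demonstration:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: only 2 of 10 features actually drive the target
# (an assumption made for this illustration).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Weakly vs. strongly regularized fits of the same model class.
weak = Lasso(alpha=0.001).fit(X, y)
strong = Lasso(alpha=0.5).fit(X, y)

def n_active(model, tol=1e-6):
    """Count coefficients the fitted model actually uses."""
    return int(np.sum(np.abs(model.coef_) > tol))

# The stronger penalty keeps only the truly informative features,
# yielding a sparser, easier-to-interpret model.
print("active features (weak penalty):", n_active(weak))
print("active features (strong penalty):", n_active(strong))
```

The same principle carries over to tree ensembles, where hyperparameters such as L1/L2 penalty terms and depth limits play an analogous role in trading a little raw fit for a simpler, more interpretable model.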
Even though there are many hyperparameters, we will focus only on those that improve interpretability by controlling overfitting. Also, to a certain extent, we will revisit...