Monotonic Constraints and Model Tuning for Interpretability
Most model classes have hyperparameters that can be tuned for faster execution, better predictive performance, and reduced overfitting. One way to reduce overfitting is to introduce regularization into model training. In Chapter 3, Interpretation Challenges, we called regularization a remedial interpretability property: it reduces complexity with a penalty or limitation that forces the model to learn sparser representations of the inputs. Regularized models generalize better, which is why it is highly recommended to tune models with regularization to avoid overfitting to the training data. As a side effect, regularized models tend to have fewer features and interactions, making them easier to interpret; less noise means a clearer signal!
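To make the sparsity effect concrete, here is a minimal sketch using scikit-learn. It contrasts a weakly regularized (L2) logistic regression with a strongly L1-regularized one on synthetic data; the dataset, penalty strengths, and feature counts are illustrative assumptions, not values from this book.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data: 20 features, of which only 5 are informative
# (an illustrative setup, not a dataset used in the text)
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)

# Weak L2 penalty (large C) vs strong L1 (lasso-style) penalty (small C)
l2_model = LogisticRegression(penalty="l2", C=10.0, max_iter=1000).fit(X, y)
l1_model = LogisticRegression(penalty="l1", C=0.1,
                              solver="liblinear").fit(X, y)

def nonzero_coefs(model):
    """Count coefficients that were not driven to (near) zero."""
    return int(np.sum(np.abs(model.coef_) > 1e-6))

print("L2 non-zero coefficients:", nonzero_coefs(l2_model))  # typically all 20
print("L1 non-zero coefficients:", nonzero_coefs(l1_model))  # typically far fewer
```

The L1 penalty zeroes out coefficients for uninformative features, so the resulting model depends on fewer inputs, which is exactly the interpretability side effect described above.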
Even though there are many hyperparameters, we will focus only on those that improve interpretability by controlling overfitting. Also, to a certain...