Modeling expectations
So far, you have learned about model building, validation, and management. You can now round out the foundations of ML by learning about a couple of other expectations to keep in mind while modeling.
The first one is parsimony. A parsimonious model is one that offers the simplest explanation while fitting the data as well as, or nearly as well as, more complex alternatives. Here’s an example: while creating a linear regression model, you realize that adding 10 more features will improve your model’s performance by 0.001%. In this scenario, you should consider whether this performance improvement is worth the loss of parsimony, since your model will become more complex. Sometimes it is worth it, but most of the time it is not. You need to be skeptical and weigh the trade-off against your business case.
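The following is a minimal sketch of that trade-off, using scikit-learn on synthetic data (both the library and the dataset are illustrative assumptions, not part of the example above):

```python
# Hypothetical parsimony check: compare a 5-feature model against a
# 15-feature model and see whether the extra features earn their keep.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic dataset: 15 features, but only the first 5 are informative
# (shuffle=False keeps the informative columns first).
X, y = make_regression(n_samples=1_000, n_features=15, n_informative=5,
                       noise=10.0, shuffle=False, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Parsimonious model: only the first 5 features.
parsimonious = LinearRegression().fit(X_train[:, :5], y_train)

# More complex model: all 15 features (10 extra).
full_model = LinearRegression().fit(X_train, y_train)

print(f"5 features:  R^2 = {parsimonious.score(X_test[:, :5], y_test):.4f}")
print(f"15 features: R^2 = {full_model.score(X_test, y_test):.4f}")
```

If the gap between the two R^2 scores is negligible, the 5-feature model is usually the better choice: it is cheaper to maintain and easier to explain.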
Parsimony directly supports interpretability: the simpler your model is, the easier it is to explain. However, there is a tension between interpretability and predictive power: if you focus on maximizing predictive power, you are likely to lose some interpretability. Again, you must choose what is best for your use case.
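To make the tension concrete, here is a hedged sketch comparing an interpretable linear model with a gradient-boosted ensemble on a nonlinear synthetic problem (again, scikit-learn and the dataset are assumptions made for illustration):

```python
# Hypothetical interpretability vs. predictive power comparison.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# make_friedman1 generates a nonlinear target, so a flexible model
# has room to outperform a linear one.
X, y = make_friedman1(n_samples=2_000, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_train, y_train)
boosted = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# The linear model is directly explainable: each coefficient is the
# marginal effect of one feature on the prediction.
print("Linear coefficients:", linear.coef_.round(2))
print(f"Linear R^2:  {linear.score(X_test, y_test):.4f}")

# The boosted ensemble typically predicts better here, but its hundreds
# of trees offer no comparably simple explanation.
print(f"Boosted R^2: {boosted.score(X_test, y_test):.4f}")
```

On a problem like this, the ensemble usually scores higher, but only the linear model can be summarized in one sentence per feature; which side of that trade-off you pick depends on your use case.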