Summary
Having arrived at the end of the chapter, let's summarize the advice we have discussed along the way, so you can organize your validation strategy and reach the end of a competition with a few suitable models to submit.
In this chapter, we first analyzed the dynamics of the public leaderboard, exploring problems such as adaptive overfitting and shake-ups. We then discussed the importance of validation in a data science competition: how to build a reliable validation system, tune it against the leaderboard, and keep track of your efforts.
Having discussed the various validation strategies, we also looked at the best ways to tune your hyperparameters and to check whether your test data or validation partitions differ from your training data by using adversarial validation. We concluded by discussing some of the leakages that have appeared in Kaggle competitions and provided advice on how to deal with them.
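As a quick reminder of the adversarial validation idea mentioned above, here is a minimal sketch. It assumes you already have two numeric feature DataFrames, train_df and test_df (hypothetical names), with identical columns and any categorical features already encoded; it trains a classifier to distinguish training rows from test rows and reports the cross-validated ROC-AUC.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def adversarial_validation(train_df, test_df, n_splits=5):
    """Estimate how distinguishable the train and test sets are.

    Returns the mean cross-validated ROC-AUC of a classifier trying
    to tell training rows apart from test rows. A score near 0.5
    suggests the two sets come from similar distributions; a score
    close to 1.0 warns that your validation partitions may not
    reflect the test data.
    """
    # Label the origin of each row: 0 = train, 1 = test
    combined = pd.concat(
        [train_df.assign(is_test=0), test_df.assign(is_test=1)],
        ignore_index=True,
    )
    X = combined.drop(columns=["is_test"])
    y = combined["is_test"]

    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    scores = cross_val_score(clf, X, y, cv=n_splits, scoring="roc_auc")
    return scores.mean()
```

If the score comes back high, inspecting the fitted classifier's feature importances can reveal which columns drift between train and test, which in turn can guide how you build your validation partitions.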
Here are our closing suggestions:
- Always spend the first part of the competition...