Plotting calibration curves for a model trained on a real-world dataset
Model calibration should ideally be done on a dataset that is separate from both the training and test sets. Why? Calibrating on data the model was fitted or evaluated on risks overfitting: the calibration map can become too tailored to that data's unique characteristics and fail to generalize.
Ideally, we set aside a hold-out dataset specifically for model calibration. In some cases, however, we have too little data to justify carving out a separate calibration split. A practical compromise is then to calibrate on the test set, assuming the test set has the same distribution as the data on which the model will make its final predictions. We should keep in mind, though, that after calibrating on the test set we no longer have an unbiased estimate of the model's final performance, and we need to be cautious when interpreting its performance metrics.
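The three-way split described above can be sketched as follows. This is a minimal illustration using a synthetic dataset and a naive Bayes classifier as stand-ins (not the dataset or model discussed in this section); `calibration_curve` from scikit-learn bins the predicted probabilities on the calibration hold-out and compares each bin's mean prediction with the observed fraction of positives:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.calibration import calibration_curve

# Synthetic stand-in data for illustration only.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

# Three-way split: 60% train, 20% test, 20% calibration hold-out.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=0
)
X_test, X_cal, y_test, y_cal = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0
)

model = GaussianNB().fit(X_train, y_train)
probs = model.predict_proba(X_cal)[:, 1]

# frac_pos[i]: observed positive rate in bin i;
# mean_pred[i]: mean predicted probability in bin i.
# A well-calibrated model has frac_pos close to mean_pred in every bin.
frac_pos, mean_pred = calibration_curve(y_cal, probs, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"mean predicted {p:.2f} -> observed {f:.2f}")
```

Plotting `mean_pred` against `frac_pos` (with the diagonal `y = x` as the perfectly calibrated reference) gives the calibration curve; if data is too scarce for the three-way split, the same call can be made with the test set in place of `(X_cal, y_cal)`, with the caveats noted above.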
We use the HR...