Yes, a well-calibrated model can have low accuracy, and vice versa. Take a dummy model that outputs a probability of 0.1 for every input example. On a dataset where the positive class makes up 10% of the examples, this model is perfectly calibrated, yet its accuracy of 90% is no better than always predicting the majority class on a dataset with a 1:9 imbalance ratio. Here is the implementation of such a model:

from sklearn.datasets import make_classification
from sklearn.calibration import calibration_curve
import matplotlib.pyplot as plt
import numpy as np
# Make an imbalanced binary classification dataset
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
              1, 0, 0, 1, 0, 0, 0, 0, 0, 0])
# Dummy model always predicts a probability of 0.1 for class 1
# (i.e., 0.9 for class 0)
y_pred = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
                   0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
# Hard labels: a 0.1 probability thresholded at 0.5 is always class 0
y_pred_labels = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                          0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
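To actually verify the claim, a minimal sketch follows, assuming the calibration_curve and matplotlib imports above are meant for this check; the accuracy_score import, the n_bins value, and the plotting details are assumptions rather than part of the original code.

from sklearn.metrics import accuracy_score

# Accuracy is 0.9: exactly the majority-class baseline for a 1:9 imbalance
print("Accuracy:", accuracy_score(y, y_pred_labels))

# With every prediction equal to 0.1, all examples fall into one bin whose
# empirical positive rate is 2/20 = 0.1, matching the predicted probability
prob_true, prob_pred = calibration_curve(y, y_pred, n_bins=5)

# Plot the single calibration point against the perfect-calibration diagonal
plt.plot([0, 1], [0, 1], linestyle="--", label="Perfectly calibrated")
plt.plot(prob_pred, prob_true, marker="o", label="Dummy model")
plt.xlabel("Mean predicted probability")
plt.ylabel("Fraction of positives")
plt.legend()
plt.show()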