Directly optimizing the metric
The loss function and the evaluation metric of a deep learning model are two separate components. One trick for improving a model's performance on the chosen metric is to optimize against that metric directly, rather than only monitoring it to pick the best-performing model weights or to trigger early stopping. In other words, use the metric itself as the loss!
By optimizing directly for the metric of interest, the model improves in a way that is relevant to the end goal, instead of chasing a proxy loss function that may be only loosely related to its ultimate performance. In practice, this can yield noticeably better results on the metric.
However, not every metric can serve as a loss, because not every metric is differentiable. Remember that backpropagation requires every function in the computation to be differentiable so that gradients can be computed and propagated back through the network.
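A common workaround is a "soft" relaxation of the metric: evaluate it on raw predicted probabilities instead of hard-thresholded predictions, so the expression stays differentiable. As a minimal sketch (the function name `soft_dice_loss` and the plain-Python formulation are illustrative, not from the original text), here is a soft version of the Dice score turned into a loss:

```python
def soft_dice_loss(probs, targets, eps=1e-6):
    """Differentiable surrogate for the Dice metric.

    probs:   predicted probabilities in [0, 1] (no thresholding,
             which keeps the expression differentiable)
    targets: binary ground-truth labels (0 or 1)
    eps:     small constant to avoid division by zero
    """
    intersection = sum(p * t for p, t in zip(probs, targets))
    total = sum(probs) + sum(targets)
    dice = (2.0 * intersection + eps) / (total + eps)
    # Minimizing the loss maximizes the Dice score.
    return 1.0 - dice
```

A perfect prediction gives a loss near 0, while a completely wrong one gives a loss near 1; in a real framework the same formula would be written with tensor operations so that gradients flow through `probs`.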