Engineering the base model evaluation metric
Engineering a metric for your use case is a skill that is often overlooked, most likely because most projects are built on publicly available datasets that almost always come with a proposed metric. This includes Kaggle competitions and many of the public datasets people benchmark against. In real-world work, however, a metric doesn't simply get handed to you. Let's explore this topic further here and build that skill.
The model evaluation metric is the first evaluation method to consider, and it is essential in supervised projects (it does not apply to unsupervised projects). A handful of baseline metrics serve as the de facto choices depending on the problem and target type, and there are also more customized versions of these baselines tailored to special objectives. For example, generative tasks can be evaluated with a human-based opinion score called the mean opinion score (MOS).
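As a minimal sketch of how the baseline metric follows from the target type, the snippet below pairs classification targets with accuracy and F1 and regression targets with RMSE and MAE, using scikit-learn. The toy arrays are purely illustrative and not from any real project.

```python
# Sketch: matching a baseline metric to the target type (assumes scikit-learn).
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    mean_absolute_error,
    mean_squared_error,
)

# Classification targets: compare predicted class labels to the ground truth.
y_true_cls = np.array([0, 1, 1, 0, 1])
y_pred_cls = np.array([0, 1, 0, 0, 1])
print("accuracy:", accuracy_score(y_true_cls, y_pred_cls))
print("f1:", f1_score(y_true_cls, y_pred_cls))

# Regression targets: measure the distance between predicted and true values.
y_true_reg = np.array([2.5, 0.0, 2.1, 7.8])
y_pred_reg = np.array([3.0, -0.1, 2.0, 8.0])
print("rmse:", np.sqrt(mean_squared_error(y_true_reg, y_pred_reg)))
print("mae:", mean_absolute_error(y_true_reg, y_pred_reg))
```

The point of the sketch is not the specific numbers but the decision it encodes: the target type narrows the field to a small set of de facto metrics, which you can then customize for your objective.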