MetaCost – making any classification model cost-sensitive
MetaCost was introduced by Pedro Domingos [7] in 1999. It acts as a wrapper around a machine learning algorithm, converting the underlying algorithm into a cost-sensitive version of itself. It treats the underlying algorithm as a black box and works best with unstable algorithms (defined below).

When MetaCost was first proposed, CSL was in its early stages, and only a few algorithms, such as decision trees, had been converted into cost-sensitive versions. For some models, creating a cost-sensitive version turned out to be easy, while for others it was a non-trivial task. For algorithms where a cost-sensitive version was difficult to define, practitioners mostly relied on data sampling techniques such as oversampling and undersampling. It was in this context that Domingos proposed an approach for converting a wide range of algorithms into cost-sensitive versions. MetaCost works with any number of classes and with arbitrary cost matrices.
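At a high level, the procedure from Domingos's paper has three steps: estimate class probabilities P(j|x) for each training example by bagging the base learner on bootstrap resamples, relabel every training example with the class that minimizes expected cost under the cost matrix, and retrain the base learner once on the relabeled data. The following is a minimal sketch of that procedure, assuming a scikit-learn-style estimator with fit and predict_proba and integer class labels 0..K-1; the function name metacost and its parameters are illustrative, not part of any library API:

```python
# A minimal sketch of the MetaCost procedure, assuming a scikit-learn-style
# base estimator with fit/predict_proba and integer class labels 0..K-1.
# The function name `metacost` and its parameters are illustrative.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def metacost(base_estimator, X, y, cost_matrix, n_resamples=10, seed=None):
    """Relabel the training set to minimize expected cost, then retrain.

    cost_matrix[i, j] is the cost of predicting class i when the true
    class is j (row/column conventions vary across the literature).
    """
    rng = np.random.default_rng(seed)
    n_samples, n_classes = len(X), cost_matrix.shape[0]

    # Step 1: estimate P(j|x) by bagging the black-box base learner.
    # Domingos's original procedure optionally excludes models whose
    # bootstrap sample contains x; this sketch averages over all models.
    probs = np.zeros((n_samples, n_classes))
    for _ in range(n_resamples):
        idx = rng.integers(0, n_samples, size=n_samples)
        model = clone(base_estimator).fit(X[idx], y[idx])
        probs[:, model.classes_] += model.predict_proba(X)
    probs /= n_resamples

    # Step 2: relabel each example with the class of minimum expected
    # cost, i.e. argmin_i sum_j P(j|x) * C(i, j).
    y_relabeled = np.argmin(probs @ cost_matrix.T, axis=1)

    # Step 3: retrain the base learner once on the relabeled data.
    return clone(base_estimator).fit(X, y_relabeled)

# Usage on an imbalanced toy problem where missing the minority class
# (a false negative) is five times as costly as a false alarm.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
C = np.array([[0.0, 5.0],   # predict 0: free if true 0, costly miss if true 1
              [1.0, 0.0]])  # predict 1: false alarm costs 1, hit is free
clf = metacost(DecisionTreeClassifier(), X, y, C, n_resamples=25, seed=0)
```

Note that the retrained model predicts the cost-minimizing labels directly: the cost sensitivity lives in the relabeled data rather than in the algorithm, which is what lets MetaCost treat the base learner as a black box. Because the probability estimates come from bagging, the wrapper benefits most from unstable base learners, whose bootstrap models disagree enough to yield informative probability estimates.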