Introducing MFO
MFO (Multi-Fidelity Optimization) is a group of hyperparameter tuning methods that build a cheap approximation of the full hyperparameter tuning pipeline, aiming for similar performance results at a much lower computational cost and with faster experiment time. There are many ways to create such a cheap approximation. For example, we can work on subsets of the data during the first several steps rather than working on the full data from the start, or we can train a neural-network-based model for fewer epochs before training it with the full number of epochs. In other words, MFO methods combine cheap low-fidelity evaluations with expensive high-fidelity evaluations, where the proportion of cheap evaluations is usually much larger than that of expensive ones, which is what yields the lower computational cost and faster experiment time. However, MFO methods can also be categorized as part of the informed search category, since they utilize knowledge from previous iterations.
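To make the idea concrete, here is a minimal sketch of one well-known MFO scheme, successive halving, assuming scikit-learn is available. The dataset, the candidate configurations, and the fidelity schedule (the fraction of training data used at each rung) are illustrative choices for this sketch, not the API of any particular MFO library.

```python
# A minimal sketch of multi-fidelity evaluation via successive halving.
# Assumption: the fidelity knob is the fraction of training data used;
# epochs or other resources could play the same role.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Full dataset (the "high-fidelity" resource).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Randomly sampled candidate hyperparameter configurations.
candidates = [
    {"n_estimators": int(rng.integers(10, 200)),
     "max_depth": int(rng.integers(2, 16))}
    for _ in range(16)
]

def evaluate(config, fraction):
    """Train on a `fraction` of the training data (the fidelity)
    and return validation accuracy."""
    n = max(1, int(fraction * len(X_train)))
    model = RandomForestClassifier(**config, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    return accuracy_score(y_val, model.predict(X_val))

# Successive halving: many cheap low-fidelity runs, few expensive
# high-fidelity runs. At each rung, keep the best half of the configs
# and double the fidelity for the survivors.
fraction = 0.1
while len(candidates) > 1:
    scores = [evaluate(c, fraction) for c in candidates]
    order = np.argsort(scores)[::-1]
    candidates = [candidates[i] for i in order[: max(1, len(candidates) // 2)]]
    fraction = min(1.0, fraction * 2)

print("Best configuration:", candidates[0])
```

Notice how the proportions work out in this sketch: all sixteen configurations are evaluated cheaply on 10% of the training data, but only the surviving handful earn the more expensive, higher-fidelity evaluations, which is exactly the cost trade-off described above.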