Tracking experiments with MLflow
In real life, building a single model is never sufficient. A typical model-building process involves several iterations, sometimes changing the model parameters and other times tweaking the training dataset, until the desired level of model accuracy is achieved. A model that's suitable for one use case might not be useful for another. This means that a typical data science process involves experimenting with several models to solve a single business problem, and keeping track of all the datasets, model parameters, and model metrics for future reference. Traditionally, experiment tracking has been done with rudimentary tools such as spreadsheets, but this slows down the path to production and is a tedious, mistake-prone process.
The MLflow Tracking component solves this problem with its API and UI for logging ML experiments, including model parameters, model code, metrics, the output of the...