Tracking model metrics
So far, we have trained language models and analyzed only the final results. In Chapter 8, we observed the training process and compared training runs with Hyperparameter Optimization (HPO). In this section, we will briefly discuss how to visually monitor model training with external tools, reusing the example we developed in Chapter 5.
There are several important experiment-tracking frameworks in deep learning, such as MLflow, Neptune, TensorBoard, W&B (Weights & Biases), CodeCarbon, and ClearML. To keep things simple, we will use TensorBoard and W&B. With the former, we save training results to a local drive and visualize them at the end of the experiment; with the latter, we monitor training progress live on a cloud platform.
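Before looking at each tool separately, the following is a minimal sketch of how both back ends can be enabled through the Hugging Face Trainer used in earlier chapters. The output directory, logging directory, logging interval, and other argument values shown here are illustrative assumptions rather than the exact settings from our earlier example.

```python
# Minimal sketch: sending training metrics to TensorBoard and W&B
# via the Hugging Face Trainer. Paths and hyperparameter values are
# placeholders chosen for illustration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",              # hypothetical checkpoint directory
    logging_dir="./logs",                # TensorBoard event files are written here
    logging_steps=50,                    # log metrics every 50 optimization steps
    report_to=["tensorboard", "wandb"],  # report metrics to both back ends
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
# TensorBoard: run `tensorboard --logdir ./logs` to inspect the saved metrics.
# W&B: run `wandb login` once; metrics then stream live to the cloud dashboard.
```

Passing these arguments to a Trainer instance, as in the earlier chapters, is enough for metrics to be logged to both destinations during training.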
This section is only a short introduction to these tools; a detailed treatment of them is beyond the scope of this chapter.
Next, let’s start with TensorBoard.