This chapter provided details on how to build a training pipeline using the TF 2.0 tf.keras APIs: how to build, compile, and fit a model with the available loss functions, optimizers, and hyperparameters, and how to train it in a distributed manner on GPUs using a distribution strategy. It also detailed how to save and restore your model at training time for future training and inference. With TensorBoard being one of the major strengths of TF 2.0, we covered how to use it efficiently to monitor training loss and accuracy, and how to debug and profile training.
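The workflow summarized above can be sketched end to end as follows. This is a minimal, illustrative example, not code from the chapter: the layer sizes, learning rate, and the `logs/` and `ckpt.weights.h5` paths are placeholder choices, and random data stands in for a real dataset.

```python
import numpy as np
import tensorflow as tf

# Distribution strategy: mirrors the model across all visible GPUs,
# falling back to a single CPU replica if no GPU is available.
strategy = tf.distribute.MirroredStrategy()

# Build and compile the model inside the strategy scope so its
# variables are created per replica.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# Placeholder training data.
x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# Checkpointing for save/restore, plus TensorBoard logging of metrics.
callbacks = [
    tf.keras.callbacks.ModelCheckpoint("ckpt.weights.h5",
                                       save_weights_only=True),
    tf.keras.callbacks.TensorBoard(log_dir="logs"),
]

history = model.fit(x, y, epochs=1, batch_size=16,
                    callbacks=callbacks, verbose=0)
```

After training, the saved weights can be restored with `model.load_weights("ckpt.weights.h5")`, and the curves logged to `logs/` can be inspected by pointing TensorBoard at that directory.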
In the next chapter, we will learn about model inference pipelines and how to deploy them across multiple platforms.