Optimizing training for image segmentation
In the previous recipe, we saw how we could leverage MXNet and Gluon to optimize the training of our models with a variety of techniques. We learned how to jointly use lazy evaluation and automatic parallelization for parallel processing. We saw how to improve the performance of our DataLoaders by splitting preprocessing between the CPU and GPU, and how using half-precision (Float16) in combination with AMP can halve our training times. Lastly, we explored how to take advantage of multiple GPUs to further reduce training times.
Now, we can revisit a problem we have been working with throughout the book: image segmentation. We have worked on this task in recipes from previous chapters. In the Segmenting objects semantically with MXNet Model Zoo – PSPNet and DeepLabv3 recipe in Chapter 5, we learned how to use pre-trained models from the GluonCV Model Zoo, and introduced the task and the datasets that we will be using in this...