Understanding the mixed precision strategy
The benefits of using lower-precision formats are clear: besides saving memory, processing lower-precision numbers demands less computing power than processing higher-precision ones.
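To make the memory saving concrete, here is a minimal sketch comparing the storage footprint of the same values in single and half precision. NumPy is used here only for illustration; it is not necessarily the library the rest of the chapter relies on.

```python
import numpy as np

# One million values stored in two different numeric precisions.
full = np.ones(1_000_000, dtype=np.float32)  # single precision: 4 bytes per value
half = np.ones(1_000_000, dtype=np.float16)  # half precision: 2 bytes per value

print(full.nbytes)  # 4000000 bytes
print(half.nbytes)  # 2000000 bytes
```

Halving the bits per value halves the memory footprint, and hardware with dedicated half-precision units can process these smaller numbers faster as well.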
One approach to accelerating the training process of machine learning models involves employing a mixed precision strategy. As in Chapter 6, Simplifying the Model, we will understand this strategy by asking (and answering, of course) a couple of simple questions about this approach.
Note
When searching for information about reducing the precision of deep learning models, you may come across the term model quantization. Although the terms are related, the goal of mixed precision is quite different from that of model quantization. The former aims to accelerate the training process by employing reduced numeric precision formats, whereas the latter focuses on reducing the complexity of trained models to use in the...