Evaluating the accuracy of the quantized model
The trained model can classify the 10 classes of CIFAR-10 with an accuracy of 71.9%. However, before deploying the model on a microcontroller, it must be quantized with TensorFlow Lite, which may reduce the accuracy.
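As a preview of the quantization step, the following is a minimal sketch of full-integer (8-bit) post-training quantization with the TFLiteConverter API. The names `model` (the trained Keras model) and `x_val` (the validation images, preprocessed as during training) are assumptions used for illustration:

```python
import numpy as np
import tensorflow as tf

# Assumption: `model` is the trained Keras model and `x_val` holds the
# validation images, preprocessed exactly as during training.
def representative_dataset():
    # A few hundred samples are enough for the converter to calibrate
    # the quantization ranges of the activations.
    for sample in x_val[:200]:
        yield [np.expand_dims(sample, axis=0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force 8-bit integer quantization of weights, activations, and I/O tensors
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```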
In this recipe, we will demonstrate the quantization process and perform an accuracy evaluation on the validation dataset using the TensorFlow Lite Python interpreter. The reason for using the validation rather than the test dataset is to assess how much the 8-bit quantization alters the accuracy observed during model training. Following the accuracy evaluation, we will finalize the recipe by converting the TensorFlow Lite model into a C-byte array.
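A minimal sketch of that accuracy evaluation with the TensorFlow Lite Python interpreter might look as follows. Here, `x_val` and `y_val` (the validation images and integer labels) and the file name `model.tflite` are assumptions carried over from the previous sketch:

```python
import numpy as np
import tensorflow as tf

# Assumption: model.tflite is the 8-bit quantized model, and x_val/y_val are
# the validation images and labels used during training.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# The int8 model expects quantized inputs: q = round(x / scale + zero_point)
scale, zero_point = input_details["quantization"]

correct = 0
for image, label in zip(x_val, y_val):
    q_image = np.clip(np.round(image / scale + zero_point), -128, 127).astype(np.int8)
    interpreter.set_tensor(input_details["index"], np.expand_dims(q_image, axis=0))
    interpreter.invoke()
    output = interpreter.get_tensor(output_details["index"])[0]
    if np.argmax(output) == np.squeeze(label):
        correct += 1

print("Quantized model accuracy:", correct / len(x_val))
```

Once the quantized accuracy is acceptable, one common way to produce the C-byte array is the `xxd` utility, for example `xxd -i model.tflite > model.h`.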
Getting ready
As we know, the trained model must be converted into a more compact and lightweight representation before being deployed on a resource-constrained device such as a microcontroller.
Quantization is an essential part of this step, as it makes the model...