Summary
In this chapter, our focus has been on tailoring an ML model for image classification on a memory-constrained device with just 64 KB of SRAM.
In the first part, we prepared the Zephyr development environment by installing the components required to build and run a Zephyr application on virtual devices with QEMU.
Following the Zephyr installation, our attention shifted to model design. Here, we designed and trained a CNN based on depthwise separable convolution layers, which drastically reduce the number of trainable parameters and the computational demand.
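To see why depthwise separable convolutions shrink the model so much, it helps to count parameters. The following is a minimal sketch with illustrative shapes (a 3x3 kernel, 32 input channels, and 64 output channels, not necessarily the chapter's exact layer sizes):

```python
# Parameter count: standard convolution vs. depthwise separable convolution.
# Shapes below are assumptions for illustration, not the chapter's model.
k, c_in, c_out = 3, 32, 64

# Standard convolution: one k x k x c_in filter per output channel.
standard = k * k * c_in * c_out

# Depthwise separable convolution: one k x k filter per input channel
# (depthwise step), then 1x1 pointwise filters mapping c_in -> c_out.
depthwise_separable = k * k * c_in + c_in * c_out

print(standard)             # 18432
print(depthwise_separable)  # 2336
print(round(standard / depthwise_separable, 1))  # 7.9, i.e. ~8x fewer parameters
```

The savings grow with the number of output channels, since the expensive k x k x c_in x c_out term is replaced by a cheap 1x1 pointwise projection.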
Once the model was trained, we quantized it to 8-bit using the TensorFlow Lite converter and assessed its accuracy on the validation dataset. The evaluation showed that quantizing to 8-bit only marginally reduces the model’s accuracy.
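The 8-bit conversion step can be sketched as follows. This is a minimal, self-contained example using a tiny stand-in model and random calibration data; in practice, the chapter's trained CNN and real samples from the training set would be used instead:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model (assumption for illustration only);
# the chapter's trained CNN would be passed to the converter instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8, 8, 1)),
    tf.keras.layers.SeparableConv2D(4, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Representative dataset: a few calibration batches so the converter
# can estimate activation ranges for full-integer quantization.
def representative_data():
    for _ in range(10):
        yield [np.random.rand(1, 8, 8, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full 8-bit integer quantization, including model inputs/outputs,
# as required for integer-only microcontroller inference.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
print(len(tflite_model))  # size in bytes of the quantized flatbuffer
```

The resulting byte buffer is what gets embedded in the firmware image, so its size directly determines how much of the device's flash the model occupies.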
Finally, we developed the Zephyr application to deploy the TensorFlow Lite quantized model and run it on the virtual device.
TensorFlow Lite for Microcontrollers has...