What this book covers
Chapter 1, Getting Ready to Unlock ML on Microcontrollers, provides an overview of tinyML, presenting the opportunities and challenges of bringing machine learning (ML) to extremely low-power microcontrollers. This chapter focuses on the fundamental elements behind ML, power consumption, and microcontrollers that make this technology different from conventional ML in the cloud, on desktops, or even on smartphones.
Chapter 2, Unleashing Your Creativity with Microcontrollers, presents recipes covering the relevant microcontroller programming basics. We will deal with code debugging and learn how to transmit data to the Arduino serial monitor. The transmitted data will be captured in a log file and uploaded to our cloud storage on Google Drive. Afterward, we will delve into programming the GPIO peripheral with the Arm Mbed API and use a solderless breadboard to connect external components, such as LEDs and push-buttons.
Chapter 3, Building a Weather Station with TensorFlow Lite for Microcontrollers, teaches us how to implement a simple weather station with ML to predict the occurrence of snowfall based on the temperature and humidity of the last three hours. In the first part, we will focus on dataset preparation and show how to acquire historical weather data from WorldWeatherOnline. After preparing the dataset, we will see how to train a neural network with TensorFlow and quantize the model to 8-bit with TensorFlow Lite. In the last part, we will deploy the model on the Arduino Nano 33 BLE Sense and Raspberry Pi Pico with TensorFlow Lite for Microcontrollers.
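To give a flavor of what 8-bit quantization involves, the following pure-Python sketch (independent of TensorFlow) illustrates the affine scale/zero-point mapping between floating-point values and int8 that TensorFlow Lite's quantization scheme is built on. The scale and zero-point values here are made up for illustration; in practice, TensorFlow Lite derives them from the value ranges observed in a representative dataset.

```python
# Affine (asymmetric) quantization: real_value ~= scale * (q - zero_point)
def quantize(x, scale, zero_point):
    """Map a float to an int8 value, saturating at the int8 limits."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Map an int8 value back to an approximate float."""
    return scale * (q - zero_point)

# Hypothetical parameters covering temperatures in roughly [-20, 40] degrees C
scale, zero_point = 60 / 255, -43

t = 21.5
q = quantize(t, scale, zero_point)
t_restored = dequantize(q, scale, zero_point)
print(q, round(t_restored, 2))  # the restored value differs by at most one step (scale)
```

The rounding error introduced by this mapping is bounded by the scale, which is why 8-bit models typically lose only a little accuracy while shrinking memory usage by 4x compared to float32.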
Chapter 4, Using Edge Impulse and the Arduino Nano to Control LEDs with Voice Commands, shows how to develop an end-to-end keyword spotting (KWS) application with Edge Impulse and the Arduino Nano 33 BLE Sense board. The chapter will begin with dataset preparation, showing how to acquire audio data with a mobile phone and the built-in microphone on the Arduino Nano. Next, we will design a model based on the popular Mel Filterbank Energy (MFE) features for speech recognition. In these recipes, we will show how to extract these features from audio samples, train the ML model, and optimize performance with the Edge Impulse EON Tuner. At the end of the chapter, we will concentrate on deploying the KWS application.
Chapter 5, Recognizing Music Genres with TensorFlow and the Raspberry Pi Pico – Part 1, is the first part of a project to recognize three music genres from recordings obtained with a microphone connected to Raspberry Pi Pico. The music genres we will classify are disco, jazz, and metal. Since the project offers many learning opportunities, it is split into two chapters to give as much exposure to the technical aspects as possible. Here, we will focus on the dataset preparation and the analysis of the feature extraction technique employed for classifying music genres: the Mel Frequency Cepstral Coefficients (MFCCs).
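To hint at the signal processing behind MFCCs, the following minimal Python sketch shows the conversion between frequency in Hz and the mel scale, which underpins the mel filterbank at the heart of MFCC extraction. The constants are the common HTK-style 2595/700 variant; exact values can differ between implementations.

```python
import math

def hz_to_mel(f_hz):
    """Convert a frequency in Hz to the mel scale (HTK-style formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse mapping: mel back to Hz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

# The mel scale is roughly linear below 1 kHz and logarithmic above,
# mirroring how human hearing resolves pitch: 1000 Hz maps to ~1000 mel.
print(round(hz_to_mel(1000.0), 1))
```

Spacing the filterbank evenly in mel rather than in Hz concentrates frequency resolution where our ears have it, which is what makes these features effective for audio classification tasks such as music genre recognition.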
Chapter 6, Recognizing Music Genres with TensorFlow and Raspberry Pi Pico – Part 2, is the continuation of Chapter 5 and discusses how the target device influences the implementation of the MFCC feature extraction. We will start our discussion by tailoring the MFCC implementation to Raspberry Pi Pico.
Here, we will learn how fixed-point arithmetic can help minimize latency and see how the CMSIS-DSP library provides tremendous support for employing this limited numerical precision in feature extraction. After reimplementing the MFCC extraction using fixed-point arithmetic, we will design an ML model capable of recognizing music genres with a Long Short-Term Memory (LSTM) recurrent neural network (RNN). Finally, we will test the model's accuracy on the test dataset and deploy a music genre classification application on Raspberry Pi Pico with the help of TensorFlow Lite for Microcontrollers.
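As a taste of how fixed-point arithmetic works, the following pure-Python sketch mimics a Q15 multiplication, the 16-bit format supported throughout CMSIS-DSP. On the actual device this arithmetic would be written in C using the library's intrinsics; the helper names here are illustrative, not CMSIS-DSP API.

```python
Q15_ONE = 1 << 15  # 32768: the scaling factor of the Q15 format

def float_to_q15(x):
    """Encode a float in [-1, 1) as a 16-bit Q15 integer, saturating to int16."""
    q = round(x * Q15_ONE)
    return max(-32768, min(32767, q))

def q15_to_float(q):
    """Decode a Q15 integer back to a float."""
    return q / Q15_ONE

def q15_mul(a, b):
    """Multiply two Q15 numbers: the 32-bit product is shifted back to Q15."""
    return (a * b) >> 15

a = float_to_q15(0.5)   # 16384
b = float_to_q15(0.25)  # 8192
print(q15_to_float(q15_mul(a, b)))  # 0.125
```

Because every operation is an integer multiply and shift, a Cortex-M CPU without a floating-point unit can execute it in a handful of cycles, which is precisely why Q15 pays off for on-device feature extraction.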
Chapter 7, Detecting Objects with Edge Impulse using FOMO on the Raspberry Pi Pico, showcases the deployment of an object detection application on microcontrollers using Edge Impulse and the Faster Objects, More Objects (FOMO) ML algorithm. The chapter will begin with dataset preparation, demonstrating how to acquire images with a webcam and label them in Edge Impulse. Next, we will design an ML model based on the FOMO algorithm. In this part, we will explore the architectural features of this novel ML solution that allows us to deploy object detection on highly constrained devices. Subsequently, we will test the model using the Edge Impulse Live classification tool and then on the Raspberry Pi Pico.
Chapter 8, Classifying Desk Objects with TensorFlow and the Arduino Nano, demonstrates the benefit of adding sight to our tiny devices by classifying two desk objects with the OV7670 camera module in conjunction with the Arduino Nano 33 BLE Sense board. In the first part, we will learn how to acquire images from the OV7670 camera module. Then, we will focus on the model design, applying transfer learning with the Keras API to recognize two objects we typically find on a desk: a mug and a book. Finally, we will deploy the quantized TensorFlow Lite model on an Arduino Nano 33 BLE Sense with the help of TensorFlow Lite for Microcontrollers.
Chapter 9, Building a Gesture-Based Interface for YouTube Playback with Edge Impulse and the Raspberry Pi Pico, teaches us how to use accelerometer measurements with ML to recognize three hand gestures with Raspberry Pi Pico. These recognized gestures will then be used to play/pause, mute/unmute, and change YouTube videos on our PC. The development of this project will start by acquiring the accelerometer data to build the gesture recognition dataset. In this part, we will learn how to interface with the I2C protocol and use the Edge Impulse data forwarder tool. Next, we will focus on the Impulse design, where we will build a spectral-features-based feed-forward neural network for gesture recognition. Finally, we will deploy the model on the Raspberry Pi Pico and implement a Python script with the PyAutoGUI library to build a touchless interface for YouTube video playback.
Chapter 10, Deploying a CIFAR-10 Model for Memory-Constrained Devices with the Zephyr OS on QEMU, demonstrates how to build an image classification application with TensorFlow Lite for Microcontrollers for an emulated Arm Cortex-M3 microcontroller. To accomplish our task, we will start by installing the Zephyr OS, the primary framework used in this chapter. Next, we will design a tiny quantized CIFAR-10 model with TensorFlow. This model will be capable of running on a microcontroller with only 256 KB of program memory and 64 KB of RAM. Ultimately, we will deploy an image classification application on an emulated Arm Cortex-M3 microcontroller through Quick Emulator (QEMU).
Chapter 11, Running ML Models on Arduino and the Arm Ethos-U55 microNPU Using Apache TVM, explores how to leverage Apache TVM to deploy a quantized CIFAR-10 TensorFlow Lite model in various scenarios. After introducing the Arduino CLI, we will present TVM by showing how to generate C code from an ML model and how to run it on the machine hosting the Colab environment. In this chapter, we will also discuss the ahead-of-time (AoT) executor, a crucial feature of TVM that can help reduce the program memory usage of the final application. Then, we will delve into running the model on the Arduino Nano 33 BLE Sense and Raspberry Pi Pico and discuss how to compile a sketch from the code generated by TVM. Finally, we will explore the model deployment on a Micro-Neural Processing Unit (microNPU).
Chapter 12, Enabling Compelling tinyML Solutions with On-Device Learning and scikit-learn on the Arduino Nano and Raspberry Pi Pico, aims to answer three questions you might be pondering as you take your tinyML projects to the next level. The first question concerns the feasibility of training models directly on microcontrollers. In this part, we will discuss the backpropagation algorithm for training a shallow neural network. We will also show how to use the CMSIS-DSP library to accelerate its implementation on any microcontroller with an Arm Cortex-M CPU. After discussing on-device learning, we will tackle another problem: deploying scikit-learn models to microcontrollers. In this second part, we will demonstrate how to deploy generic ML algorithms trained with scikit-learn using the emlearn open-source project. The final question we will answer concerns powering microcontrollers with batteries.
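To make the on-device learning idea concrete, here is a minimal pure-Python sketch of one gradient-descent step for a single sigmoid neuron, the core computation that backpropagation repeats layer by layer. The data, weights, and learning rate are made up for illustration; on a real Cortex-M device the same arithmetic would be written in C, with CMSIS-DSP accelerating the dot products.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(w, b, x, target, lr=0.5):
    """One gradient-descent step on a single sigmoid neuron
    with mean-squared-error loss: L = 0.5 * (y - target)^2."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b      # forward pass
    y = sigmoid(z)
    dz = (y - target) * y * (1.0 - y)                 # backward pass: dL/dz
    w = [wi - lr * dz * xi for wi, xi in zip(w, x)]   # weight update
    b = b - lr * dz                                   # bias update
    return w, b, 0.5 * (y - target) ** 2

# Repeating the step drives the loss toward zero on a toy example
w, b = [0.1, -0.2], 0.0
for _ in range(500):
    w, b, loss = train_step(w, b, x=[1.0, 0.5], target=1.0)
print(f"loss after training: {loss:.5f}")
```

Nothing here requires more than multiply-accumulate operations and an exponential, which is why even a shallow network like this can be trained within the compute and memory budget of a microcontroller.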