Getting ready
The main advantage we have found in every project developed with tflite-micro is code portability: regardless of the target device, model inference can be executed (and accelerated) with almost the same application code, as the following pseudocode illustrates:
model = load_model(tflite_model)
model.allocate_memory()
model.invoke()
In the preceding code snippet, we do the following:
- Load the model at runtime with load_model()
- Allocate the memory required for the model inference with allocate_memory()
- Invoke the model inference with invoke()
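In real tflite-micro application code, these three steps map onto the C++ API through the tflite::MicroInterpreter class. The following is a minimal sketch rather than a definitive implementation: the model_data array name, the 10 KB arena size, the fully connected and softmax operators, and the placeholder input value are all assumptions you would adapt to your own model, and the MicroInterpreter constructor arguments have varied slightly across library versions.

#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// The trained model, serialized as a C array (for example, with xxd).
extern const unsigned char model_data[];

// Scratch memory for all tensors; the required size is model-dependent.
constexpr int kTensorArenaSize = 10 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

int main() {
  // 1. Load the model: map the flatbuffer in place, without copying it.
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators this model actually uses.
  tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  tflite::MicroInterpreter interpreter(model, resolver,
                                       tensor_arena, kTensorArenaSize);

  // 2. Allocate the memory required for inference from the arena.
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return 1;
  }

  // 3. Invoke the inference after filling the input tensor.
  TfLiteTensor* input = interpreter.input(0);
  input->data.f[0] = 0.5f;  // placeholder input value
  if (interpreter.Invoke() != kTfLiteOk) {
    return 1;
  }

  TfLiteTensor* output = interpreter.output(0);
  return output->data.f[0] > 0.5f ? 0 : 1;
}

Note that tflite-micro performs no dynamic memory allocation: all tensor memory is carved out of the statically declared arena, which is why allocating memory appears as an explicit step in the pseudocode.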
When writing the tflite-micro application code, prior knowledge of the target microcontroller is not strictly necessary because the software stack takes advantage of vendor-specific optimized operator libraries (performance libraries) to execute the model efficiently. As a result, the selection of the appropriate set of optimized operators happens transparently at build time, with no changes to the application code.
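For example, on Arm Cortex-M microcontrollers the CMSIS-NN library provides such optimized kernels. The following is a sketch of how this choice is typically expressed at build time with the tflite-micro Makefile; the exact flag names and targets shown here may differ between library versions, so treat them as illustrative.

# Build the tflite-micro static library with CMSIS-NN optimized kernels
# (flags are illustrative; check your tflite-micro version's Makefile).
make -f tensorflow/lite/micro/tools/make/Makefile \
  TARGET=cortex_m_generic \
  TARGET_ARCH=cortex-m4 \
  OPTIMIZED_KERNEL_DIR=cmsis_nn \
  microlite

The application code itself is unchanged; only the kernel implementations linked into the binary differ.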