Case study: TensorFlow Lite
Given the four challenges defined for TinyML, the TensorFlow team developed a dedicated framework for TinyML called TensorFlow Lite. Let's look at how TensorFlow Lite addresses each of these challenges in turn.
First, to reduce total power consumption, TensorFlow Lite can run the model without retaining the following metadata:
- Layer dependencies
- The computation graph
- Intermediate results
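To make the idea concrete, here is a toy sketch (my own illustration, not TensorFlow Lite internals) of executing a model as a flat, pre-planned list of operations. Because the execution order is fixed ahead of time, the runtime keeps no computation graph or layer-dependency metadata, and intermediate results are overwritten in a reused scratch buffer rather than held:

```python
def relu(xs):
    return [max(0.0, x) for x in xs]

def scale(factor):
    return lambda xs: [factor * x for x in xs]

# Each step is (op, src_buffer, dst_buffer). The plan is a flat list:
# no graph structure or dependency metadata survives to run time.
PLAN = [
    (scale(2.0), 0, 0),
    (relu,       0, 0),
    (scale(0.5), 0, 0),
]

def run(plan, inputs):
    buffers = {0: list(inputs)}          # single reused scratch buffer
    for op, src, dst in plan:
        buffers[dst] = op(buffers[src])  # overwrite: nothing is retained
    return buffers[0]

print(run(PLAN, [-1.0, 3.0]))  # -> [0.0, 3.0]
```

Doing less bookkeeping per inference step is one reason such a runtime can draw less power than a full framework that tracks the graph for training or debugging.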
Second, to avoid unstable-connection issues, TensorFlow Lite removes all unnecessary communication between the server and the devices. Once the model is deployed on a device, normally no further communication with the central server is needed.
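A minimal sketch of this deployment model, assuming TensorFlow is installed: a tiny Keras model (chosen arbitrarily for illustration) is converted to a TensorFlow Lite flatbuffer, and inference then runs entirely in the local interpreter, with no server round-trip:

```python
import numpy as np
import tensorflow as tf

# Build a tiny example model and convert it to a TensorFlow Lite flatbuffer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# The interpreter executes the flatbuffer locally; after deployment,
# no communication with a server is involved in inference.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.ones((1, 8), dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)  # (1, 1)
```

In practice the flatbuffer would be shipped to the device once (e.g. as a `.tflite` file) and loaded with `model_path=` instead of `model_content=`.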
Third, to reduce communication latency, TensorFlow Lite enables faster (real-time) model inference by doing the following:
- Reducing the code footprint
- Directly feeding the data into the model as the data does not...