TensorFlow Lite
TensorFlow Lite is a lightweight platform designed by the TensorFlow team. It is focused on mobile and embedded devices such as Android, iOS, and Raspberry Pi. Its main goal is to enable machine learning inference directly on the device, with a strong emphasis on three characteristics: (1) a small binary and model size to save memory, (2) low energy consumption to preserve battery life, and (3) low latency for responsive inference. Battery and memory are, of course, two scarce resources on mobile and embedded devices. To achieve these goals, TensorFlow Lite relies on a number of techniques such as Quantization, FlatBuffers, Mobile interpreter, and Mobile converter, which we are going to review briefly in the following sections.
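As a quick illustration of the converter and interpreter mentioned above, the following minimal sketch (assuming a TensorFlow 2.x installation, and using a small toy Keras model purely as a stand-in for a real trained network) converts a model to the TensorFlow Lite FlatBuffer format and then runs a single inference with the TFLite interpreter:

import numpy as np
import tensorflow as tf

# A small toy Keras model; in practice this would be your trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Mobile converter: serialize the model into the TFLite FlatBuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Mobile interpreter: load the FlatBuffer and run on-device-style inference.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sample = np.random.rand(1, 4).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))

On a mobile or embedded target the resulting tflite_model bytes would typically be written to a .tflite file and shipped with the application, where the interpreter loads it at runtime.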
Quantization
Quantization refers to a family of techniques that constrain an input made of continuous values (such as real numbers) to a discrete set (such as integers). The key idea is to reduce the space occupancy of Deep...