Now that we understand the components of TensorFlow Lite, let's look at how a mobile application works with those components to provide a mobile ML solution.
The mobile application uses the TensorFlow Lite model file to run inference on new data. The model file can either be packaged with the mobile application and deployed together with it, or kept separate from the application deployment package. The following diagram depicts the two possible deployment scenarios:
Each deployment option has its pros and cons. In the first case, where the model is bundled with the application, the model file is easier to keep secure because it ships inside the application package, and the approach is more straightforward to implement. However, the application package size increases by the size of the model file, and updating the model requires redeploying the whole application. In the second case, where the model is kept separate, the application package stays small and the model can be updated independently of the application, but the model must be fetched at runtime and is harder to protect.
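The two scenarios can be sketched as a single model-resolution step at application start-up. The sketch below is illustrative, not part of the TensorFlow Lite API: `ensure_model` and the remote URL are hypothetical names, and in a real Android or iOS application the bundled path would come from the app's asset bundle.

```python
import os
import urllib.request

def ensure_model(bundled_path, remote_url, cache_dir):
    """Return a local path to a .tflite model file.

    Case 1: the model ships inside the application package
            (bundled_path exists), so use it directly.
    Case 2: the model is deployed separately, so download it
            once from remote_url and cache it locally.
    """
    if bundled_path and os.path.exists(bundled_path):
        return bundled_path  # case 1: packaged with the app
    cached = os.path.join(cache_dir, "model.tflite")
    if not os.path.exists(cached):
        # case 2: fetch the separately deployed model on first run
        urllib.request.urlretrieve(remote_url, cached)
    return cached
```

The returned path would then be handed to the TensorFlow Lite interpreter. Note how case 2 trades a one-time download (and the need to secure the model in transit and on disk) for a smaller application package and the ability to ship model updates without redeploying the app.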