When performing ML on the edge, you lose some of the luxuries you tend to have when running on a more powerful device (although this is shifting all the time). Here is a list of considerations to keep in mind:
- Model size: Previously, we walked through building a simple linear regression model. The model itself consists of two floats (the bias and weight coefficients), which are of course negligible in terms of memory requirements. But as you dive into the world of deep learning, it's common to find models hundreds of megabytes in size. For example, the VGG16 model, a 16-layer convolutional neural network trained on the ImageNet dataset for image classification and available on Apple's site, is just over 500 megabytes. Currently, Apple allows apps up to 2 gigabytes in size, but asking your user to download such a large file is likely to put many of them off. The sketch below gives a sense of the size gap.
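To make the gap concrete, here is a minimal sketch in Swift. It contrasts the two-parameter regression model from earlier (8 bytes of learned parameters) with a helper that reports the on-disk size of a bundled model file. The file path and the `modelSize(inMegabytesAt:)` helper are assumptions for illustration, not part of any Apple API.

```swift
import Foundation

// The linear regression model from earlier reduces to two Float values
// (a weight and a bias) -- just 8 bytes of learned parameters.
struct LinearRegressionModel {
    let weight: Float
    let bias: Float

    func predict(_ x: Float) -> Float {
        weight * x + bias
    }
}

// Hypothetical helper that reports the on-disk size of a model file in
// megabytes (for example, a downloaded VGG16.mlmodel file).
func modelSize(inMegabytesAt url: URL) -> Double? {
    guard let attributes = try? FileManager.default.attributesOfItem(atPath: url.path),
          let bytes = (attributes[.size] as? NSNumber)?.int64Value else {
        return nil
    }
    return Double(bytes) / (1024 * 1024)
}

// Example usage (the path is an assumption for illustration):
// let url = URL(fileURLWithPath: "/path/to/VGG16.mlmodel")
// print(modelSize(inMegabytesAt: url) ?? "model file not found")
```

Running something like this against a large downloaded model makes the trade-off obvious: the hand-rolled regression model costs essentially nothing to ship, while a deep network can dominate your app's download size.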