As you saw in Chapter 2, Creating Models Using FastText Command Line, you can create a compressed fastText model from a trained model using a command similar to this one:
$ ./fasttext quantize -output <model prefix> -input <training file> -qnorm -retrain -epoch <number of epochs> -cutoff <number of words to consider>
In Chapter 4, Sentence Classification in FastText, we also revisited compressed models and saw how compression is achieved without much loss in performance.
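To build intuition for why quantization shrinks a model so dramatically, here is a simplified sketch in plain Python. Note the hedge: the real fastText quantize command uses product quantization together with the vocabulary cutoff and optional retraining shown above, not the naive scalar quantization below; this toy version only illustrates the core trade of 32-bit floats for 8-bit codes.

```python
# Toy illustration of quantization (NOT fastText's actual algorithm, which
# uses product quantization plus a vocabulary cutoff): replace each 32-bit
# float in a vector with an 8-bit code over the vector's value range.
import struct

def quantize(vector):
    """Map each float to an unsigned 8-bit code spanning the vector's range."""
    lo, hi = min(vector), max(vector)
    scale = (hi - lo) / 255 or 1.0  # avoid division by zero for constant vectors
    codes = [round((x - lo) / scale) for x in vector]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Recover approximate floats from the 8-bit codes."""
    return [lo + c * scale for c in codes]

vector = [0.12, -0.58, 0.33, 0.91, -0.07, 0.44]
codes, lo, scale = quantize(vector)
restored = dequantize(codes, lo, scale)

full_size = len(vector) * struct.calcsize("f")  # 4 bytes per float32
quant_size = len(codes)                         # 1 byte per 8-bit code
print(full_size, quant_size)                    # 24 vs 6: a 4x reduction
max_err = max(abs(a - b) for a, b in zip(vector, restored))
print(max_err < 0.01)                           # reconstruction error stays small
```

Applied across every row of an embedding matrix, and combined with dropping rare words (the role of the `-cutoff` flag), this is how a multi-hundred-megabyte `.bin` model can become a `.ftz` file small enough to ship inside a mobile app.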
This also enables you to deploy models on smaller devices. One of the first questions that comes to mind is whether such a compressed model file can be packaged with and deployed in an Android app.
In this section, I will put in place all the requirements and dependencies that should enable you to deploy an Android fastText application...