To leverage cutting-edge deep learning networks on mobile platforms, it becomes extremely important to tune a network's training effectively so that we can do the most with the least resources. The neural network the Google Translate team implemented for OCR is a useful case study in the rules of thumb that keep a network from growing too large.
The following excerpt is from Google's blog post describing this work, available at: https://translate.googleblog.com/2015/07/how-google-translate-squeezes-deep.html:
"We needed to develop a very small neural net, and put severe limits on how much we tried to teach it - in essence, put an upper bound on the density of information it handles. The challenge here was in creating the most effective training data. Since we're generating our own training data, we put...
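To get a feel for why capping a network's size matters, it helps to look at how parameter counts grow with layer width. The sketch below is purely illustrative - the layer shapes and widths are hypothetical and not Google's actual architecture - but it shows how doubling channel and unit counts a couple of times inflates the parameter budget by an order of magnitude, which is exactly the kind of growth a mobile-targeted network must avoid.

```python
# Illustrative parameter-count arithmetic for a small vs. a wider convnet.
# All layer shapes here are hypothetical, chosen only to show the scaling.

def conv_params(in_ch, out_ch, k):
    """Parameters in a k x k convolution: weights plus per-channel biases."""
    return in_ch * out_ch * k * k + out_ch

def dense_params(in_units, out_units):
    """Parameters in a fully connected layer: weights plus biases."""
    return in_units * out_units + out_units

# A deliberately small net: two thin conv layers and a compact classifier.
# Assumes a 28x28 grayscale input pooled down to 7x7 before the dense layers.
small = (
    conv_params(1, 8, 3)            # 1 -> 8 channels
    + conv_params(8, 16, 3)         # 8 -> 16 channels
    + dense_params(16 * 7 * 7, 64)  # flattened 7x7 feature map -> 64 units
    + dense_params(64, 36)          # 26 letters + 10 digits
)

# The same topology with widths multiplied by 4x; the dense layers dominate.
large = (
    conv_params(1, 32, 3)
    + conv_params(32, 64, 3)
    + dense_params(64 * 7 * 7, 256)
    + dense_params(256, 36)
)

print(f"small net: {small:,} parameters")   # ~54K
print(f"large net: {large:,} parameters")   # ~831K, roughly 15x larger
```

Most of the growth comes from the first fully connected layer, whose parameter count is the product of both widths it connects - which is why thinning every layer a little shrinks the whole model a lot.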