Generally, DL models require calculations over an ultra-large number of parameters (in the range of millions to billions), which calls for a powerful computing platform with substantial storage, something IoT devices or platforms do not offer. Fortunately, existing methods and technologies (not used in this book, since we performed the model training on a desktop) can address some of these issues and thus support DL on IoT devices:
- DL network compression: DL networks are generally dense and demand computational power and memory that may not be available on IoT devices, even just to perform inference and/or classification. DL network compression, which converts a dense network into a sparse one, is a potential solution for resource-constrained IoT... (a minimal pruning sketch follows this item).
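To illustrate the idea (this is not the approach used in this book), the sketch below shows magnitude-based weight pruning, one common compression technique: weights whose magnitudes fall below a threshold are zeroed, turning a dense weight matrix into a sparse one that is cheaper to store and evaluate on a constrained device. The function name `magnitude_prune` and the 90% sparsity level are hypothetical choices made only for this example.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero (a sparse layer)."""
    # Magnitude threshold below which weights are treated as unimportant.
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) > threshold, weights, 0.0)

# Example: a dense 512x512 layer reduced to roughly 10% non-zero weights.
dense_layer = np.random.randn(512, 512).astype(np.float32)
sparse_layer = magnitude_prune(dense_layer, sparsity=0.9)
print(f"non-zero weights: {np.count_nonzero(sparse_layer)} of {sparse_layer.size}")
```

In practice the pruned matrix would be stored in a sparse format (for example, compressed sparse row) so that the zeroed weights cost neither memory nor multiply-accumulate operations at inference time.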