We can evaluate three different aspects of the models:
- Learning/(re)training time
- Storage requirement
- Performance (accuracy)
On a desktop (Intel Xeon CPU E5-1650 v3 @ 3.5 GHz and 32 GB RAM) with GPU support, training the LSTM on the CPU-utilization dataset and the autoencoder on the layer-wise (reduced) KDD dataset took a few minutes each. The DNN model on the overall dataset took a little over an hour, which was expected because it was trained on a larger dataset (KDD's overall 10% dataset).
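As a rough illustration of how (re)training time can be measured, the following sketch wraps a Keras-style `fit()` call with a wall-clock timer. The function name `timed_fit` and its arguments are illustrative assumptions, not part of the original code:

```python
import time

def timed_fit(model, x, y, **fit_kwargs):
    """Fit a Keras-style model and report the wall-clock training time.

    `model` is assumed to expose a Keras-like fit() method; for an
    autoencoder, pass the input data as both x and y.
    """
    start = time.perf_counter()
    history = model.fit(x, y, **fit_kwargs)
    minutes = (time.perf_counter() - start) / 60
    print(f"Training took {minutes:.1f} minutes")
    return history
```

For example, `timed_fit(lstm_model, X_train, y_train, epochs=20)` (with your own model and data variables) would report how long the LSTM takes to retrain on the CPU-utilization data.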
The storage requirement of a model is an essential consideration for resource-constrained IoT devices. The following screenshot presents the storage requirements of the three models we tested across the two use cases:
![](https://static.packt-cdn.com/products/9781789616132/graphics/assets/5fc04244-20c1-4be5-b407-4b052d45d4a9.png)
As shown in the screenshot, the autoencoder required storage on the order of kilobytes. The final version of the stored autoencoder model took only 85 KB, LSTM...
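To check the on-disk footprint of a saved model yourself, you can simply inspect the size of the saved file. A minimal sketch is shown below; the file names (`autoencoder.h5`, `lstm.h5`, `dnn.h5`) are assumed placeholders and should be replaced with wherever your models were saved (for example, via `model.save()`):

```python
import os

def model_size_kb(path):
    """Return the on-disk size of a saved model file in kilobytes."""
    return os.path.getsize(path) / 1024

# Hypothetical file names for the saved models; adjust to your own paths.
for name in ("autoencoder.h5", "lstm.h5", "dnn.h5"):
    if os.path.exists(name):
        print(f"{name}: {model_size_kb(name):.0f} KB")
```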