Hands-On Deep Learning for IoT

Product type: Book
Published: June 2019
Publisher: Packt
ISBN-13: 9781789616132
Pages: 308
Edition: 1st
Authors (2): Dr. Mohammad Abdur Razzaque, Md. Rezaul Karim
Table of Contents (15 chapters)

Preface
1. Section 1: IoT Ecosystems, Deep Learning Techniques, and Frameworks
2. The End-to-End Life Cycle of the IoT
3. Deep Learning Architectures for IoT
4. Section 2: Hands-On Deep Learning Application Development for IoT
5. Image Recognition in IoT
6. Audio/Speech/Voice Recognition in IoT
7. Indoor Localization in IoT
8. Physiological and Psychological State Detection in IoT
9. IoT Security
10. Section 3: Advanced Aspects and Analytics in IoT
11. Predictive Maintenance for IoT
12. Deep Learning in Healthcare IoT
13. What's Next - Wrapping Up and Future Directions
14. Other Books You May Enjoy

Model evaluation

We can evaluate three different aspects of the models:

  • Learning/(re)training time
  • Storage requirement
  • Performance (accuracy)

On a desktop (Intel Xeon CPU E5-1650 v3 @ 3.5 GHz, 32 GB RAM) with GPU support, training the LSTM on the CPU-utilization dataset and the autoencoder on the reduced, layer-wise KDD dataset took a few minutes each. Training the DNN model took a little over an hour, which was expected, as it was trained on a larger dataset (KDD's overall 10% dataset).
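As a minimal sketch of how such training times can be collected with a wall-clock timer in Keras (the LSTM architecture and the randomly generated data below are illustrative placeholders, not the book's exact CPU-utilization setup):

import time
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the CPU-utilization time series:
# 1,000 windows of 20 time steps with a single feature each.
X = np.random.rand(1000, 20, 1).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

# A small illustrative LSTM regressor (not the book's exact model).
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(20, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Measure (re)training time with a simple wall-clock timer.
start = time.time()
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(f"Training took {time.time() - start:.1f} seconds")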

The storage requirement of a model is an essential consideration for resource-constrained IoT devices. The following screenshot presents the storage requirements of the three models we tested for the two use cases:

As shown in the screenshot, the autoencoders required storage on the order of kilobytes; the final stored autoencoder model took only 85 KB, while the LSTM...
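A minimal sketch of how a stored model's on-disk footprint can be checked follows; the small autoencoder architecture, the 41-feature input (chosen to mirror KDD-style records), and the file name are illustrative assumptions, not the book's exact models:

import os
import tensorflow as tf

# A small illustrative autoencoder over 41 input features
# (41 is an assumption mirroring KDD-style records; adjust as needed).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(41,)),
    tf.keras.layers.Dense(41),
])
model.compile(optimizer="adam", loss="mse")

# Save in HDF5 format and report the on-disk size in KB.
model.save("autoencoder_demo.h5")
size_kb = os.path.getsize("autoencoder_demo.h5") / 1024
print(f"Stored model size: {size_kb:.1f} KB")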
