Deep Learning By Example

You're reading from  Deep Learning By Example

Product type Book
Published in Feb 2018
Publisher Packt
ISBN-13 9781788399906
Pages 450 pages
Edition 1st Edition

Table of Contents (18 Chapters)

Preface
1. Data Science - A Birds' Eye View
2. Data Modeling in Action - The Titanic Example
3. Feature Engineering and Model Complexity – The Titanic Example Revisited
4. Get Up and Running with TensorFlow
5. TensorFlow in Action - Some Basic Examples
6. Deep Feed-forward Neural Networks - Implementing Digit Classification
7. Introduction to Convolutional Neural Networks
8. Object Detection – CIFAR-10 Example
9. Object Detection – Transfer Learning with CNNs
10. Recurrent-Type Neural Networks - Language Modeling
11. Representation Learning - Implementing Word Embeddings
12. Neural Sentiment Analysis
13. Autoencoders – Feature Extraction and Denoising
14. Generative Adversarial Networks
15. Face Generation and Handling Missing Labels
16. Implementing Fish Recognition
17. Other Books You May Enjoy

Examples of autoencoders

In this chapter, we will demonstrate some examples of different variations of autoencoders using the MNIST dataset. As a concrete example, suppose the inputs x are the pixel intensity values of a 28 x 28 image, so each input is a vector of n = 784 values. There are s2 = 392 hidden units in layer L2, and since the output must have the same dimensionality as the input, y ∈ R^784. The input layer therefore has 784 neurons, followed by 392 neurons in the middle layer L2, so the middle layer holds a lower-dimensional representation, a compressed version of the input. The network then feeds this compressed representation a(L2) ∈ R^392 to the second part of the network, which tries to reconstruct the 784 input pixels from this compressed version.
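To make these dimensions concrete, here is a minimal sketch of the 784 → 392 → 784 architecture described above, written in plain NumPy rather than TensorFlow. The weight names (W1, b1, W2, b2) and the sigmoid activation are illustrative assumptions; training is omitted, so the weights are just random initial values:

```python
import numpy as np

rng = np.random.default_rng(0)

n_input = 28 * 28   # 784 input pixels per MNIST image
n_hidden = 392      # s2 = 392 units in the middle layer L2

# Randomly initialized parameters (no training performed here).
W1 = rng.normal(0, 0.01, size=(n_input, n_hidden))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.01, size=(n_hidden, n_input))   # decoder weights
b2 = np.zeros(n_input)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # a_L2 is the compressed representation, a vector in R^392
    a_L2 = sigmoid(x @ W1 + b1)
    # The decoder maps it back to R^784, attempting to reconstruct the input.
    y = sigmoid(a_L2 @ W2 + b2)
    return a_L2, y

x = rng.random(n_input)      # one fake flattened 28 x 28 image
a_L2, y = forward(x)
print(a_L2.shape, y.shape)   # (392,) (784,)
```

The key point is visible in the shapes: every input vector of 784 values is squeezed through a 392-dimensional bottleneck before the decoder expands it back to 784 values.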

Autoencoders rely on the fact that the input...
