Python Deep Learning

Variational autoencoders

To understand VAEs, let's first talk about regular autoencoders. An autoencoder is a feed-forward neural network that tries to reproduce its input. In other words, the target value (label) of an autoencoder is equal to the input data: y_i = x_i, where i is the sample index. We can formally say that the autoencoder tries to learn an identity function (a function that repeats its input). Since our "labels" are just the input data, the autoencoder is an unsupervised algorithm. The following diagram represents an autoencoder:

[Figure: An autoencoder]
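
Because the target equals the input, training an autoencoder is mechanically simple: we fit the network on pairs (x, x). The following is a minimal, hedged sketch in Keras; the 784-unit input (a flattened 28x28 image), the 32-unit bottleneck, and the random placeholder data are illustrative assumptions, not values from the book:

import numpy as np
from tensorflow.keras import layers, models

# A minimal dense autoencoder: 784 inputs -> 32-unit bottleneck -> 784 outputs.
autoencoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(32, activation='relu'),      # bottleneck (internal representation)
    layers.Dense(784, activation='sigmoid'),  # reconstruction of the input
])
autoencoder.compile(optimizer='adam', loss='mse')

# The key point: the target is the input itself, y_i = x_i.
x = np.random.rand(1000, 784).astype('float32')  # placeholder data
autoencoder.fit(x, x, epochs=5, batch_size=64)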

An autoencoder consists of an input layer, a hidden (or bottleneck) layer, and an output layer. Although it's a single network, we can think of it as a virtual composition of two components (see the sketch after the following list):

  • Encoder: Maps the input data to the network's internal representation. For the sake of simplicity, in this example the encoder is a...
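
The chapter text is truncated at this point. As a hedged illustration of the encoder/decoder decomposition just described, the following sketch makes the two virtual components explicit using the Keras functional API; the layer sizes match the assumptions of the previous example:

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(784,))

# Encoder: maps the input data to the network's internal (bottleneck) representation.
code = layers.Dense(32, activation='relu')(inputs)
encoder = models.Model(inputs, code, name='encoder')

# Decoder: maps the internal representation back to the input space.
code_inputs = layers.Input(shape=(32,))
reconstruction = layers.Dense(784, activation='sigmoid')(code_inputs)
decoder = models.Model(code_inputs, reconstruction, name='decoder')

# The full autoencoder is the composition of the two virtual components.
autoencoder = models.Model(inputs, decoder(encoder(inputs)), name='autoencoder')
autoencoder.compile(optimizer='adam', loss='mse')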