Autoencoders (AEs) are feedforward, non-recurrent neural networks that learn to copy their input to their output. An AE works by compressing the input into a lower-dimensional summary, often referred to as the latent space representation, and then attempts to reconstruct the output from that representation. An AE is therefore made up of three parts: an encoder, a latent space representation, and a decoder. The following figure illustrates an AE applied to a sample from the MNIST dataset:
![](https://static.packt-cdn.com/products/9781789807943/graphics/assets/1ec6a118-78cf-4d2c-9747-e56c5f0a31fb.png)
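To make these three parts concrete, the following is a minimal sketch of an AE for flattened MNIST images. It assumes tf.keras is available; the latent dimension of 32 and the other training settings are illustrative choices, not values prescribed by the text:

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 32   # size of the latent space representation (assumed value)
input_dim = 784   # 28 x 28 MNIST images, flattened

# Encoder: compresses the input into the latent space representation
inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(latent_dim, activation="relu")(inputs)

# Decoder: reconstructs the output from the latent space representation
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Load MNIST, flatten, and scale to [0, 1]; the target is the input itself
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, input_dim).astype("float32") / 255.0
x_test = x_test.reshape(-1, input_dim).astype("float32") / 255.0

autoencoder.fit(x_train, x_train,
                epochs=10,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
```

Note that the model is trained with the input images as both the features and the targets, which is exactly the "copy the input to the output" objective described above.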
The encoder and decoder components of an AE are fully connected feedforward networks. The number of neurons in the latent space representation is a hyperparameter that must be set when building the AE. The number of neurons or nodes that is decided...