In this chapter, we implemented a family of optimizing networks called autoencoders. An autoencoder is essentially a data-compression network model: the encoder maps a given input to a representation of smaller dimension, and the decoder then reconstructs the input from that encoded version. All the autoencoders we implemented consist of an encoding part and a decoding part.
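To recap the encoder/decoder structure, the following is a minimal, illustrative sketch of a fully connected autoencoder written with tf.keras; the layer sizes, activations, and loss are assumptions chosen for the example, not the chapter's exact code.

import tensorflow as tf

# Illustrative dimensions, e.g. flattened 28x28 images compressed to a 32-unit code.
input_dim, code_dim = 784, 32

inputs = tf.keras.Input(shape=(input_dim,))
# Encoder: map the input to a lower-dimensional code.
code = tf.keras.layers.Dense(code_dim, activation="relu")(inputs)
# Decoder: reconstruct the input from the code.
reconstruction = tf.keras.layers.Dense(input_dim, activation="sigmoid")(code)

autoencoder = tf.keras.Model(inputs, reconstruction)
autoencoder.compile(optimizer="adam", loss="mse")
# The training targets are the inputs themselves:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)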
We also looked at how to improve an autoencoder's performance by introducing noise during network training, which gave us a denoising autoencoder. Finally, we applied the concepts of convolutional neural networks introduced in Chapter 4, TensorFlow on a Convolutional Neural Network, to implement convolutional autoencoders.
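The denoising idea can be summarized in a few lines: corrupt the inputs with noise and train the network to recover the clean versions. The sketch below is illustrative rather than the chapter's exact code; x_clean stands in for the training images (values in [0, 1]) and is a random placeholder here so the snippet runs on its own, and the noise level is an assumed value.

import numpy as np

# Placeholder for the clean training images (e.g. flattened MNIST digits).
x_clean = np.random.rand(1000, 784).astype("float32")

noise_factor = 0.3                     # illustrative noise level
x_noisy = x_clean + noise_factor * np.random.normal(size=x_clean.shape)
x_noisy = np.clip(x_noisy, 0.0, 1.0).astype("float32")

# Noisy inputs, clean targets: the network learns to remove the noise.
# Reusing the autoencoder model from the previous sketch:
# autoencoder.fit(x_noisy, x_clean, epochs=10, batch_size=256)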
In the next chapter, we'll examine Recurrent Neural Networks (RNNs). We will start by describing the basic principles of these networks, and then we'll implement some interesting examples...