Latent spaces, as we defined them in Chapter 7, Autoencoders, are very important in DL because they can lead to powerful decision-making systems built on rich latent representations. And, once again, what makes the latent spaces produced by autoencoders (and other unsupervised models) rich is that their representations are not biased toward particular labels.
In Chapter 7, Autoencoders, we explored the MNIST dataset, which is a standard dataset in DL, and showed that we can easily find very good latent representations with as few as four dense layers in the encoder and eight layers in the entire autoencoder model; a sketch of such a model follows below. In the next section, we will take on a much more difficult dataset known as CIFAR-10, and then we will return to the latent representation of the IMDB dataset, which we touched on briefly in the previous sections of this chapter.
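As a refresher, the following is a minimal sketch of that kind of fully connected autoencoder on MNIST: four dense layers in the encoder and four in the decoder, eight in total. The specific layer sizes (256, 128, 64) and the two-dimensional latent code are assumptions chosen here for illustration and easy visualization, not necessarily the exact configuration used in Chapter 7.

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Load and flatten MNIST; scale pixel values to [0, 1]
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0

latent_dim = 2  # assumed latent dimensionality, handy for 2D plots

# Encoder: four dense layers compressing 784 pixels down to the latent code
inputs = Input(shape=(784,))
e = Dense(256, activation='relu')(inputs)
e = Dense(128, activation='relu')(e)
e = Dense(64, activation='relu')(e)
latent = Dense(latent_dim, activation='linear')(e)

# Decoder: four dense layers mirroring the encoder back to 784 pixels
d = Dense(64, activation='relu')(latent)
d = Dense(128, activation='relu')(d)
d = Dense(256, activation='relu')(d)
outputs = Dense(784, activation='sigmoid')(d)

autoencoder = Model(inputs, outputs)
encoder = Model(inputs, latent)  # used to inspect the learned latent space

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, epochs=20, batch_size=256,
                validation_data=(x_test, x_test))

# Project the test digits into the latent space
z = encoder.predict(x_test)
print(z.shape)  # (10000, 2)

Note that the model is trained to reconstruct its own input, so no labels are used at any point; this is exactly why the resulting latent space is not biased toward particular classes.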
CIFAR-10
In 2009, the Canadian Institute for Advanced...