- Is overfitting a bad thing for an autoencoder?
Actually, no. You want the autoencoder to overfit! That is, you want it to replicate the input data exactly in the output. However, there is a caveat: your dataset must be very large compared to the size of the model; otherwise, the model will simply memorize the training data, and that memorization will prevent it from generalizing to unseen data.
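A minimal sketch of this failure mode, using the fact that the optimal *linear* autoencoder is equivalent to PCA (the dataset sizes and latent size here are hypothetical choices for illustration): when the model has enough capacity relative to a tiny dataset, it reconstructs the training samples perfectly yet reconstructs unseen samples poorly.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5, 10))    # tiny dataset: 5 samples, 10 features
X_test = rng.normal(size=(100, 10))   # unseen data from the same distribution

# The optimal linear autoencoder is equivalent to PCA: encode by projecting
# onto the top-k right singular vectors of the training data, decode by
# projecting back.
k = 5                                  # latent size >= rank of the tiny dataset
_, _, Vt = np.linalg.svd(X_train, full_matrices=False)
V = Vt[:k].T                           # encoder weights (10 x 5)

def reconstruction_error(X):
    X_hat = (X @ V) @ V.T              # encode, then decode
    return np.mean((X - X_hat) ** 2)

train_err = reconstruction_error(X_train)  # ~0: the model memorized the data
test_err = reconstruction_error(X_test)    # much larger: no generalization
```

With only 5 training samples and a 5-dimensional latent space, the training error is essentially zero while the test error stays large; with a dataset much larger than the model, the same architecture is forced to learn structure instead of individual samples.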
- Why did we use two neurons in the encoder's last layer?
For visualization purposes only. With two neurons, the encoder produces a two-dimensional latent space, so every input can be plotted directly as a point in the plane. In the next chapter, we will use other configurations that do not necessarily have a two-dimensional latent space.
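As a sketch of the idea (the data, layer sizes, and training hyperparameters below are illustrative assumptions, and a simple linear autoencoder stands in for the full model): a bottleneck of two neurons maps every sample to a 2-D point, which is exactly what makes the latent space easy to scatter-plot.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))          # hypothetical dataset: 500 samples, 10 features

# Encoder and decoder weights; the bottleneck has 2 neurons -> 2-D latent space
W_enc = rng.normal(scale=0.1, size=(10, 2))
W_dec = rng.normal(scale=0.1, size=(2, 10))

lr = 0.01
for _ in range(200):
    Z = X @ W_enc                        # encode: project each sample to 2-D
    X_hat = Z @ W_dec                    # decode: reconstruct the input
    err = X_hat - X                      # reconstruction error
    # gradient descent on the mean squared reconstruction loss
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

latent = X @ W_enc                       # each row is a 2-D point: trivially plottable
```

After training, `latent` has shape `(500, 2)`, so the whole dataset can be passed straight to a 2-D scatter plot; with a wider bottleneck you would first need a dimensionality-reduction step before visualizing.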
- What is so cool about autoencoders again?
They are simple neural models that learn without a teacher (i.e., unsupervised). They are not biased toward learning specific labels (classes). They learn about the world of data through repeated observation, aiming to learn the...