5. Conclusion
In this chapter, we've introduced autoencoders: neural networks that compress input data into a low-dimensional representation in order to efficiently perform structural transformations, such as denoising and colorization. We've laid the foundations for the more advanced topics of GANs and VAEs, which we will cover in later chapters. We've demonstrated how to implement an autoencoder from two building-block models, an encoder and a decoder, and we've seen that extracting the hidden structure of an input distribution is one of the common tasks in AI.
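As a minimal sketch of this encoder-decoder composition (assuming Keras/TensorFlow, flattened 28x28 inputs such as MNIST, and illustrative layer sizes and latent dimension, none of which are prescribed here):

```python
# A minimal autoencoder assembled from separate encoder and decoder models.
# Assumes Keras/TensorFlow and flattened 28x28 inputs (e.g. MNIST);
# layer sizes and latent_dim are illustrative choices.
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

input_dim = 784   # flattened 28x28 image
latent_dim = 2    # size of the latent code

# Encoder: compresses the input into the low-dimensional latent vector
inputs = Input(shape=(input_dim,))
x = Dense(256, activation='relu')(inputs)
latent = Dense(latent_dim)(x)
encoder = Model(inputs, latent, name='encoder')

# Decoder: reconstructs the input from the latent vector
latent_inputs = Input(shape=(latent_dim,))
x = Dense(256, activation='relu')(latent_inputs)
outputs = Dense(input_dim, activation='sigmoid')(x)
decoder = Model(latent_inputs, outputs, name='decoder')

# Autoencoder: the decoder applied to the encoder's output
autoencoder = Model(inputs, decoder(encoder(inputs)), name='autoencoder')
autoencoder.compile(optimizer='adam', loss='mse')

# For denoising, train on corrupted inputs with clean targets, e.g.:
# x_noisy = np.clip(x_train + 0.5 * np.random.normal(size=x_train.shape), 0., 1.)
# autoencoder.fit(x_noisy, x_train, epochs=10, batch_size=128)
```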
Once the latent code has been learned, there are many structural operations that can be performed on the original input distribution. To gain a better understanding of the input distribution, the hidden structure captured by the latent vector can be visualized using a low-dimensional embedding, as we did in this chapter, or through more sophisticated...
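As a sketch of that kind of visualization (assuming the 2-D encoder above and labeled test data such as MNIST's x_test and y_test; these names are illustrative, not fixed by the chapter):

```python
# Project each test image to its 2-D latent vector and color by class label.
# Assumes the `encoder` model defined above and MNIST-like (x_test, y_test).
import matplotlib.pyplot as plt

z = encoder.predict(x_test)            # shape: (num_samples, 2)
plt.scatter(z[:, 0], z[:, 1], c=y_test, s=2, cmap='tab10')
plt.colorbar()
plt.xlabel('z[0]')
plt.ylabel('z[1]')
plt.title('Latent-space embedding of the test set')
plt.show()
```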