Conclusion
In this chapter, we introduced autoencoders: neural networks that compress input data into low-dimensional codes in order to efficiently perform structural transformations such as denoising and colorization. We've laid the foundations for the more advanced topics of GANs and VAEs, which we will introduce in later chapters, while exploring how to implement autoencoders using Keras. We've demonstrated how to build an autoencoder from two building-block models, an encoder and a decoder, as sketched below. We've also learned that extracting the hidden structure of the input distribution is one of the common tasks in AI.
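As a reminder of the composition pattern, the following is a minimal sketch of an autoencoder assembled from separate encoder and decoder models. The layer sizes, the 784-dimensional input, and the 16-dimensional latent code are illustrative assumptions, not the chapter's exact architecture:

```python
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

input_dim = 784   # e.g. a flattened 28x28 grayscale image (assumed)
latent_dim = 16   # size of the low-dimensional code (assumed)

# Encoder: compresses the input into the latent code
inputs = Input(shape=(input_dim,), name='encoder_input')
latent = Dense(latent_dim, activation='relu', name='latent_vector')(inputs)
encoder = Model(inputs, latent, name='encoder')

# Decoder: reconstructs the input from the latent code
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
outputs = Dense(input_dim, activation='sigmoid', name='decoder_output')(latent_inputs)
decoder = Model(latent_inputs, outputs, name='decoder')

# Autoencoder: the two building blocks chained together
autoencoder = Model(inputs, decoder(encoder(inputs)), name='autoencoder')
autoencoder.compile(loss='mse', optimizer='adam')
```

Keeping the encoder and decoder as standalone models is the design choice that pays off later: the trained encoder can be reused on its own to produce latent codes, and the decoder to generate reconstructions from them.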
Once the latent code has been uncovered, many structural operations can be performed on the original input distribution. To gain a better understanding of the input distribution, the hidden structure in the form of the latent vector can be visualized using a low-dimensional embedding, similar to what we did in this chapter, or through more sophisticated...
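For reference, the following is a sketch of the simple visualization approach: map labeled inputs through a trained encoder and scatter-plot the resulting codes colored by class. It assumes the encoder from the sketch above (after training), MNIST test data as an example, and that plotting the first two latent dimensions is informative, which it is most directly when the code is 2-dimensional as in the chapter's visualization:

```python
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist

# Example data: flattened, normalized MNIST test digits (assumed dataset)
(_, _), (x_test, y_test) = mnist.load_data()
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0

# Map inputs to their latent codes, then plot the first two dimensions,
# colored by class label, to expose the hidden structure
codes = encoder.predict(x_test)
plt.scatter(codes[:, 0], codes[:, 1], c=y_test, s=2, cmap='tab10')
plt.colorbar()
plt.xlabel('latent dimension 0')
plt.ylabel('latent dimension 1')
plt.show()
```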