In the last chapter, we submerged ourselves in the world of autoencoding neural networks. We saw how these models can be used to estimate parameterized functions capable of reconstructing given inputs, where the inputs themselves serve as the target outputs. While prima facie this may seem trivial, we now know that this manner of self-supervised encoding has several theoretical and practical implications.
In fact, from a machine learning (ML) perspective, the ability to map a connected set of points in a higher-dimensional space onto a lower-dimensional space (that is, manifold learning) has several advantages, ranging from more compact data storage to lower memory consumption. Practically speaking, this allows us to discover ideal coding schemes for different types of data, or to perform dimensionality reduction on that data, for use cases such as Principal Component Analysis (PCA).
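To make this concrete, the following is a minimal sketch of dimensionality reduction with an undercomplete autoencoder, assuming a TensorFlow/Keras workflow; the layer sizes, the synthetic data, and the 2-dimensional bottleneck are illustrative choices rather than anything prescribed by the preceding chapter:

```python
# A minimal sketch (assuming TensorFlow/Keras is installed): an undercomplete
# autoencoder that compresses 64-dimensional inputs into a 2-dimensional code,
# that is, nonlinear dimensionality reduction. All sizes are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(64,))
encoded = layers.Dense(2, activation="relu")(inputs)       # bottleneck: the learned low-dimensional code
decoded = layers.Dense(64, activation="sigmoid")(encoded)  # reconstruction of the input

autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)  # reuse the encoder half for dimensionality reduction

autoencoder.compile(optimizer="adam", loss="mse")

# Self-supervised training: the inputs double as the target outputs.
x = np.random.rand(1000, 64).astype("float32")
autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)

codes = encoder.predict(x)  # shape (1000, 2): points on the learned manifold
```

Notably, if the activations are made linear and the loss is mean squared error, the subspace spanned by the bottleneck coincides with the one found by PCA, which is what makes the comparison to PCA more than an analogy.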