Summary
Autoencoders are a fundamental method for achieving representation learning across data modalities. Think of the architecture as a shell into which you can fit a variety of other neural network components, allowing you to ingest data of different modalities or to benefit from more advanced building blocks.
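To recap how that shell might look in practice, here is a minimal sketch (assuming PyTorch; the Autoencoder class, layer sizes, and dummy data are illustrative, not the chapter's exact implementation):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """A generic autoencoder shell: the encoder and decoder are passed in
    as arbitrary modules, so they can be MLPs, CNNs, RNNs, and so on."""
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, x):
        z = self.encoder(x)     # compress the input into a latent representation
        return self.decoder(z)  # reconstruct the input from the latent code

# Example: a fully connected encoder/decoder pair for flattened 28x28 images
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
decoder = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())
model = Autoencoder(encoder, decoder)

x = torch.rand(16, 784)                     # a batch of dummy inputs
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
```

Because the encoder and decoder are plugged in from the outside, swapping the linear layers for convolutional or recurrent blocks changes the data modality the model handles without touching the surrounding training code.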
However, note that autoencoders are not the only way to learn representative features. They also have many further applications, most of which revolve around training the same architecture with different objectives. Two such adaptations that were briefly introduced in this chapter, denoising autoencoders and variational autoencoders, will be covered properly in Chapter 9, Exploring Unsupervised Deep Learning. Now, let's shift gears again and discover the model family of transformers!