Auto-encoders (AEs) are a special type of encoder-decoders. As shown in Figure 6-1, their input and target domains are the same: their goal is to encode and then decode images without degrading their quality, despite the bottleneck formed by their lower-dimensional latent space. The inputs are reduced to a compressed representation (feature vectors). If an original input is needed later, it can be reconstructed from this compressed representation by the decoder.
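The bottleneck idea can be sketched with a toy linear model (a minimal illustration, not code from this book; the shapes and weights are arbitrary assumptions):

```python
import numpy as np

# Toy linear auto-encoder: 8-dimensional inputs squeezed through a
# 3-dimensional latent space (the bottleneck), then reconstructed.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))        # batch of 5 input vectors

W_enc = rng.normal(size=(8, 3))    # encoder weights (input -> latent)
W_dec = np.linalg.pinv(W_enc)      # decoder weights (latent -> input),
                                   # here simply the pseudo-inverse

codes = x @ W_enc                  # compressed representations (feature vectors)
x_hat = codes @ W_dec              # reconstructions decoded from the codes

print(codes.shape)                 # (5, 3): the lower-dimensional latent space
print(x_hat.shape)                 # (5, 8): same shape as the inputs
```

Because the codes have only 3 dimensions, the reconstruction is a rank-3 approximation of the 8-dimensional inputs, which is exactly why the bottleneck makes lossless reconstruction impossible in general and why training must minimize the reconstruction error.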
JPEG tools can thus be regarded as AEs, as their goal is to encode images and then decode them back without losing too much quality. The distance between the input and output data is the typical loss minimized by auto-encoding algorithms. For images, this distance can be computed as the cross-entropy loss, or as the L1/L2 loss (the Manhattan and Euclidean distances, respectively) between the input images and the reconstructed images (as illustrated in Chapter 3, Modern Neural Networks...
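The reconstruction losses mentioned above can be written out directly; the arrays below are hypothetical pixel values in [0, 1] (a sketch, not code from this book):

```python
import numpy as np

# Hypothetical flattened "input images" and their "reconstructions":
x     = np.array([[0.2, 0.8, 0.5], [0.9, 0.1, 0.4]])
x_hat = np.array([[0.25, 0.7, 0.5], [0.8, 0.2, 0.35]])

l1 = np.mean(np.abs(x - x_hat))    # L1 loss (mean Manhattan distance)
l2 = np.mean((x - x_hat) ** 2)     # L2 loss (mean squared Euclidean distance)

eps = 1e-7                         # avoid log(0) for saturated pixels
bce = -np.mean(x * np.log(x_hat + eps)
               + (1 - x) * np.log(1 - x_hat + eps))  # per-pixel cross-entropy

print(l1, l2, bce)
```

All three losses reach their minimum when `x_hat` equals `x`, so minimizing any of them pushes the decoder's output toward the original input; the cross-entropy variant assumes pixel values normalized to [0, 1].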