An autoencoder is a feedforward neural network (FNN) that is trained in an unsupervised manner to reconstruct its high-dimensional input from a lower-dimensional latent encoding. You can think of it as learning an approximation of the identity function (that is, it takes x as input and then predicts x).
Let's start by taking a look at the following diagram, which shows you what an autoencoder looks like:
As you can see, the network is split into two components, an encoder and a decoder, which are mirror images of each other. The two components are connected through a bottleneck layer (sometimes referred to as the latent-space representation, or compression) whose dimensions are much smaller than those of the input. Note that although the network architecture is symmetric, its weights don't necessarily need to be. But why? What does this network learn, and how does it do it? Let's...
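To make the encoder/bottleneck/decoder structure concrete, here is a minimal sketch in NumPy (all names and sizes here are my own illustrative choices, not from the text): an encoder compresses 8-dimensional inputs down to a 2-dimensional bottleneck, and a mirror-image decoder reconstructs the input. The weights are stored separately for each half, so the symmetric architecture does not force the weights themselves to be symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))           # toy "high-dimensional" data

n_in, n_latent = 8, 2                   # bottleneck is much smaller than the input
W_enc = rng.normal(scale=0.1, size=(n_in, n_latent))
b_enc = np.zeros(n_latent)
W_dec = rng.normal(scale=0.1, size=(n_latent, n_in))  # decoder weights are NOT tied to the encoder's
b_dec = np.zeros(n_in)

def forward(x):
    z = np.tanh(x @ W_enc + b_enc)      # bottleneck: the latent-space representation
    x_hat = z @ W_dec + b_dec           # reconstruction of the input
    return z, x_hat

# Train to minimize the mean squared reconstruction error,
# i.e. learn an approximation of the identity function.
lr = 0.05
losses = []
for _ in range(200):
    z, x_hat = forward(X)
    err = x_hat - X                     # gradient of MSE w.r.t. x_hat (up to a constant)
    losses.append((err ** 2).mean())
    # backpropagate through the decoder, then the encoder
    grad_W_dec = z.T @ err / len(X)
    grad_b_dec = err.mean(axis=0)
    dz = (err @ W_dec.T) * (1 - z ** 2) # tanh derivative
    grad_W_enc = X.T @ dz / len(X)
    grad_b_enc = dz.mean(axis=0)
    W_dec -= lr * grad_W_dec
    b_dec -= lr * grad_b_dec
    W_enc -= lr * grad_W_enc
    b_enc -= lr * grad_b_enc

print(losses[0], losses[-1])            # reconstruction error shrinks during training
```

Because the bottleneck has far fewer units than the input, the network cannot simply copy x through; it is forced to learn a compressed code from which the input can be reconstructed.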