A little more than a decade ago, the main tool for dimensionality reduction with neural networks was the Kohonen map, or self-organizing map (SOM). These were neural networks that mapped data onto a discrete, one-dimensional embedded space. Since then, faster computers have made it possible to use deep learning to create embedded spaces.
The trick is to have an intermediate layer with fewer nodes than the input layer, and an output layer that must reproduce the input. The values on this intermediate layer give us the coordinates of each sample in the embedded space.
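As a concrete illustration, here is a minimal sketch of such a network, assuming Keras (tf.keras); the layer sizes (784 inputs, 2 embedded dimensions) are arbitrary values chosen for the example, not taken from the text. This bottleneck architecture is commonly called an autoencoder.

```python
from tensorflow import keras

input_dim = 784   # size of the input layer (e.g., flattened 28x28 images)
embed_dim = 2     # size of the intermediate (embedded) layer

inputs = keras.Input(shape=(input_dim,))
# Intermediate layer with fewer nodes than the input: the bottleneck.
# No activation is specified, so this layer is linear (the case
# discussed next).
embedded = keras.layers.Dense(embed_dim, name="embedding")(inputs)
# Output layer that must reproduce the input.
outputs = keras.layers.Dense(input_dim)(embedded)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Train the network to reproduce its own input:
#   autoencoder.fit(x, x, epochs=..., batch_size=...)
# The embedded coordinates then come from a model that stops at the
# bottleneck layer:
encoder = keras.Model(inputs, embedded)
# coords = encoder.predict(x)   # points in the embedded space
```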
If we use regular dense layers without a nonlinear activation function, the map from the input through the embedded layer to the output is linear. Adding more layers before the embedded layer will not change the result of the training, because a composition of linear layers is still linear and, as such, we get a linear embedding.
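To see why, write each dense layer as an affine map, using $W_1, b_1$ for the encoder weights and $W_2, b_2$ for the decoder weights (notation introduced here for illustration). The whole network then computes

$$\hat{x} = W_2 (W_1 x + b_1) + b_2 = (W_2 W_1)\,x + (W_2 b_1 + b_2),$$

a single affine transformation of the input; stacking additional linear layers only multiplies more matrices together, so the family of functions the network can represent does not grow.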