Autoencoders learn compact, efficient representations of their input. In their 2016 paper, Makhzani and others showed that adversarial autoencoders can learn clearer latent representations than variational autoencoders, and, similar to the DCGAN that we saw in the previous recipe, we get the added benefit of learning to generate new examples. This can help in semi-supervised or supervised learning scenarios and allows training with less labeled data. Compressed representations can also help with content-based retrieval.
In this recipe, we'll implement an adversarial autoencoder in PyTorch, covering both supervised and unsupervised approaches, and show the results. The unsupervised approach yields a nice clustering of the latent space by class, while in the supervised approach our encoder-decoder architecture can separate out style, which gives us the ability to do style transfer. We'll work with MNIST, the hello world dataset of computer vision.
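To make the idea concrete before we dive in, here is a minimal sketch of the adversarial autoencoder training scheme in PyTorch: an encoder and decoder minimize reconstruction error, while a discriminator on the latent code pushes the encoder's output toward a prior distribution (a standard Gaussian here). The layer sizes, latent dimension, and learning rates are illustrative assumptions, not the exact architecture we'll build in this recipe:

```python
import torch
import torch.nn as nn

LATENT_DIM = 8  # assumed latent code size, chosen for illustration

class Encoder(nn.Module):
    """Maps a flattened 28x28 image to a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs the image from the latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Judges whether a latent code comes from the prior or the encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder, decoder, disc = Encoder(), Decoder(), Discriminator()
recon_loss, adv_loss = nn.MSELoss(), nn.BCELoss()
opt_ae = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

def train_step(x):
    """One training step on a batch of flattened images x (batch, 784)."""
    # 1) Reconstruction phase: encoder + decoder minimize pixel-wise error.
    opt_ae.zero_grad()
    loss_rec = recon_loss(decoder(encoder(x)), x)
    loss_rec.backward()
    opt_ae.step()

    # 2) Discriminator phase: tell prior samples (real) from encoded codes (fake).
    opt_disc.zero_grad()
    z_real = torch.randn(x.size(0), LATENT_DIM)  # samples from the Gaussian prior
    z_fake = encoder(x).detach()
    loss_d = adv_loss(disc(z_real), torch.ones(x.size(0), 1)) \
           + adv_loss(disc(z_fake), torch.zeros(x.size(0), 1))
    loss_d.backward()
    opt_disc.step()

    # 3) Regularization phase: the encoder tries to fool the discriminator,
    #    so its codes come to match the prior distribution.
    opt_ae.zero_grad()
    loss_g = adv_loss(disc(encoder(x)), torch.ones(x.size(0), 1))
    loss_g.backward()
    opt_ae.step()
    return loss_rec.item(), loss_d.item(), loss_g.item()
```

This three-phase step is what distinguishes an adversarial autoencoder from a variational one: instead of a KL-divergence term, the adversarial game shapes the latent distribution, which is why any prior we can sample from will work.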