Visualizing latent spaces with t-SNE
We now have an autoencoder that takes in a credit card transaction and outputs a credit card transaction that looks more or less the same. However, reconstruction is not why we built the autoencoder. Its main advantage is that we can now encode each transaction into a lower-dimensional representation that captures the main elements of the transaction.
To create the encoder model, all we have to do is to define a new Keras model that maps from the input to the encoded state:
encoder = Model(data_in, encoded)
Note that you don't need to train this model again. Because the new model reuses the same layer objects, its layers keep the weights learned by the previously trained autoencoder.
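To make this concrete, here is a minimal, self-contained sketch of how such an encoder can be split off from a trained autoencoder. The architecture is an assumption for illustration: the actual model is defined earlier in the chapter, and the layer sizes (29 input features, a 12-unit bottleneck matching the 12-dimensional encodings mentioned below) are stand-ins.

```python
# Hypothetical dense autoencoder; the real architecture is defined
# earlier in the chapter. Layer sizes here are illustrative.
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

data_in = Input(shape=(29,))                        # e.g. 29 transaction features
encoded = Dense(12, activation='relu')(data_in)     # 12-dimensional latent code
decoded = Dense(29, activation='sigmoid')(encoded)  # reconstruction

autoencoder = Model(data_in, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
# ... autoencoder.fit(...) would run here ...

# The encoder reuses the same layer objects, so it shares the
# autoencoder's trained weights -- no retraining needed.
encoder = Model(data_in, encoded)

X_test = np.random.rand(8, 29).astype('float32')  # stand-in data
enc = encoder.predict(X_test)
print(enc.shape)  # (8, 12)
```

Calling `predict` on the encoder alone then yields the compressed representation directly, which is exactly what the next step uses.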
To encode our data, we now use the encoder model:
enc = encoder.predict(X_test)
But how would we know whether these encodings contain any meaningful information about fraud? Once again, visual representation is key. While our encodings have fewer dimensions than the input data, they still have 12 dimensions. It's impossible for humans...
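A common way to inspect such a latent space, in line with this section's title, is to project the encodings down to two dimensions with t-SNE and plot them. The sketch below assumes scikit-learn's `TSNE` and matplotlib; `enc` and the fraud labels `y_test` are stood in for with random data, so the real plot would come from the encoder output above.

```python
# Sketch: project the 12-dimensional encodings to 2-D with t-SNE.
# `enc` and `y_test` are random stand-ins for the encoder output
# and the fraud labels from the dataset.
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for saving to file
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

enc = np.random.rand(500, 12)           # stand-in for encoder.predict(X_test)
y_test = np.random.randint(0, 2, 500)   # stand-in fraud labels (0/1)

tsne = TSNE(n_components=2, random_state=42)
res = tsne.fit_transform(enc)           # shape (500, 2)

# Color each point by its fraud label to see whether the latent
# space separates fraudulent from genuine transactions.
plt.scatter(res[:, 0], res[:, 1], c=y_test, cmap='coolwarm', s=5)
plt.savefig('tsne_latent.png')
print(res.shape)  # (500, 2)
```

If the encodings carry fraud-relevant information, fraudulent transactions should cluster together in the resulting scatter plot rather than being scattered uniformly among the genuine ones.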