We will use the same convolutional autoencoder architecture for this task. However, we will reinitialize the model and train it from scratch once again, this time using the noisy images as inputs and the original clean images as targets:
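The noisy training inputs were prepared earlier; for reference, a minimal sketch of how such inputs are typically generated, assuming images normalized to [0, 1] and a hypothetical noise_factor of 0.4 (the exact value used previously may differ):

import numpy as np

# Assumed noise level for illustration only.
noise_factor = 0.4

# Add zero-mean Gaussian noise to the clean images, then clip back
# into the valid [0, 1] pixel range.
x_train_noisy = x_train + noise_factor * np.random.normal(
    loc=0.0, scale=1.0, size=x_train.shape)
x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0)

With the noisy inputs in place, we compile and fit the model as before: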
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(x_train_noisy, x_train, epochs=50, batch_size=20,
                shuffle=True, verbose=1)
Epoch 1/50
875/875 [==============================] - 7s 8ms/step - loss: 0.0449
Epoch 2/50
875/875 [==============================] - 6s 7ms/step - loss: 0.0212
Epoch 3/50
875/875 [==============================] - 6s 7ms/step - loss: 0.0185
Epoch 4/50
875/875 [==============================] - 6s 7ms/step - loss: 0.0169
As we can see, the loss converges noticeably more slowly for the denoising autoencoder than in our previous experiments. This is to be expected, as...