A more efficient network
Training the previous model takes 686 seconds on my laptop and reaches a validation accuracy of 74.5% and a training accuracy of 91.4%. Ideally, we want to keep accuracy at this level while reducing the training time.
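As a rough illustration of how such numbers can be read off, the wall-clock time and the two accuracies can be measured directly around model.fit(). This is only a sketch, not the exact training loop used above; the batch size, epoch count, and the use of x_test as validation data are placeholder assumptions:

import time

start = time.time()
history = model.fit(x_train, y_train,
                    batch_size=32, epochs=10,   # placeholder hyperparameters
                    validation_data=(x_test, y_test))
print("Training time: %.0f s" % (time.time() - start))
# Older Keras versions use the keys 'acc' / 'val_acc' instead.
print("Training accuracy:   %.3f" % history.history['accuracy'][-1])
print("Validation accuracy: %.3f" % history.history['val_accuracy'][-1])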
Let's check some of the convolutional layers:
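One way to produce such activation graphs, sketched here under the assumption of an already trained model, is to build an intermediate Keras model that outputs a convolutional layer's feature maps and average each channel's activation over a batch of validation images. The layer index and the number of samples below are illustrative choices, not the exact ones used for the figures:

import numpy as np
from keras.models import Model

# Output the activations of the first convolutional layer
# (index 0 is an assumption; pick any Conv2D layer you want to inspect).
activation_model = Model(inputs=model.input,
                         outputs=model.layers[0].output)

# Average each channel's activation over space and over a small batch.
activations = activation_model.predict(x_test[:100])
per_channel = activations.mean(axis=(0, 1, 2))

# Channels whose mean activation is close to zero show up as black feature maps.
print(np.round(per_channel, 3))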
We have already seen activation graphs like these in Chapter 5, Deep Learning Workflow, and we know that channels that appear black produce very small activations, so they contribute little to the result. In practice, it looks as if about half of the channels are barely used. Let's try to halve the number of channels in every convolutional layer:
model.add(Conv2D(filters=16, kernel_size=(3, 3), activation='relu',
                 input_shape=x_train.shape[1:], padding="same"))
model.add(Conv2D(filters=16, kernel_size=(3, 3), activation='relu',
                 ...
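To see how much smaller the network becomes, the parameter counts of the two versions can be compared directly. The names model_full and model_half below are hypothetical, standing in for the original network and the one with half the filters:

# Hypothetical names: model_full is the original network,
# model_half is the variant with half the filters in every Conv2D layer.
print("Original parameters:", model_full.count_params())
print("Halved parameters:  ", model_half.count_params())
# model_half.summary() also prints a per-layer breakdown.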