We have seen the basic building blocks of CNNs in the previous section. Now, we'll put these building blocks together and see what a complete CNN looks like.
CNNs are almost always built by stacking alternating blocks of convolution and pooling layers. The activation function used for the convolution layers is usually ReLU, as discussed in the previous chapters.
The following diagram shows the first few layers in a typical CNN, made up of a series of convolution and pooling layers:
The final layers in a CNN are almost always fully connected layers (dense layers) with a sigmoid or softmax activation function. Note that the sigmoid activation function is used for binary classification problems, whereas the softmax activation function is used for multiclass classification problems.
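To make the conv → ReLU → pool → flatten → dense → softmax pipeline concrete, here is a minimal NumPy sketch of a single forward pass. All sizes (an 8x8 single-channel input, one 3x3 filter, 10 output classes) are illustrative assumptions, not values from the text, and a real model would of course be built with a deep learning framework rather than by hand:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2D convolution (no padding, stride 1) followed by ReLU."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)  # ReLU activation

def max_pool(x, size=2):
    """2x2 max pooling with stride 2."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size].reshape(
        h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Softmax over the class scores, for multiclass output."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((8, 8))        # toy single-channel input (hypothetical size)
kernel = rng.random((3, 3))       # one 3x3 convolution filter

features = max_pool(conv2d(image, kernel))  # conv + ReLU block, then pooling
flat = features.flatten()                   # flatten for the dense layer
weights = rng.random((10, flat.size))       # fully connected layer, 10 classes
probs = softmax(weights @ flat)

print(features.shape)  # (3, 3): 8x8 -> 6x6 after 3x3 conv -> 3x3 after pooling
print(probs.sum())     # softmax probabilities sum to (approximately) 1
```

For binary classification, the final layer would instead be a single unit with a sigmoid activation, producing one probability rather than a distribution over classes.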
The Fully Connected layer is identical to those that we have seen in the first...