ReLU is the activation layer of choice for CNNs. We studied activation layers in Chapter 2, Dive Deep into Deep Neural Networks. As we know, we need to introduce non-linearity into our model because the convolution operation is linear. So, we apply an activation function to the output of the convolution layer.
The ReLU function simply changes all negative values to 0 and leaves positive values unchanged, as shown in the following diagram:
Fig 6.13: ReLU
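To make the definition concrete, here is a minimal NumPy sketch of the ReLU function, f(x) = max(0, x); the sample input values are hypothetical and chosen only to show negatives being zeroed out:

```python
import numpy as np

def relu(x):
    # Keep positive values as they are; replace negative values with 0
    return np.maximum(0, x)

# Hypothetical sample input mixing negative and positive values
x = np.array([-3.0, -1.5, 0.0, 2.0, 4.5])
print(relu(x))  # [0.  0.  0.  2.  4.5]
```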
An example of ReLU introducing non-linearity into a feature map's output can be seen in the following diagram:
Fig 6.14: Applying ReLU
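The following is a small Keras sketch of how ReLU typically follows a convolution layer in a CNN; the input shape, filter count, and dummy image are illustrative assumptions, not values from a specific model in this chapter:

```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative model: a linear convolution followed by a ReLU activation layer.
# The input shape (28x28x1) and filter count (8) are arbitrary for this sketch.
model = models.Sequential([
    layers.Conv2D(8, (3, 3), input_shape=(28, 28, 1)),  # linear convolution
    layers.Activation('relu'),                           # introduces non-linearity
])

# A random dummy image stands in for a real input.
dummy_image = np.random.rand(1, 28, 28, 1).astype('float32')
feature_maps = model.predict(dummy_image)

# After ReLU, no value in the resulting feature maps is negative.
print(feature_maps.min() >= 0)  # True
```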
In the next section, we will learn about fully connected layers.