Exploring activation functions
An activation function, also known as a transfer function, plays a vital role in neural networks: it introduces non-linearity. As we learned before, we apply the activation function to the weighted input plus the bias, that is, f(z), where z = (input * weights) + bias and f(.) is the activation function.
If we do not apply an activation function, then a neuron simply behaves like a linear regression model. The aim of the activation function is to apply a non-linear transformation so that the network can learn the complex underlying patterns in the data.
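The computation described above can be sketched in a few lines of plain Python. The helper names (neuron, sigmoid) and the sample inputs, weights, and bias are illustrative, not from the text:

```python
import math

def sigmoid(z):
    # Squashes any real value into the (0, 1) range
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias, activation):
    # Compute z = (input * weights) + bias, then apply f(z)
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(z)

# Example forward pass through a single neuron
out = neuron([0.5, -1.2], [0.8, 0.4], bias=0.1, activation=sigmoid)
print(out)  # a value strictly between 0 and 1
```

Without the activation (that is, returning z directly), the neuron's output would be a purely linear function of its inputs, no matter how many such neurons we stack.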
Now let's look at some of the most commonly used activation functions.
The sigmoid function
The sigmoid function is one of the most commonly used activation functions. It squashes its input to a value between 0 and 1. The sigmoid function can be defined as follows:

f(z) = 1 / (1 + e^(-z))
It is an S-shaped curve, as shown in Figure 7.4:
Figure 7.4: Sigmoid function...
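To see the squashing behavior and the S-shape numerically, we can evaluate the sigmoid at a few points (a minimal sketch; the sample inputs are arbitrary):

```python
import math

def sigmoid(z):
    # Sigmoid activation: maps any real z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# Large negative inputs approach 0, large positive inputs approach 1,
# and the curve crosses 0.5 exactly at z = 0
for z in (-10, -1, 0, 1, 10):
    print(z, round(sigmoid(z), 4))
```

Note that sigmoid(0) = 0.5, and the outputs saturate toward 0 and 1 at the extremes, which is what gives the curve its S shape.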