Much of the power of neural network processing comes from activation functions. An activation function is a mathematical function that converts an input to an output, and it adds the "magic" of neural network processing. Without activation functions, a neural network behaves like a linear function. A linear function is one whose output is directly proportional to its input, for example:

f(x) = 2x
A linear function is a polynomial of degree one. Put simply, its graph is a straight line, without any curves.
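To make this concrete, here is a minimal NumPy sketch (an illustrative example, not taken from any particular library) showing that stacking two linear layers without an activation function in between collapses into a single linear layer:

    import numpy as np

    # Two linear layers with weights W1, W2 and biases b1, b2, and no
    # activation function between them.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
    W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
    x = rng.normal(size=3)

    two_layer_out = W2 @ (W1 @ x + b1) + b2

    # The same output from one linear layer with W = W2 @ W1 and
    # b = W2 @ b1 + b2 -- the second layer adds no expressive power.
    W, b = W2 @ W1, W2 @ b1 + b2
    print(np.allclose(two_layer_out, W @ x + b))  # True

The same algebra applies however many linear layers are stacked, so depth alone buys nothing without a nonlinearity in between.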
However, most of the problems that neural networks try to solve are nonlinear and complex in nature, and activation functions are what supply this nonlinearity. Nonlinear functions include polynomials of degree higher than one, for example:

f(x) = x^3 + 2x + 1
The graph of a nonlinear function is curved, and this curvature is what allows it to capture complex relationships.
Activation functions give neural networks their nonlinearity and make them true universal function approximators.
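As a rough illustration (the definitions below are the standard formulas; the numeric check is just an example), here are three widely used activation functions, together with a quick demonstration that they violate the additivity property f(a + b) = f(a) + f(b) that a linear function satisfies:

    import numpy as np

    # Standard definitions of three widely used activation functions.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))  # squashes any input into (0, 1)

    def tanh(z):
        return np.tanh(z)                # squashes any input into (-1, 1)

    def relu(z):
        return np.maximum(0.0, z)        # passes positives, zeroes negatives

    # A linear function f satisfies f(a + b) == f(a) + f(b); these do not,
    # which is exactly what makes them useful between layers.
    a, b = -1.0, 2.0
    print(relu(a + b), relu(a) + relu(b))           # 1.0 vs 2.0
    print(sigmoid(a + b), sigmoid(a) + sigmoid(b))  # ~0.731 vs ~1.150

Placing any of these between the layers of the earlier sketch breaks the collapse into a single linear map, which is why even a modest stack of nonlinear layers can approximate complicated functions.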