Every neuron must have an activation function, which gives the neuron the nonlinearity necessary to model complex nonlinear datasets. The function takes the weighted sum of all the inputs and generates an output signal; you can think of it as a transform between input and output. By choosing a proper activation function, we can bound the output values to a defined range.
If xj is the jth input, Wj the weight connecting the jth input to our neuron, and b the bias of our neuron, then the output of the neuron (in biological terms, the firing of the neuron) is determined by the activation function, and mathematically it is expressed as follows:

y = g(∑Wjxj + b)
Here, g represents the activation function. The argument to the activation function, ∑Wjxj + b, is called the activity of the neuron.
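The computation above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation: the sigmoid is used here only as one common choice of g, and the input values, weights, and bias are made-up numbers for demonstration.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: squashes the activity into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, W, b, g=sigmoid):
    """Output of a single neuron: g(sum_j W_j * x_j + b)."""
    activity = np.dot(W, x) + b   # the activity of the neuron
    return g(activity)            # the activation function bounds the output

# Hypothetical inputs, weights, and bias for illustration
x = np.array([0.5, -1.0, 2.0])
W = np.array([0.4, 0.3, -0.2])
b = 0.1

print(neuron_output(x, W, b))
```

Because the sigmoid maps any activity to (0, 1), the neuron's output stays in a bounded range regardless of how large the weighted sum grows; swapping in a different g (tanh, ReLU, and so on) changes that range.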