Activating neurons with activation functions
In the previous section, we reviewed how weights and biases contribute to a model’s predictions. However, the fourth step in Figure 11.2 involves something called an activation function. What is an activation function, anyway?
In the intricate architecture of NNs, activation functions are the gears that infuse life and non-linearity into the system. An activation function is a mathematical function applied to the output of each neuron, introducing non-linearity into the result. This non-linearity is the key distinction from linear regression, which combines weights and biases in a purely linear way. Let’s explore the role and types of activation functions that breathe vitality into NNs.
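To make this concrete, here is a minimal sketch in NumPy showing two common activation functions, ReLU and sigmoid, applied to a hypothetical pre-activation vector `z` (the `z` values are invented for illustration):

```python
import numpy as np

def relu(z):
    # ReLU: clip negative pre-activation values to zero
    return np.maximum(0.0, z)

def sigmoid(z):
    # Sigmoid: squash pre-activation values into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical pre-activation output of one neuron layer: z = Wx + b
z = np.array([-2.0, 0.0, 3.0])

print(relu(z))     # → [0. 0. 3.]
print(sigmoid(z))  # values between 0 and 1, with sigmoid(0) = 0.5
```

Because both functions bend or clip their input rather than scaling it by a constant, stacking layers that use them lets the network represent curves and boundaries that no single linear transformation could.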
At its core, non-linearity allows NNs to capture complex patterns in data that a linear approach would miss. Imagine trying to fit a straight line to data that twists and turns in various directions. A linear model would fail to capture the intricacies, but with...