When we described deep learning earlier, we noted that its defining characteristic is the presence of hidden layers composed of neurons, each of which holds a weighted sum of the predictor variables in a dataset. We have just discussed how this array of interconnected neurons is modeled after the human brain. Now let's take a deeper dive into what happens inside these hidden layers where neurons are created.
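To make the idea of a weighted sum concrete, here is a minimal sketch of what a single hidden-layer neuron computes. The predictor values and coefficients below are purely illustrative:

```python
import numpy as np

# One observation's predictor variables (illustrative values)
predictors = np.array([2.0, 0.5, 1.5])

# One coefficient (weight) per predictor for this neuron
weights = np.array([0.4, -0.2, 0.7])

# The neuron holds the weighted sum of all predictors
weighted_sum = np.dot(predictors, weights)
print(weighted_sum)  # 0.4*2.0 + (-0.2)*0.5 + 0.7*1.5 = 1.75
```

A layer with many units simply repeats this computation, each unit with its own set of coefficients.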
At this point, we can summarize the following:
- Each predictor variable is assigned a randomly initialized coefficient for every neuron, with the number of neurons set by how many units we choose to create in each layer.
- The algorithm then repeatedly adjusts these coefficients until the error is minimized.
- However, there is one additional coefficient involved in this process of passing weighted values to the neurons, and that is known as...
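The first two steps above can be sketched in a few lines: coefficients start at random values and are then adjusted iteratively to reduce the error. This is a toy illustration using gradient descent on a mean squared error, with made-up data; it is not the full training procedure of a deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 observations, 3 predictor variables (illustrative)
X = rng.normal(size=(4, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Step 1: each predictor receives a coefficient at random
w = rng.normal(size=3)

# Step 2: repeatedly adjust the coefficients to reduce the error
# (here: gradient descent on mean squared error)
for _ in range(2000):
    error = X @ w - y
    grad = 2 * X.T @ error / len(y)
    w -= 0.05 * grad

print(np.round(w, 2))  # coefficients move toward [1.0, -2.0, 0.5]
```

Each pass lowers the error a little, which is exactly the iterative refinement described in the bullets above.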