We have seen how a single neuron, or perceptron, works; now let's expand that idea to deep learning. The following diagram shows what multiple connected perceptrons look like:
In the preceding diagram, we can see various layers of single perceptrons connected to each other through their inputs and outputs. The input layer is violet, the hidden layers are blue and green, and the output layer of the network is represented in red.
The input layer takes in real values from the data, so it receives the actual data as its input. Next come the hidden layers, which sit between the input and output layers; when three or more hidden layers are present, the network is considered a deep neural network. The final layer is the output layer, which produces the network's estimate of whatever quantity we are trying to predict. As we progress through more layers, the level of abstraction of what the network represents increases.
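To make the layer structure concrete, here is a minimal sketch of a forward pass through such a network, written with NumPy. The layer sizes (3 inputs, two hidden layers of 4 and 3 neurons, 1 output) and the ReLU activation are illustrative assumptions, not values taken from the diagram:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common hidden-layer activation; chosen here as an assumption.
    return np.maximum(0, x)

# Layer sizes: input, hidden, hidden, output (hypothetical example values).
layer_sizes = [3, 4, 3, 1]

# One weight matrix and bias vector per connection between layers.
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    a = x
    # Each hidden layer: weighted sum of the previous layer's outputs,
    # plus a bias, passed through an activation function.
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)
    # The output layer produces the final estimate (linear here).
    return a @ weights[-1] + biases[-1]

x = np.array([0.5, -1.0, 2.0])  # "real values from the data"
print(forward(x).shape)  # one output value, shape (1,)
```

Each loop iteration corresponds to one colored layer in the diagram: the outputs of one layer of perceptrons become the inputs of the next.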