Neural networks originated from the observation that not every function can be approximated by linear/logistic regression: data can contain complex, nonlinear shapes that only correspondingly complex functions can capture.
The more flexible the function (provided overfitting is kept in check), the better the prediction accuracy it can achieve.
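As a quick illustration of this point (not from the original text, and with hand-picked rather than learned weights), the sketch below shows the classic XOR pattern: no single linear boundary of the kind a linear/logistic regression draws can separate it, but a network with one small hidden layer can.

```python
# A minimal NumPy sketch: XOR cannot be separated by any single linear
# boundary, but a network with one hidden layer of two units can.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 1, 1, 0])                      # XOR targets

def step(z):
    return (z > 0).astype(int)

# Hand-picked weights for illustration (a real network would learn these).
W1 = np.array([[1, 1], [1, 1]])   # hidden-layer weights
b1 = np.array([-0.5, -1.5])       # hidden-layer biases
W2 = np.array([1, -1])            # output-layer weights
b2 = -0.5                         # output-layer bias

hidden = step(X @ W1 + b1)        # hidden layer builds two new features
output = step(hidden @ W2 + b2)   # output layer combines them linearly

print(output)                     # [0 1 1 0], matching the XOR targets
```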
The following image illustrates how a neural network goes about fitting a model to data.
The typical structure of a neural network is as follows:
The input level/layer in this diagram is typically made up of the independent variables that are used to predict the output (dependent variable) level/layer.
The hidden level/layer transforms the input variables into a higher-order representation. The way in which a hidden layer transforms its inputs is as follows:
In the preceding diagram, x1 through xn are the input (independent) variables; each hidden unit multiplies them by its weights, sums the results, adds a bias, and passes the total through an activation function to produce its value.
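The following is a minimal sketch of that transformation for a single hidden unit, assuming NumPy and a sigmoid activation (the specific numbers are illustrative, not from the original text).

```python
# One hidden unit: a weighted sum of the inputs plus a bias,
# passed through a nonlinear activation function.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])    # input variables x1, x2, x3
w = np.array([0.8, 0.1, -0.4])    # weights from the inputs to this hidden unit
b = 0.2                           # bias of the hidden unit

hidden_unit = sigmoid(np.dot(w, x) + b)  # the hidden unit's output value
print(hidden_unit)
```

A full hidden layer simply repeats this computation for each of its units, each with its own weight vector and bias.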