Multi-layer perceptrons
Multi-layer perceptrons (MLPs) are the most basic form of neural network. An MLP consists of three components: an input layer, one or more hidden layers, and an output layer. The input layer represents a vector of regressors or input features, for example, observations from the preceding p points in time, $[x_{t-1}, x_{t-2}, \ldots, x_{t-p}]$. The input features are fed to a hidden layer of n neurons, each of which applies a linear transformation followed by a nonlinear activation to the input features. The output of the i-th neuron is $g_i = h(\mathbf{w}_i \mathbf{x} + b_i)$, where $\mathbf{w}_i$ and $b_i$ are the weights and bias of the linear transformation and h is a nonlinear activation function. The nonlinear activation enables the neural network to model complex nonlinearities in the underlying relationship between the regressors and the target variable. A popular choice for h is the sigmoid function,
$h(z) = \frac{1}{1 + e^{-z}}$, which squashes any real number to the interval [0, 1]. Due to this property, the sigmoid function is used to generate binary class probabilities.
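To make the hidden-layer computation concrete, the following is a minimal NumPy sketch of a single sigmoid-activated hidden layer. The lag order p, the number of neurons n, and the random weight initialization are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes (assumptions): p lagged observations, n hidden neurons
p, n = 4, 8
rng = np.random.default_rng(0)

x = rng.normal(size=p)        # input vector [x_{t-1}, ..., x_{t-p}]
W = rng.normal(size=(n, p))   # weight vectors w_i stacked as rows
b = rng.normal(size=n)        # biases b_i

# Each hidden neuron computes g_i = h(w_i . x + b_i);
# the matrix product evaluates all n neurons at once
g = sigmoid(W @ x + b)
print(g.shape)  # (8,) -- one activation per hidden neuron
```

Stacking the weight vectors into a matrix lets one matrix-vector product evaluate every neuron in the layer, which is how the layer-wise computation is typically vectorized in practice.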