It was only in the late 1970s and early 1980s that neural networks began to attract attention again. Several research papers showed how neural networks, built from multiple layers of perceptrons stacked one after the other, could be trained using a rather straightforward scheme: backpropagation. As we will detail in the next section, this training procedure works by computing the network's error and propagating it backward through the layers of perceptrons, using derivatives to update their parameters. Soon after, the first convolutional neural network (CNN), the ancestor of today's recognition methods, was developed and applied to handwritten character recognition with some success.
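Before the full derivation in the next section, the following minimal NumPy sketch may help make the idea concrete. It trains a tiny two-layer network on the classic XOR problem with manually coded backpropagation; the architecture, learning rate, step count, and random seed are illustrative assumptions, not taken from the original papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR with a tiny 2-4-1 network (illustrative sizes).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for both layers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0  # learning rate, chosen for this toy example

for step in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)    # hidden layer activations
    out = sigmoid(h @ W2 + b2)  # output layer activation

    # The network's error on this batch (mean squared error gradient).
    err = out - y

    # Backward pass: propagate the error through each layer with the
    # chain rule, using sigmoid'(z) = s * (1 - s).
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Update every parameter with its derivative (gradient descent).
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```

Each iteration performs a forward pass, measures the error, and applies the chain rule layer by layer to update the weights, which is exactly the recipe those early papers formalized.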
Alas, these methods were computationally heavy and simply could not scale to larger problems. Instead, researchers adopted lighter machine learning methods such as support vector machines (SVMs), and the use of neural networks stalled for another decade. So, what brought them back and led to the deep learning...