It wasn't until the 1980s that Hubel and Wiesel's findings were repurposed in the field of computer science. The Neocognitron (Fukushima, 1980: https://www.rctn.org/bruno/public/papers/Fukushima1980.pdf) leveraged the concept of simple and complex cells by stacking alternating layers of each. This ancestor of the modern neural network used layers with modifiable parameters (analogous to simple cells) to extract features, while interleaved pooling layers (analogous to complex cells) made the network invariant to minor variations in those features. While intuitive, this architecture was still not powerful enough to capture the intricate complexities present in visual signals.
One of the major breakthroughs followed in 1998, when famed AI researchers Yann LeCun and Yoshua Bengio were able to train a CNN, leveraging gradient...