Several factors explain why deep learning was developed and moved to the center of attention in machine learning only in recent decades.
One reason, perhaps the main one, is the progress in hardware: the availability of new processors, such as graphics processing units (GPUs), has greatly reduced the time needed to train networks, cutting it by a factor of roughly 10 to 20.
Indeed, since each connection between individual neurons carries a numerically estimated weight, and since networks learn by properly calibrating these weights, it is clear that as a network grows more complex the computing power it requires increases enormously, which is why graphics processors are used in the experiments.
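To give a rough sense of this growth, the sketch below (plain Python, no particular framework assumed; the function name count_parameters is introduced here purely for illustration) counts the weights and biases of a fully connected network: every connection between adjacent layers contributes one weight, and every neuron in the receiving layer contributes one bias, so the total grows roughly with the product of layer widths.

```python
def count_parameters(layer_sizes):
    """Count weights and biases of a fully connected network,
    given the number of neurons in each layer."""
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += n_in * n_out  # one weight per connection between the two layers
        total += n_out         # one bias per neuron in the receiving layer
    return total

# A small network versus a deeper, wider one: the parameter count,
# and with it the training cost, grows rapidly.
print(count_parameters([784, 128, 10]))         # 101,770 parameters
print(count_parameters([784, 2048, 2048, 10]))  # 5,824,522 parameters
```

Every one of these parameters must be updated repeatedly during training, which is exactly the kind of massively parallel arithmetic that GPUs accelerate.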