The 1980s saw the birth of deep learning, the branch of AI that has since become the focus of most modern AI research. With the revival of neural network research by John Hopfield and David Rumelhart, and several funding initiatives in Japan, the United States, and the United Kingdom, AI research was back on track.
In the early 1980s, while the United States was still reeling from the effects of the AI Winter, Japan was funding the Fifth Generation Computer Systems project to advance AI research. In the US, DARPA once again ramped up funding for AI research, and businesses regained interest in AI applications. IBM's T.J. Watson Research Center published a statistical approach to language translation (https://aclanthology.info/pdf/J/J90/J90-2002.pdf), which replaced traditional rule-based NLP models with probabilistic ones, ushering in the modern era of NLP.
Hinton, the student from the University of Cambridge who had persisted in his research, would make a name for himself by coining the term deep learning. He joined forces with Rumelhart to become one of the first researchers to introduce the backpropagation algorithm for training ANNs, which remains the backbone of modern deep learning. Like many before him, Hinton was limited by computational power, and it would take another 26 years before the full weight of his discovery was felt.
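The core idea behind backpropagation is simple: push the prediction error backward through the network, layer by layer, using the chain rule to compute how much each weight contributed to it. The following is a minimal illustrative sketch, not Hinton and Rumelhart's original formulation; the network size (2-2-1), the XOR task, the learning rate, and all variable names are our own assumptions.

```python
# Minimal backpropagation sketch: a hypothetical 2-2-1 network trained on XOR.
# Network size, task, and hyperparameters are illustrative choices, not the
# original 1986 setup.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialized weights: w1 maps inputs to hidden units, w2 maps
# hidden units to the single output; b1 and b2 are biases.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
lr = 0.5  # learning rate

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = total_loss()

for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: the chain rule carries the error gradient from
        # the output back through the hidden layer.
        dy = (y - t) * y * (1 - y)  # gradient at the output pre-activation
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
        b2 -= lr * dy

final_loss = total_loss()
print(initial_loss, final_loss)
```

Even this toy version shows why the algorithm mattered: without backpropagation there was no practical way to assign credit to hidden-layer weights, which is exactly what limited earlier single-layer networks.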
By the late 1980s, the personal computing revolution and missed expectations threatened the field. Commercial development all but came to a halt: mainframe manufacturers stopped producing hardware that could handle AI-oriented languages, and AI-oriented hardware makers went bankrupt. It seemed as if everything had come to a standstill.