AI has made greater strides in the past several years than in the 60-odd years since its birth. Its popularity has been further fueled by the increasingly public nature of its benefits: self-driving cars, personal assistants, and its ubiquitous use in social media and advertising. For most of its history, AI was a field that had little interaction with the general public, but it has now come to the forefront of international discourse.
Today's age of AI is the result of three trends:
- The increasing amount of data and computing power available to AI researchers and practitioners
- Ongoing research by Geoffrey Hinton and his lab at the University of Toronto into deep neural networks
- Increasingly public applications of AI that have driven adoption and further acceptance into mainstream technology culture
Today, companies, governments, and other organizations benefit from the big data revolution of the mid-2000s, which has produced a plethora of data stores. At last, AI applications have the requisite data to train on, and computational power is cheap and only getting cheaper.
On the research front, in 2012, Hinton and two of his students showed that deep neural networks could outperform all other methods at image recognition in the ImageNet Large Scale Visual Recognition Challenge. The modern era of AI was born.
Interestingly enough, the work of Hinton's team on computer vision also popularized the use of Graphics Processing Units (GPUs) to train deep networks, and it helped establish dropout and ReLU, which have become cornerstones of deep learning. We'll discuss these in the coming chapters. Today, Hinton is the most cited AI researcher on the planet. He is a researcher at Google Brain and has been tied to many major developments in AI in the modern era.
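Although we'll cover them properly in later chapters, the two techniques named above are simple enough to preview here. The following is a minimal NumPy sketch, not production code: `relu` and `dropout` are illustrative functions written for this example, and the dropout shown is the common "inverted" variant, which rescales surviving activations so their expected value is unchanged.

```python
import numpy as np

def relu(x):
    # ReLU (rectified linear unit): passes positive values through,
    # zeroes out negative ones.
    return np.maximum(0.0, x)

def dropout(x, rate=0.5, rng=None):
    # Inverted dropout (training time): randomly zero a fraction `rate`
    # of activations, then scale the survivors by 1 / (1 - rate).
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed for reproducibility
    mask = rng.random(x.shape) >= rate
    return (x * mask) / (1.0 - rate)

activations = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(activations).tolist())  # [0.0, 0.0, 0.0, 1.5, 3.0]
```

At inference time, dropout is simply switched off; because of the inverted scaling during training, no further adjustment is needed.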
AI was thrown further into the public sphere when, in 2011, IBM's Watson defeated the reigning Jeopardy! champions, and in 2016, Google's AlphaGo defeated one of the world's top players at one of the most challenging games ever devised: Go.
Today, we are closer than ever to having machines that can pass the Turing test. Networks can generate ever more realistic imitations of speech, images, and writing. Reinforcement learning methods and Ian Goodfellow's Generative Adversarial Networks (GANs) have made incredible strides, and emerging research is working to demystify the inner workings of deep neural networks. As the field progresses, however, we should all be mindful of overpromising. Throughout AI's history, companies have often overpromised what the technology can do, and in turn, we've seen consistent disappointment in its abilities. Focusing AI's abilities on only certain applications, and continuing to view research in the field from a purely biological perspective, will only hurt its advancement going forward. In this book, however, we'll see that today's practical applications are directed and realistic, and that the field is making more strides toward true AI than ever before.