The term Artificial Intelligence (AI) carries a great deal of weight. AI has benefited from over 70 years of research and development. Its history is varied and winding, but one constant remains: tireless researchers have worked through booms and lapses in funding, through promise and doubt, to push us toward ever more capable AI.
Before we begin, let's cut through the buzzwords and marketing and establish what AI really is. For the purposes of this book, we will rely on this definition:
AI is a system or algorithm that allows computers to perform tasks without being explicitly programmed to do so.
AI is an interdisciplinary field. While this book focuses largely on deep learning, the field also encompasses elements of robotics and the IoT, and it overlaps heavily (if it hasn't absorbed it entirely) with natural language processing research. It is also intrinsically linked with fields such as Human-Computer Interaction (HCI), as it becomes increasingly important to integrate AI into our lives and the modern world around us.
AI goes through waves, and it is bound to go through another (perhaps smaller) one in the future. Each time, we push the limits of AI up against the computational power available to us, and research and development stalls. This time may be different, as we benefit from the confluence of increasingly large and efficient data stores, fast and cheap computing power, and funding from some of the most profitable companies in the world. To understand how we ended up here, let's start at the beginning.
In this chapter, we will cover the following topics:
- The beginnings of AI – 1950–1974
- Rebirth – 1980–1987
- The modern era takes hold – 1997–2005
- Deep learning and the future – 2012–Present