Hands-On Artificial Intelligence for Beginners
An introduction to AI concepts, algorithms, and their implementation

By David Dindi and Patrick D. Smith
Paperback | Packt | October 2018 | 1st Edition | 362 pages | ISBN-13: 9781788991063
Table of Contents
Preface
1. The History of AI
2. Machine Learning Basics
3. Platforms and Other Essentials
4. Your First Artificial Neural Networks
5. Convolutional Neural Networks
6. Recurrent Neural Networks
7. Generative Models
8. Reinforcement Learning
9. Deep Learning for Intelligent Agents
10. Deep Learning for Game Playing
11. Deep Learning for Finance
12. Deep Learning for Robotics
13. Deploying and Maintaining AI Applications
14. Other Books You May Enjoy

The beginnings of AI – 1950–1974

AI has been a long-sought-after concept since the days of the earliest mathematicians and thinkers. The ancient Greeks developed myths of automata, artificial beings that would complete tasks the gods considered menial, and throughout early history, thinkers pondered what it meant to be human and whether human intelligence could be replicated. While it's impossible to pinpoint an exact beginning for AI as a field of research, its development parallels the early advances of computer science. One could argue that computer science as a field grew out of this early desire to create self-thinking machines.

During the Second World War, the British mathematician and code breaker Alan Turing developed some of the first computers, conceived with the vision of AI in mind. Turing wanted to create a machine that would mimic human comprehension, utilizing all available information to reason and make decisions. In 1950, he published Computing Machinery and Intelligence, which introduced what we now call the Turing test of AI. The Turing test is a benchmark for measuring a machine's aptitude at mimicking human interaction: to pass, the machine must sufficiently fool a discerning judge as to whether it is human. This might sound simple, but think about how many complex capabilities would have to be mastered to reach this point. The machine would need to comprehend, store information about, and respond to natural language, all while retaining knowledge and responding to situations with what we deem common sense.

Turing could not move far beyond his initial developments; in his day, using a computer for research cost almost $200,000 per month, and computers could not store commands. His research and devotion to the field, however, have earned him lasting accolades. Today, he is widely considered the father of AI and of the academic study of computer science.

It was in the summer of 1956, however, that the field was truly born. Just a few months before, researchers at the RAND Corporation had developed the Logic Theorist – considered the world's first AI program – which proved 38 theorems from the Principia Mathematica. Spurred on by this development and others, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon hosted the now-famous Dartmouth Summer Research Project on AI, coining the term Artificial Intelligence itself and laying the groundwork for the field. With funding from the Rockefeller Foundation, these four friends brought together some of the most preeminent researchers in AI over the course of the summer to brainstorm and, effectively, attempt to provide a roadmap for the field. They came from the institutions and companies on the leading edge of the computing revolution at the time: Harvard, Dartmouth, MIT, IBM, Bell Labs, and the RAND Corporation. Their topics of discussion were remarkably forward-thinking for the time; they could easily be those of an AI conference today: Artificial Neural Networks (ANNs), natural language processing (NLP), theories of computation, and general computing frameworks. The Summer Research Project was seminal in creating the field of AI as we know it today, and many of its discussion topics spurred the growth of AI research and development through the 1950s and 1960s.

After 1956, innovation kept up a rapid pace. Two years later, in 1958, a researcher at the Cornell Aeronautical Laboratory named Frank Rosenblatt invented one of the founding algorithms of AI, the Perceptron. The following diagram shows the Perceptron algorithm:

Figure: The Perceptron algorithm

Perceptrons are simple, single-layer networks that act as linear classifiers. They consist of four main architectural components, described as follows (a minimal code sketch follows the list):

  • The input layer: The initial layer that reads in data
  • Weight and bias vectors: During training, the weights learn appropriate values for the connections between neurons, while the biases shift the activation function to fit the desired output
  • A summation function: A simple summation of the weighted input
  • An activation function: A simple mapping of the summed, weighted input to the output
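
To make these components concrete, here is a minimal sketch of a perceptron in plain NumPy. This is an illustrative implementation, not code from the book; the class name, learning rate, and step activation are all assumptions:

```python
import numpy as np

class Perceptron:
    """A minimal single-layer perceptron with a step activation."""
    def __init__(self, n_inputs, lr=0.1):
        self.w = np.zeros(n_inputs)  # weight vector
        self.b = 0.0                 # bias
        self.lr = lr                 # learning rate

    def predict(self, x):
        # Summation function followed by a step activation
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def fit(self, X, y, epochs=10):
        # Classic perceptron learning rule: update only on errors
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)
                self.w += self.lr * error * xi
                self.b += self.lr * error

# Usage: learn the linearly separable AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
p = Perceptron(n_inputs=2)
p.fit(X, y)
print([p.predict(xi) for xi in X])  # [0, 0, 0, 1]
```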

As you can see, these networks perform only basic mathematical operations: a weighted sum of the inputs followed by a threshold. They failed to live up to the hype, however, and the disappointment they created contributed significantly to the first AI winter.

Another important development of this early era of research was ADALINE (adaptive linear neuron). ADALINE attempted to improve upon the perceptron by using continuous predicted values, rather than class labels, to learn its coefficients. The following diagram shows the ADALINE algorithm:

Figure: The ADALINE algorithm
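
As a rough sketch of that difference (again illustrative, not the book's code), an ADALINE-style learner applies the Widrow-Hoff delta rule, measuring the error against the raw net input before it is thresholded:

```python
import numpy as np

class Adaline:
    """A minimal ADALINE sketch; compare fit() with the perceptron's."""
    def __init__(self, n_inputs, lr=0.01):
        self.w = np.zeros(n_inputs)
        self.b = 0.0
        self.lr = lr

    def net_input(self, x):
        return np.dot(self.w, x) + self.b

    def predict(self, x):
        # Thresholding happens only at prediction time
        return 1 if self.net_input(x) > 0 else 0

    def fit(self, X, y, epochs=20):
        for _ in range(epochs):
            for xi, target in zip(X, y):
                # Delta rule: the error uses the continuous net input,
                # not the thresholded class label
                error = target - self.net_input(xi)
                self.w += self.lr * error * xi
                self.b += self.lr * error
```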

These golden years also brought us early advances such as the STUDENT program, which solved high school algebra word problems, and the ELIZA chatbot. By 1963, the advances in the field had convinced the newly formed Advanced Research Projects Agency (ARPA, now DARPA) to begin funding AI research at MIT.

By the late 1960s, funding in the US and the UK began to dry up. In 1969, a book named Perceptrons by MIT's Marvin Minsky and Seymour Papert (https://archive.org/details/Perceptrons) proved that single-layer networks of this kind can compute only linearly separable functions; famously, they cannot represent XOR. The authors went so far as to suggest that Rosenblatt had greatly exaggerated his findings and the importance of the perceptron. Their conclusion that perceptrons were of limited use to the field effectively halted research into network structures.
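
The limitation is easy to demonstrate with the perceptron sketch from earlier: XOR is not linearly separable, so no choice of weights and bias classifies all four points correctly, and training never converges:

```python
# Continuing from the Perceptron sketch above
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels

p = Perceptron(n_inputs=2)
p.fit(X, y, epochs=100)
# At least one point is always misclassified, no matter how long we train:
print([p.predict(xi) for xi in X])  # never equals [0, 1, 1, 0]
```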

With both governments releasing reports that sharply criticized the usefulness of AI, the field was plunged into what has become known as the AI winter. AI research continued throughout the late 1960s and 1970s, mostly under different terminology. The terms machine learning, knowledge-based system, and pattern recognition all come from this period, when researchers had to think up creative names for their work in order to receive funding. Around this time, however, a student at the University of Cambridge named Geoffrey Hinton began exploring ANNs and how we could utilize them to mimic the brain's memory functions. We'll talk a lot more about Hinton in the following sections and throughout this book, as he has become one of the most important figures in AI today.
