Summary
You learned a lot in this chapter; we first discussed ANNs. ANNs are built from neurons arranged in multiple layers. In a fully connected network, each neuron in one layer receives input from every neuron in the previous layer, and each layer applies an activation function—a function that decides how much of each signal is passed on to the next layer.
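As a quick refresher, the forward pass through two fully connected layers can be sketched like this (a minimal NumPy sketch; the layer sizes and ReLU activation are illustrative choices, not specific to any network from the chapter):

```python
import numpy as np

def relu(z):
    # Activation function: lets positive signals through, blocks negative ones
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2):
    # Each layer: weighted sum over all previous-layer outputs, then activation
    h = relu(W1 @ x + b1)   # hidden layer
    y = relu(W2 @ h + b2)   # output layer
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=3)                         # 3 input features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # output layer: 2 neurons
out = forward(x, W1, b1, W2, b2)
print(out.shape)
```

Each weight matrix row holds one neuron's connections to the entire previous layer, which is exactly the "every neuron connected to every neuron" structure described above.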
The step in which an ANN computes its prediction is called forward-propagation, and the step in which it learns is called back-propagation. There are three main variants of gradient descent used during training: batch gradient descent, stochastic gradient descent, and mini-batch gradient descent, which combines the advantages of the other two and is the most widely used.
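To make the mini-batch idea concrete, here is a small sketch that fits a linear model by shuffling the data each epoch and updating the weights on one small batch at a time (the data, learning rate, and batch size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -3.0])
y = X @ true_w + 0.1 * rng.normal(size=200)  # noisy targets

w = np.zeros(2)
lr, batch_size = 0.1, 32
for epoch in range(50):
    idx = rng.permutation(len(X))            # shuffle, like SGD
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]    # one mini-batch
        # Gradient of mean squared error on the batch only
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad
print(w)  # close to [2, -3]
```

Using a batch of 32 rather than the full dataset (batch gradient descent) or a single sample (stochastic gradient descent) gives updates that are both frequent and reasonably stable, which is exactly the trade-off mentioned above.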
The last thing we covered in this chapter was deep Q-learning. This method uses a neural network to predict the Q-values of taking each possible action in a given state. We also introduced the experience replay memory, which stores a large buffer of past transitions so the AI can learn from them repeatedly.
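A replay memory can be sketched in a few lines with a fixed-capacity deque (a minimal sketch; the `ReplayMemory` class name, capacity, and transition fields are illustrative assumptions, not the book's exact implementation):

```python
import random
from collections import deque

class ReplayMemory:
    def __init__(self, capacity):
        # A deque with maxlen discards the oldest transition once full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one transition of experience
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the correlation
        # between consecutive steps of the same episode
        return random.sample(self.buffer, batch_size)

memory = ReplayMemory(capacity=1000)
for t in range(100):
    memory.push(t, 0, 1.0, t + 1, False)  # dummy transitions
batch = memory.sample(8)
print(len(batch))  # 8
```

During training, the network is updated on these randomly sampled mini-batches rather than on the most recent step alone, which is what makes the stored experience reusable.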
In the next chapter, you'll put all of this into...