The basics of Recurrent Neural Networks
RNNs are another popular type of model that is currently gaining a lot of traction. As we discussed in Chapter 1, Introduction to Artificial Intelligence, the study of neural networks in general, and of RNNs in particular, is the domain of the connectionist tribe (in Pedro Domingos' classification of AI approaches). RNNs are frequently used to tackle Natural Language Processing (NLP) and Natural Language Understanding (NLU) problems.
The math behind RNNs can be overwhelming at times. Before we get into the nitty-gritty of RNNs, keep this thought in mind: a race car driver does not need to fully understand the mechanics of their car to make it go fast and win races. Similarly, we don't necessarily need to fully understand how RNNs work under the hood to make them do useful, and sometimes impressive, work for us. François Chollet, the creator of the Keras library, describes Long Short-Term Memory (LSTM) networks – which are a form...
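Even without the full math, the core idea of an RNN fits in a few lines: at each time step, the hidden state is computed from the current input and the previous hidden state, so the network carries information forward through the sequence. The following is a minimal NumPy sketch of a single recurrent cell; the layer sizes, random weights, and function names here are arbitrary illustrative choices, not anything prescribed by a particular library:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 3

# Hypothetical, randomly initialized parameters of one recurrent cell
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1   # input-to-hidden
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden-to-hidden
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    # The defining recurrence: h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b)
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Run the cell over a toy sequence of 5 time steps
h = np.zeros(hidden_size)
sequence = rng.standard_normal((5, input_size))
for x_t in sequence:
    h = rnn_step(x_t, h)  # the same weights are reused at every step

print(h.shape)  # → (3,)
```

The point to notice is that the same weight matrices are applied at every time step; only the hidden state changes. LSTMs, mentioned above, replace this simple `tanh` update with a gated cell that is better at preserving information over long sequences.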