In this section, we learned how to create intelligent assistants by using word embeddings and ANNs. Word embedding techniques are the cornerstone of AI applications for natural language. They allow us to encode natural language as numerical vectors that we can feed into downstream models and tasks.
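To make this concrete, here is a minimal sketch of the idea: each word maps to a vector, and geometric operations such as cosine similarity let downstream models compare meanings numerically. The vectors and words below are hand-picked toy values, not learned embeddings; real systems learn hundreds of dimensions from data with methods like word2vec or GloVe.

```python
import math

# Toy 4-dimensional word embeddings (hand-picked for illustration;
# real embeddings are learned from large corpora).
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.1, 0.8, 0.2],
    "apple": [0.1, 0.2, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: a standard way to compare embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related words end up with more similar vectors than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # higher
print(cosine(embeddings["king"], embeddings["apple"]))  # lower
```

Because the words are now vectors, any downstream model that consumes numbers can consume language.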
Intelligent agents take these word embeddings and reason over them. They utilize two RNNs, an encoder and a decoder, in what is called a Seq2Seq model. If you cast your mind back to the chapter on recurrent neural networks, the first RNN in the Seq2Seq model encodes the input into a compressed representation, while the second network draws on that compressed representation to generate output sentences. In this way, an intelligent agent learns to respond to a user based on a representation of what it learned during the training process.
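The data flow described above can be sketched in a few lines of plain Python. This is a structural illustration only: the weights are random and untrained, so the output tokens are meaningless, but the shape of the computation is the same as in a real Seq2Seq model, where the encoder RNN folds the input into one hidden state and the decoder RNN generates tokens from that state.

```python
import math, random

random.seed(0)
H = 4        # hidden size (tiny, for illustration)
VOCAB = 5    # toy vocabulary size

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def one_hot(tok):
    v = [0.0] * VOCAB
    v[tok] = 1.0
    return v

def step(W_in, W_h, h, x):
    """One RNN step: h' = tanh(W_in . x + W_h . h)."""
    return [math.tanh(sum(W_in[i][j] * x[j] for j in range(VOCAB)) +
                      sum(W_h[i][j] * h[j] for j in range(H)))
            for i in range(H)]

# Untrained random weights: this shows the encoder/decoder wiring,
# not a model that produces meaningful replies.
enc_in, enc_h = rand_matrix(H, VOCAB), rand_matrix(H, H)
dec_in, dec_h = rand_matrix(H, VOCAB), rand_matrix(H, H)
dec_out = rand_matrix(VOCAB, H)  # projects hidden state to vocabulary scores

def encode(tokens):
    """Encoder RNN: compress the whole input into one hidden state."""
    h = [0.0] * H
    for tok in tokens:
        h = step(enc_in, enc_h, h, one_hot(tok))
    return h

def decode(h, start_tok=0, max_len=3):
    """Decoder RNN: greedily generate tokens from the encoder's state."""
    out, tok = [], start_tok
    for _ in range(max_len):
        h = step(dec_in, dec_h, h, one_hot(tok))
        scores = [sum(dec_out[i][j] * h[j] for j in range(H)) for i in range(VOCAB)]
        tok = max(range(VOCAB), key=scores.__getitem__)
        out.append(tok)
    return out

context = encode([1, 2, 3])  # the user's input, as token ids
print(decode(context))       # the agent's reply, as token ids
```

In a trained model, the same loop runs with learned weights, and the compressed context vector is what lets the decoder respond based on what the network absorbed during training.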
In the next chapter, we'...