We've taken a whirlwind tour of question answering as a natural language understanding problem and learned how to build a generic memory network model for QA tasks. We then framed conversation modelling as a QA task and extended the memory network to train a goal-oriented chatbot.
We built a simple retrieval-based chatbot that helps users book restaurants according to their preferences. Readers who wish to explore further could try more sophisticated attention mechanisms, more powerful sentence representation encoders, or generative models in place of retrieval.
In the next chapter, we shall cover language translation using encoder-decoder models and introduce more complicated attention mechanisms for sequence alignment.