Summary
That's it! You've made it to the end of this exhaustive chapter, and to the end of this book!
In this chapter, we designed an end-to-end chatbot NLU pipeline. As a first task, we explored our dataset, collecting linguistic information about the utterances and understanding the slot types and their corresponding values. Next, we tackled a key chatbot NLU task, entity extraction: we extracted several entity types, such as city, date/time, and cuisine, with the spaCy NER model as well as the Matcher. We then moved on to another traditional chatbot NLU pipeline task – intent recognition – by training a character-level LSTM model with TensorFlow and Keras.
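To recall the gist of entity extraction, a minimal sketch along the following lines combines spaCy's pretrained NER (which already covers city and date/time entities) with a Matcher pattern for cuisine. The pipeline name, example utterances, and cuisine list here are illustrative placeholders, not the chapter's actual dataset:

```python
import spacy
from spacy.matcher import Matcher

# Pretrained English pipeline; assumes en_core_web_md is installed.
nlp = spacy.load("en_core_web_md")

# Built-in NER covers cities (GPE) and date/time (DATE, TIME) out of the box.
doc = nlp("Book me a table in Berlin tomorrow at 8 pm")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Berlin GPE, tomorrow DATE, 8 pm TIME

# Cuisine is not a built-in label, so a Matcher with a keyword list fills the gap.
matcher = Matcher(nlp.vocab)
cuisines = ["italian", "chinese", "mexican", "indian"]   # illustrative list
matcher.add("CUISINE", [[{"LOWER": {"IN": cuisines}}]])

doc = nlp("Find me a cheap Italian restaurant nearby")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)    # Italian
```

Intent recognition followed the same spirit: encode each utterance as a sequence of characters and feed it to an LSTM classifier. The tiny dataset and hyperparameters below are placeholders for illustration only, not the chapter's real training setup:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy utterances and intent labels (0 = book_table, 1 = find_restaurant); illustrative only.
utterances = ["book a table for two", "what italian places are nearby"]
labels = np.array([0, 1])

# Character-level encoding: map each character to an integer id, pad to a fixed length.
chars = sorted({c for text in utterances for c in text})
char_to_id = {c: i + 1 for i, c in enumerate(chars)}   # 0 is reserved for padding
max_len = 40

def encode(text):
    ids = [char_to_id.get(c, 0) for c in text[:max_len]]
    return ids + [0] * (max_len - len(ids))

x = np.array([encode(t) for t in utterances])

# Character-level LSTM intent classifier.
model = tf.keras.Sequential([
    layers.Embedding(input_dim=len(char_to_id) + 1, output_dim=32, mask_zero=True),
    layers.LSTM(64),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, labels, epochs=3, verbose=0)
```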
In the last section, we dived into sentence-level and dialog-level semantics. We worked on sentence syntax by differentiating subjects from objects, then learned about sentence types, and finally covered the linguistic concept of anaphora resolution. We applied what we learned in...
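As a quick reminder of how subjects were told apart from objects, spaCy's dependency labels make the distinction directly. The sentence below is just an illustrative example, not one of the chapter's utterances:

```python
import spacy

nlp = spacy.load("en_core_web_md")
doc = nlp("She ordered the pasta because her friend recommended it")

# Dependency labels separate subjects (nsubj, nsubjpass) from objects (dobj, pobj).
for token in doc:
    if token.dep_ in ("nsubj", "nsubjpass"):
        print("subject:", token.text)
    elif token.dep_ in ("dobj", "pobj"):
        print("object:", token.text)
```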