In the previous chapters, we learned about RNN applications with multiple inputs (one per time step) and a single output. However, there are also applications with multiple inputs and multiple outputs. Machine translation is one example: a source sentence contains multiple input words, and the target sentence contains multiple output words. Given the multiple inputs and multiple outputs, this becomes a multi-output RNN-based application, which is essentially a sequence-to-sequence learning task. This calls for building our model architecture differently from what we have built so far, which is what this chapter covers. In this chapter, we are going to learn about the following:
- Returning sequences from a network
- How a bidirectional LSTM helps in named entity extraction
- Extract intent and entities...
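To give a first taste of the many-to-many setup, the sketch below (assuming TensorFlow's Keras API; the layer sizes and dimensions are illustrative choices, not from the original text) shows how setting `return_sequences=True` makes an LSTM emit one output per time step rather than a single final output, which is the building block for the sequence-returning networks discussed in this chapter:

```python
# Minimal sketch: a many-to-many RNN that produces one prediction
# per time step. Assumes TensorFlow/Keras is installed; the
# dimensions below are arbitrary for illustration.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, TimeDistributed, Dense

timesteps, input_dim, n_classes = 10, 8, 5  # illustrative sizes

model = Sequential([
    Input(shape=(timesteps, input_dim)),
    # return_sequences=True: output the hidden state at every
    # time step, not just the last one
    LSTM(32, return_sequences=True),
    # Apply the same Dense classifier independently at each step
    TimeDistributed(Dense(n_classes, activation="softmax")),
])

# One output distribution per time step: (batch, timesteps, n_classes)
print(model.output_shape)
```

With `return_sequences=False` (the default), the same LSTM would collapse the sequence to a single vector, giving the many-to-one behavior of the previous chapters.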