Introducing sequence-to-sequence learning
Many problems in machine learning involve transforming an input sequence into an output sequence. Sequence-to-sequence (seq2seq) learning has proven useful in applications that demand this kind of transformation: free-form question answering (generating a natural-language answer to a natural-language question), text summarization, conversational interfaces such as chatbots, and so on. Unsurprisingly, machine translation (MT) applications can also exploit this technique to convert a source sequence, such as an English phrase, into the corresponding target sequence, such as an Arabic translation. Seq2seq learning, pronounced "seek-to-seek," falls under the category of neural MT; unlike solutions based on rule-based MT (RBMT) and statistical MT (SMT), it requires no domain knowledge of the languages involved. You can treat the translation problem as learning the association between input and output sequences of word or character tokens. Moreover, the translation...
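To make the idea concrete, here is a minimal sketch of a seq2seq encoder-decoder model. This is an illustrative assumption, not the chapter's implementation: the framework (Keras), the vocabulary sizes, and the state dimension are all placeholders chosen for the example. The encoder compresses the source sequence into a fixed-size state, and the decoder generates the target sequence conditioned on that state.

```python
# A minimal encoder-decoder sketch in Keras. The vocabulary sizes and
# latent dimension below are hypothetical placeholders; inputs are
# assumed to be one-hot encoded token sequences.
from tensorflow import keras
from tensorflow.keras import layers

num_source_tokens = 64   # hypothetical source vocabulary size
num_target_tokens = 64   # hypothetical target vocabulary size
latent_dim = 256         # size of the LSTM hidden state

# Encoder: read the source sequence and keep only its final state.
encoder_inputs = keras.Input(shape=(None, num_source_tokens))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: generate the target sequence, starting from the encoder's state.
decoder_inputs = keras.Input(shape=(None, num_target_tokens))
decoder_outputs = layers.LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(num_target_tokens, activation="softmax")(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
model.summary()
```

Note that nothing in this sketch encodes knowledge of English or Arabic grammar; the model learns the source-to-target mapping purely from paired token sequences, which is exactly what distinguishes neural MT from RBMT and SMT approaches.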