Summary
In this chapter, we departed from the standard presentation flow adopted in the previous chapters, where we performed exploratory data analysis, created machine learning models, and evaluated their performance. Instead, the content unfolded by following the historical evolution of MT systems so that you could become acquainted with basic NLP techniques that apply to a wide range of tasks. For example, POS tagging and NER are typical methods for categorizing words in a sentence. Similarly, different grammars can be used either to parse an input phrase or to generate an output sentence.
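As a quick reminder of what these categorization techniques look like in practice, the following is a minimal sketch using spaCy and its small English model; this is an illustrative assumption, since the chapter may have relied on different tooling, and the example sentence is made up.

```python
# Minimal POS tagging and NER sketch (assumes: pip install spacy and
# python -m spacy download en_core_web_sm; not the chapter's exact setup).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple acquired the London startup for one billion dollars.")

# POS tagging: one coarse-grained part-of-speech tag per token.
print([(token.text, token.pos_) for token in doc])

# NER: labeled spans such as ORG, GPE, and MONEY.
print([(ent.text, ent.label_) for ent in doc.ents])
```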
We contrasted two fundamental approaches to building MT applications: the first relies on human knowledge to derive translation rules, whereas in the second, data is the driving force behind model creation. Finally, an in-depth presentation of seq2seq models revealed their power to efficiently convert a source sequence into a target sequence.
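To recap the core idea behind seq2seq, here is a minimal encoder-decoder sketch in PyTorch: the encoder compresses the source sequence into a context vector, which conditions the decoder as it produces the target sequence. The vocabulary sizes, hidden dimensions, and random token IDs are illustrative assumptions, not values from the chapter.

```python
# Minimal encoder-decoder (seq2seq) sketch; hyperparameters are arbitrary.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, src_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                        # src: (batch, src_len)
        _, hidden = self.rnn(self.embed(src))      # hidden: (1, batch, hid_dim)
        return hidden                              # context vector


class Decoder(nn.Module):
    def __init__(self, tgt_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(tgt_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, tgt, hidden):                # tgt: (batch, tgt_len)
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden            # logits over target vocab


# Toy forward pass with random token IDs (teacher forcing on the target side).
src = torch.randint(0, 100, (2, 7))                # two source "sentences"
tgt = torch.randint(0, 120, (2, 5))                # two target "sentences"
encoder, decoder = Encoder(100), Decoder(120)
logits, _ = decoder(tgt, encoder(src))
print(logits.shape)                                # torch.Size([2, 5, 120])
```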
In the final...