Thus far, much of our work has been with images. Images are a rewarding place to start because results appear quickly and progress is easy to see. However, machine learning is a broader field, and the next several chapters explore other areas of it, beginning with sequence-to-sequence models. The results are just as striking, though the setup is more involved and the training datasets are much larger.
In this chapter, we will cover the following topics:
- Understanding how sequence-to-sequence models work
- Understanding the setup required to feed a sequence-to-sequence model
- Writing an English-to-French translator using sequence-to-sequence models
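Before diving in, it may help to preview the kind of data setup the second topic refers to. A sequence-to-sequence model is typically trained on sentence pairs in which the target sentence is wrapped in start/end markers and the decoder sees the target shifted by one position. The tiny corpus, marker names, and helper functions below are hypothetical illustrations, not the chapter's actual pipeline:

```python
# A minimal sketch of sequence-to-sequence data preparation:
# wrap each target sentence in start/end tokens and give each
# language its own integer vocabulary.
# The three sentence pairs below are a made-up toy corpus.
pairs = [
    ("hello", "bonjour"),
    ("thank you", "merci"),
    ("good night", "bonne nuit"),
]

START, END = "<start>", "<end>"

def build_vocab(sentences):
    """Map every distinct word to an integer id, reserving 0 for padding."""
    words = sorted({w for s in sentences for w in s.split()})
    return {w: i + 1 for i, w in enumerate(words)}

eng_vocab = build_vocab(p[0] for p in pairs)
fra_vocab = build_vocab(f"{START} {p[1]} {END}" for p in pairs)

def encode(sentence, vocab):
    """Turn a sentence into a list of integer ids."""
    return [vocab[w] for w in sentence.split()]

# Encoder input: the source sentence.
# Decoder input: the target shifted right (begins with <start>).
# Decoder target: the target shifted left (ends with <end>).
for eng, fra in pairs:
    enc_in = encode(eng, eng_vocab)
    dec_in = encode(f"{START} {fra}", fra_vocab)
    dec_out = encode(f"{fra} {END}", fra_vocab)
    print(enc_in, dec_in, dec_out)
```

The shift between decoder input and decoder target is what lets the model learn to predict the next word at every position; we will see this pattern again when building the translator.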