In this recipe, we'll implement a transformer network from scratch and train it on an English-to-German translation task. In the How it works... section, we'll go through many of the details.
Getting ready
We recommend using a machine with a GPU. The Colab environment is highly recommended; however, please make sure you are using a runtime with a GPU enabled. To check that you have access to a GPU, you can call the NVIDIA System Management Interface:
!nvidia-smi
You should see something like this:
Tesla T4: 0MiB / 15079MiB
This tells you that you are using an NVIDIA Tesla T4 with 0 MiB of roughly 15 GB of GPU memory in use (1 MiB corresponds to approximately 1.049 MB).
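Alternatively, you can check GPU availability from Python. The following is a minimal sketch using PyTorch (preinstalled on Colab); it selects the GPU when one is available and falls back to the CPU otherwise:

```python
import torch

# Pick CUDA if a GPU runtime is active; otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

if device.type == "cuda":
    # Name of the first GPU, for example "Tesla T4" on a Colab runtime
    print(torch.cuda.get_device_name(0))
```

Keeping a `device` variable like this also makes it easy to move models and tensors onto the GPU later with `.to(device)`.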
We'll need a relatively new version of torchtext, a library of text datasets and utilities for PyTorch:
!pip install torchtext==0.7.0
For the part in the There's more... section, you might need to install an additional dependency:
!pip install...