Implementing GPUs in code
PyTorch, among other frameworks, manages GPUs. Like TensorFlow, PyTorch is built around tensors. A tensor may look like a NumPy np.array(). However, NumPy arrays are not designed for parallel processing, whereas tensors can exploit the parallel processing features of GPUs.
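As a minimal sketch of this difference, the snippet below builds the same data as a NumPy array and as a PyTorch tensor, then moves the tensor to a GPU when one is available (falling back to the CPU otherwise):

```python
import numpy as np
import torch

# The same data as a NumPy array and as a PyTorch tensor
a = np.array([[1.0, 2.0], [3.0, 4.0]])
t = torch.from_numpy(a)  # the tensor shares memory with the NumPy array

# Unlike the NumPy array, the tensor can be moved to a GPU,
# where operations run in parallel across the GPU's cores
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t = t.to(device)

# The matrix product runs on whichever device holds the tensor
result = t @ t
print(result.device)
```

The NumPy array stays on the CPU; only the tensor can be dispatched to the GPU with .to(device).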
Tensors also open the door to distributing data across GPUs in PyTorch, among other frameworks: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
In the Chapter03 notebook, BERT_Fine_Tuning_Sentence_Classification_GPU.ipynb, we used CUDA (Compute Unified Device Architecture) to communicate with NVIDIA GPUs. CUDA is an NVIDIA platform for general-purpose computing on GPUs. CUDA-specific instructions can be added to our source code. For more, see https://developer.nvidia.com/cuda-zone.
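Before issuing CUDA instructions, it is common to query the platform from PyTorch. A short sketch using PyTorch's torch.cuda API:

```python
import torch

# Query the CUDA platform through PyTorch before committing to a GPU
if torch.cuda.is_available():
    print("CUDA devices found:", torch.cuda.device_count())
    print("Current device:", torch.cuda.get_device_name(0))
else:
    print("No CUDA-capable GPU found; falling back to the CPU")
```

On a machine without an NVIDIA GPU, the same code simply reports that no CUDA device is available, so the check costs nothing.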
In the Chapter03 notebook, we used CUDA instructions to transfer our model and data to NVIDIA GPUs. PyTorch provides an instruction to specify the device we wish to use: torch.device.
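A minimal sketch of this pattern follows, using a small nn.Linear stand-in rather than the chapter's BERT model: torch.device selects the target, and .to(device) transfers the model's parameters, after which the input tensors must be moved to the same device.

```python
import torch
import torch.nn as nn

# Select the device: GPU if CUDA is available, otherwise CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small stand-in model (the chapter notebook uses a BERT model instead)
model = nn.Linear(8, 2)

# .to(device) transfers the model's parameters; inputs must follow it
model = model.to(device)
inputs = torch.randn(4, 8).to(device)

outputs = model(inputs)
print(outputs.shape)  # torch.Size([4, 2])
```

The same two calls, model.to(device) and tensor.to(device), are what the notebook applies to the BERT model and its input batches.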
For more, see https://pytorch.org/docs...