Training models on any hardware using PyTorch Lightning
PyTorch Lightning (https://github.com/PyTorchLightning/pytorch-lightning) is another library built on top of PyTorch that abstracts away the boilerplate code needed for model training and evaluation. A distinguishing feature of this library is that model training code written with PyTorch Lightning can run without changes on any hardware configuration, such as multiple CPUs, multiple GPUs, or even multiple TPUs.
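In practice, switching hardware typically amounts to changing the Trainer arguments, not the model code. The following is a minimal sketch; the exact argument names (accelerator and devices here, or gpus/tpu_cores in older releases) vary across PyTorch Lightning versions:

import pytorch_lightning as pl

# The model and data code stay identical across hardware; only the
# Trainer arguments change (names shown follow the Lightning >= 1.6 API):
trainer = pl.Trainer(max_epochs=10)  # CPU training
# trainer = pl.Trainer(accelerator="gpu", devices=2, max_epochs=10)  # 2 GPUs
# trainer = pl.Trainer(accelerator="tpu", devices=8, max_epochs=10)  # 8 TPU cores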
In the following exercise, we will train and evaluate a handwritten digit classification model using PyTorch Lightning on CPUs; the same code works for training on GPUs or TPUs. The full code for this exercise can be found here: https://github.com/PacktPublishing/Mastering-PyTorch/blob/master/Chapter14/pytorch_lightning.ipynb.
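Before defining the model, we need the data. Below is a minimal sketch of the MNIST loaders; the batch size and normalization constants are illustrative assumptions, and the notebook linked above may use different values:

import torch
from torchvision import datasets, transforms

# Commonly used MNIST mean/std normalization (an assumption; the
# notebook may normalize differently).
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transform),
    batch_size=32, shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST("data", train=False, transform=transform),
    batch_size=32,
)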
Defining the model components in PyTorch Lightning
In this part of the exercise, we will demonstrate how to initialize the model
class in PyTorch Lightning. This...
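The notebook linked earlier contains the full class; as a rough sketch (not the book's exact code), a LightningModule for digit classification bundles the architecture, the training-step logic, and the optimizer in one class:

import torch
from torch import nn
from torch.nn import functional as F
import pytorch_lightning as pl

class ConvNet(pl.LightningModule):
    # A minimal handwritten digit classifier for 28x28 grayscale inputs.
    def __init__(self):
        super().__init__()
        self.cn1 = nn.Conv2d(1, 16, 3, padding=1)
        self.cn2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc1 = nn.Linear(32 * 7 * 7, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.cn1(x)), 2)   # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.cn2(x)), 2)   # 14x14 -> 7x7
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

    def training_step(self, batch, batch_idx):
        # Called by the Trainer for every batch; returning the loss is
        # enough -- backpropagation, optimizer stepping, and device
        # placement are handled by Lightning.
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

With the class defined, CPU training reduces to instantiating the model and handing it to a Trainer:

model = ConvNet()
trainer = pl.Trainer(max_epochs=10)  # add accelerator/devices for GPUs or TPUs
trainer.fit(model, train_loader)     # train_loader as sketched earlier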