Summary
In this final chapter of the book, we focused on abstracting away the noisy details involved in model training code and on the core components that facilitate rapid prototyping of models. Because PyTorch code can often be cluttered with such boilerplate, we looked at some of the high-level libraries built on top of PyTorch.
First, we explored fast.ai, which enables PyTorch models to be trained in fewer than 10 lines of code. As an exercise, we demonstrated its effectiveness by training a handwritten digit classification model with fast.ai: one of its modules loaded the dataset, another trained and evaluated the model, and a third interpreted the trained model's behavior.
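As a reminder, a minimal fastai-style workflow of this kind can be sketched as follows. The MNIST_SAMPLE dataset, the resnet18 backbone, and the single fine-tuning epoch are illustrative assumptions, not necessarily the exact choices used in the chapter's exercise:

```python
from fastai.vision.all import *

# Load a handwritten digit dataset (MNIST_SAMPLE is a small bundled subset)
path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path, train='train', valid='valid')

# Train and evaluate a classifier built on a pretrained backbone
learn = vision_learner(dls, resnet18, metrics=accuracy)  # cnn_learner in older fastai releases
learn.fine_tune(1)

# Interpret the trained model's behavior
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```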
Next, we looked at PyTorch Lightning, another high-level library built on top of PyTorch. We worked through a similar exercise of training a handwritten digit classifier. We demonstrated the code layout...
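In the same spirit, a hedged sketch of a PyTorch Lightning version of that exercise is shown below; the network architecture, optimizer, and batch size here are assumptions for illustration rather than the exact configuration from the chapter:

```python
import torch
from torch import nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class DigitClassifier(pl.LightningModule):
    """Minimal handwritten digit classifier expressed as a LightningModule."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, 10),
        )
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        self.log("train_loss", loss)  # Lightning handles the logging boilerplate
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# The Trainer abstracts away device placement and the training-loop boilerplate
train_ds = datasets.MNIST(".", train=True, download=True,
                          transform=transforms.ToTensor())
trainer = pl.Trainer(max_epochs=1)
trainer.fit(DigitClassifier(), DataLoader(train_ds, batch_size=64))
```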