Summary
At this point in the book, and in your project, you should have a fully functional data loader built, tested, and optimized on both your local notebook and your SageMaker training instances. You should have your entire dataset identified, downloaded, processed, and ready to run through your training loop. You should have done at least one full pass through your training loop with a tiny sample of your dataset – something as small as 100 samples would be fine. You should have identified how you want to send your large dataset to your SageMaker training instances, possibly by using FSx for Lustre, and you should have this built, tested, and operational. You should also know a few other ways to store and process data on AWS.
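For reference, a minimal smoke test of that kind might look like the sketch below, assuming a PyTorch-style Dataset and DataLoader. The TensorDataset stand-in, the batch size, and the tensor shapes are all illustrative placeholders, not details from your actual project; the point is simply to run roughly 100 samples end to end and confirm that batches come out with the shapes and types your training loop expects.

```python
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Stand-in dataset; in your project this would be your own Dataset class.
full_dataset = TensorDataset(
    torch.randn(10_000, 16), torch.randint(0, 2, (10_000,))
)

# Take a tiny slice -- roughly 100 samples -- for a fast end-to-end pass.
tiny = Subset(full_dataset, range(100))
loader = DataLoader(tiny, batch_size=8, shuffle=True, num_workers=2)

# One full pass through the loader to confirm shapes and types
# before committing to a full training run.
for step, (features, labels) in enumerate(loader):
    assert features.shape[0] <= 8
    print(f"step {step}: features {tuple(features.shape)}, "
          f"labels {tuple(labels.shape)}")
```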
You should be very comfortable making architectural decisions that reduce your project costs, such as opting for CPU-based data downloading and processing, along with the Python multiprocessing package to easily farm your tasks out to all available cores.
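As a refresher, a hedged sketch of that pattern follows. The process_shard function and the shard file names are hypothetical placeholders for your own processing step; the only essential piece is that multiprocessing.Pool defaults to one worker per available CPU core, so the same code scales from your notebook to a larger CPU instance.

```python
import multiprocessing as mp

def process_shard(path):
    """Hypothetical per-shard step: read, clean, and tokenize one file."""
    # Your real processing logic would go here.
    return f"processed {path}"

if __name__ == "__main__":
    # Hypothetical shard file names standing in for your dataset.
    shard_paths = [f"data/shard_{i}.jsonl" for i in range(32)]

    # Pool() defaults to os.cpu_count() workers, farming the tasks
    # out to all available cores; imap_unordered yields results as
    # each worker finishes, rather than in submission order.
    with mp.Pool() as pool:
        for result in pool.imap_unordered(process_shard, shard_paths):
            print(result)
```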