Building and testing your own data loader – a case study from Stable Diffusion
The syntax for data loaders is bound to change over time, so I don’t want to rely too heavily on PyTorch’s current implementation. However, let me share one simple screenshot:
Figure 6.3 – Using data loaders in PyTorch
This is actually from my re:Invent demo on large-scale training in 2022, with Gal Oshri from SageMaker and Dan Padnos from AI21: https://medium.com/@emilywebber/how-i-trained-10tb-for-stable-diffusion-on-sagemaker-39dcea49ce32. Here, I’m training Stable Diffusion on 10 TB of data, using SageMaker and FSx for Lustre, which is a distributed file system built for high-performance computing. More on that and related optimizations later in the chapter!
As you can see, the only genuinely hard part here is building the input training dataset. Once you have a valid dataset object, getting a valid data loader is as simple...
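To make that concrete, here is a minimal sketch of the pattern: a map-style PyTorch `Dataset` (implementing `__len__` and `__getitem__`) wrapped in a `DataLoader`. The class name, sample count, and tensor shapes are hypothetical placeholders — in a real Stable Diffusion job, `__getitem__` would read and decode image–caption pairs from disk (for example, from an FSx for Lustre mount) rather than fabricate random tensors.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyImageCaptionDataset(Dataset):
    """Map-style dataset: implement __len__ and __getitem__.

    This is a self-contained sketch; a real dataset would index
    files on disk and decode them here.
    """
    def __init__(self, num_samples=32):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        image = torch.randn(3, 64, 64)               # stand-in for a decoded image
        caption_ids = torch.randint(0, 1000, (16,))  # stand-in for tokenized text
        return image, caption_ids

dataset = ToyImageCaptionDataset()
loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=0)

for images, captions in loader:
    # Each iteration yields one collated batch of samples.
    print(images.shape, captions.shape)
    break
```

Once the dataset object is correct, the `DataLoader` handles batching, shuffling, and (via `num_workers`) parallel loading for you.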