In this chapter, we've explored how to train your neural network with both small and large datasets. For smaller datasets, we looked at how you can quickly train a model by calling the train method on the loss function. For larger datasets, we looked at how you can use a MinibatchSource and a manual minibatch loop to train your network.
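To tie the two approaches together, here is a minimal sketch of what both training styles can look like in CNTK. The model, the random in-memory data, the file name training.ctf, and the stream names features and labels are placeholder assumptions rather than the chapter's actual listings; only the general pattern, calling train on the loss function for in-memory data and combining a MinibatchSource with a Trainer in a manual minibatch loop for chunked data, reflects what we covered.

import numpy as np

import cntk
from cntk import Trainer, input_variable
from cntk.io import (CTFDeserializer, INFINITELY_REPEAT, MinibatchSource,
                     StreamDef, StreamDefs)
from cntk.layers import Dense, Sequential
from cntk.learners import sgd
from cntk.logging import ProgressPrinter
from cntk.losses import cross_entropy_with_softmax

# Hypothetical model: four input features, three output classes.
features = input_variable(4)
labels = input_variable(3)

z = Sequential([Dense(8, activation=cntk.sigmoid), Dense(3)])(features)
loss = cross_entropy_with_softmax(z, labels)
learner = sgd(z.parameters, lr=0.1)
progress_writer = ProgressPrinter(0)

# 1. Small, in-memory dataset: hand numpy arrays straight to the train
#    method on the loss function (random data stands in for a real set).
X = np.random.rand(500, 4).astype(np.float32)
y = np.eye(3, dtype=np.float32)[np.random.randint(0, 3, 500)]

loss.train((X, y), parameter_learners=[learner],
           callbacks=[progress_writer], minibatch_size=32)

# 2. Large dataset on disk: read samples in chunks through a MinibatchSource
#    and feed them to a Trainer in a manual minibatch loop. 'training.ctf'
#    and the stream names are placeholders for your own data file.
deserializer = CTFDeserializer('training.ctf', StreamDefs(
    features=StreamDef(field='features', shape=4),
    labels=StreamDef(field='labels', shape=3)))

source = MinibatchSource(deserializer, randomize=True,
                         max_sweeps=INFINITELY_REPEAT)

trainer = Trainer(z, loss, [learner], [progress_writer])
input_map = {features: source.streams.features,
             labels: source.streams.labels}

for _ in range(1000):  # train on a fixed number of minibatches for brevity
    minibatch = source.next_minibatch(64, input_map=input_map)
    trainer.train_minibatch(minibatch)

The minibatch_size argument of the train method and the sample count passed to next_minibatch are the minibatch size settings mentioned in the next paragraph.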
Choosing the right training method can make a big difference in how long it takes to train your model and how good your model turns out to be. You can now make an informed choice between keeping data in memory and reading it in chunks. Make sure you experiment with the minibatch size setting to see what works best for your model.
Up to this point, we haven't looked at ways to monitor your model. We did see some code fragments that use a progress writer to help you visualize the training process. But...