Summary
This chapter presented some common practices for improving your model building and training processes. One of the most common challenges in handling training data is streaming or fetching it in an efficient, scalable manner. In this chapter, you have seen two methods for building such an ingestion pipeline: generators and datasets. Each has its own strengths and purposes: generators handle data transformation and batching well, while the dataset API is designed for cases where a TPU is the training target.
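The following is a minimal sketch contrasting the two ingestion styles; the file pattern, feature names, and image shape are placeholders rather than the chapter's actual dataset:

```python
import numpy as np
import tensorflow as tf

# 1) Generator style: plain Python code yields (features, labels) batches.
def batch_generator(num_batches=10, batch_size=32):
    for _ in range(num_batches):
        x = np.random.rand(batch_size, 28, 28, 1).astype("float32")
        y = np.random.randint(0, 10, size=(batch_size,))
        yield x, y

# The generator can be consumed directly by model.fit, or wrapped so it
# also benefits from tf.data's prefetching and distribution support.
gen_ds = tf.data.Dataset.from_generator(
    batch_generator,
    output_signature=(
        tf.TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32),
        tf.TensorSpec(shape=(None,), dtype=tf.int64),
    ),
)

# 2) Dataset (tf.data) style: a declarative pipeline whose batching,
# shuffling, and prefetching run inside the TF runtime, which is what makes
# it a good fit when a TPU is the training target.
file_pattern = "gs://my-bucket/train-*.tfrecord"  # hypothetical path

def parse_example(serialized):
    features = tf.io.parse_single_example(
        serialized,
        {
            "image": tf.io.FixedLenFeature([], tf.string),
            "label": tf.io.FixedLenFeature([], tf.int64),
        },
    )
    image = tf.io.decode_jpeg(features["image"], channels=1)
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize(image, (28, 28))
    return image, features["label"]

tfrecord_ds = (
    tf.data.Dataset.list_files(file_pattern)
    .interleave(tf.data.TFRecordDataset, num_parallel_calls=tf.data.AUTOTUNE)
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(1024)
    .batch(32, drop_remainder=True)  # fixed batch shapes, as TPUs require
    .prefetch(tf.data.AUTOTUNE)
)
```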
We also learned how to implement various regularization techniques: the traditional L1 and L2 regularization, as well as a more recent technique known as adversarial regularization, which is applicable to image classification. Adversarial regularization handles data transformation and augmentation on your behalf, saving you the effort of generating noisy images yourself. These new APIs and capabilities enhance TensorFlow Enterprise's...
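As a recap, here is a minimal sketch combining an L2 weight penalty with the Neural Structured Learning adversarial regularization wrapper; the model architecture, hyperparameters, and random training data are illustrative assumptions only:

```python
import numpy as np
import tensorflow as tf
import neural_structured_learning as nsl  # pip install neural-structured-learning

# Base model with an L2 penalty on the dense kernel weights
# (use tf.keras.regularizers.l1 or l1_l2 for the L1 variants).
inputs = tf.keras.Input(shape=(28, 28, 1), name="image")
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(
    128, activation="relu",
    kernel_regularizer=tf.keras.regularizers.l2(1e-4))(x)
outputs = tf.keras.layers.Dense(10)(x)
base_model = tf.keras.Model(inputs, outputs)

# Wrap the model with adversarial regularization: NSL perturbs each batch
# internally during training, so no noisy images need to be generated by hand.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(
    base_model, label_keys=["label"], adv_config=adv_config)

adv_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# Placeholder data; the wrapped model expects dict-style inputs whose keys
# match the input name ("image") and the label key ("label") above.
x_train = np.random.rand(256, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 10, size=(256,))
adv_model.fit({"image": x_train, "label": y_train}, batch_size=32, epochs=1)
```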