Summary
In this chapter, we explored where the flexibility of DL comes from. DL uses a network of mathematical neurons to learn the hidden patterns within a dataset. Training a network is an iterative process: model parameters are updated based on a training set, the model that performs best on a validation set is selected, and the goal is to achieve the best performance on a held-out test set.
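As a refresher, here is a minimal sketch of that train/validate/select loop in PyTorch. The synthetic data, toy model, and the evaluate helper are illustrative, not taken from the chapter:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic regression data, split into train / validation / test sets.
X, y = torch.randn(1000, 10), torch.randn(1000, 1)
train_ds = TensorDataset(X[:700], y[:700])
val_ds = TensorDataset(X[700:850], y[700:850])
test_ds = TensorDataset(X[850:], y[850:])

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

def evaluate(ds):
    # Average loss over a dataset, with gradients disabled.
    model.eval()
    with torch.no_grad():
        total = sum(loss_fn(model(xb), yb).item() * len(xb)
                    for xb, yb in DataLoader(ds, batch_size=64))
    return total / len(ds)

best_val, best_state = float("inf"), None
for epoch in range(20):
    model.train()
    for xb, yb in DataLoader(train_ds, batch_size=64, shuffle=True):
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()  # update parameters on the training set
        optimizer.step()
    val_loss = evaluate(val_ds)
    if val_loss < best_val:  # keep the model that does best on validation
        best_val = val_loss
        best_state = {k: v.clone() for k, v in model.state_dict().items()}

model.load_state_dict(best_state)
print("test loss:", evaluate(test_ds))  # final performance reported on the test set
```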
Recognizing how much of this training process is repeated across projects, engineers and researchers have packaged the common building blocks into frameworks. We described two of the most popular frameworks: PyTorch and TF. The two frameworks are structured in a similar way, letting users set up model training from three building blocks: data-loading logic, the model definition, and model-training logic (see the sketch below). As the final topic of the chapter, we decomposed StyleGAN, one of the most popular GAN implementations, to see how these building blocks are used in practice.
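To make the three building blocks concrete, the following is a minimal TF (Keras) sketch; the data and model here are illustrative placeholders, not code from the chapter:

```python
import numpy as np
import tensorflow as tf

# 1) Data-loading logic: a tf.data pipeline over synthetic data.
X = np.random.randn(1000, 10).astype("float32")
y = np.random.randn(1000, 1).astype("float32")
train_ds = tf.data.Dataset.from_tensor_slices((X[:800], y[:800])).shuffle(800).batch(64)
val_ds = tf.data.Dataset.from_tensor_slices((X[800:], y[800:])).batch(64)

# 2) Model definition: a small feed-forward network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])

# 3) Model-training logic: compile() wires up the optimizer and loss;
#    fit() runs the train/validation loop.
model.compile(optimizer="sgd", loss="mse")
model.fit(train_ds, validation_data=val_ds, epochs=20)
```

In PyTorch the same three blocks appear as Dataset/DataLoader objects, an nn.Module, and an explicit training loop like the one sketched earlier; Keras simply folds the loop into fit().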
As DL requires a...