Distributed deep learning models using TensorFlow 2.x – Multi-GPU training
Deep RL uses a deep neural network to represent the policy, the value function, or the model. For higher-dimensional observation/state spaces, for example, in the case of image or image-like observations, it is typical to use convolutional neural network (CNN) architectures. While CNNs are powerful and enable training Deep RL policies for vision-based control tasks, training deep CNNs takes a long time, especially in the RL setting, where the training data has to be generated through agent-environment interactions. This recipe will help you understand how to leverage TensorFlow 2.x’s distributed training APIs to train deep residual networks (ResNets) using multiple GPUs. The recipe provides configurable building blocks that you can reuse to build Deep RL components such as deep policy networks or value networks.
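For orientation, the following minimal sketch (not the full recipe implementation) shows how TensorFlow 2.x’s `tf.distribute.MirroredStrategy` can replicate a small residual network across all visible GPUs; the residual block, layer sizes, observation shape, and synthetic data are placeholder assumptions for illustration only:

```python
# Minimal sketch: multi-GPU training with tf.distribute.MirroredStrategy.
# The residual block, layer sizes, and synthetic data below are illustrative
# placeholders, not the exact configuration used in this recipe.
import numpy as np
import tensorflow as tf


def residual_block(x, filters):
    """A basic residual block: two conv layers plus a skip connection."""
    shortcut = x
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
    x = tf.keras.layers.Add()([shortcut, x])
    return tf.keras.layers.Activation("relu")(x)


# MirroredStrategy replicates the model on every visible GPU and
# averages the gradients across replicas after each step.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model variables must be created inside the strategy's scope.
    inputs = tf.keras.Input(shape=(84, 84, 3))  # image-like observation (assumed shape)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = residual_block(x, 32)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(4)(x)  # e.g., logits for 4 discrete actions
    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Synthetic data just to exercise the training loop;
# replace with real observations and labels.
x_train = np.random.rand(256, 84, 84, 3).astype("float32")
y_train = np.random.randint(0, 4, size=(256,))
model.fit(x_train, y_train, batch_size=64, epochs=1)
```

Note that the `batch_size` passed to `fit()` is the global batch size: MirroredStrategy splits each batch evenly across the replicas, so the per-GPU batch shrinks as you add GPUs unless you scale the global batch size accordingly.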
Let’s get started!
Getting ready
To complete this recipe, you will first need to activate the tf2rl-cookbook
Python/conda virtual environment. Make...