Using Multiple Executors
It should be apparent to the reader that many features of TensorFlow and computational graphs lend themselves naturally to parallel computation. The computational graph can be split across different processors, and different batches can be processed in parallel. In this recipe, we will address how to access different processors on the same machine.
Getting ready
For this recipe, we will show how to access multiple devices on the same system and train on them. This is a very common situation: along with a CPU, a machine may have one or more GPUs that can share the computational load. If TensorFlow can access these devices, it will automatically distribute computations across them via a greedy process. However, TensorFlow also allows the program to specify which operations run on which device via explicit device placement.
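As a minimal sketch of explicit device placement, assuming the TensorFlow 1.x graph API (tf.device, tf.Session, tf.ConfigProto), we can pin individual operations to devices; the device strings '/cpu:0' and '/gpu:0' are the conventional names and should be adjusted to whatever devices are actually present on the machine:

import tensorflow as tf

# Pin the constant creation to the CPU.
with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[5.0, 6.0], [7.0, 8.0]], name='b')

# Pin the matrix multiplication to the first GPU (if one exists).
with tf.device('/gpu:0'):
    c = tf.matmul(a, b, name='c')

# allow_soft_placement lets TensorFlow fall back to the CPU if the
# requested GPU is unavailable; log_device_placement prints where
# each operation actually ran.
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))

With log_device_placement enabled, the console output shows the device assigned to each operation, which makes it easy to verify that the placement requests were honored.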
In order to access GPU devices, the GPU version of TensorFlow must be installed. To install the GPU version of TensorFlow, visit https...
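Once installed, one way to confirm which devices TensorFlow can actually see is to list the local devices. This is a quick sketch assuming the device_lib utility shipped with TensorFlow 1.x; GPU entries only appear when the GPU build of TensorFlow and a compatible CUDA setup are in place:

from tensorflow.python.client import device_lib

# Print every device TensorFlow has registered on this machine.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)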