Playing with Distributed TensorFlow: multiple GPUs and one CPU
We will show an example of data parallelism, where the data is split across multiple GPUs.
Getting ready
This recipe is inspired by a good blog post written by Neil Tenenholtz, available online at https://clindatsci.com/blog/2017/5/31/distributed-tensorflow
How to do it...
We proceed with the recipe as follows:
- Consider this piece of code, which runs a matrix multiplication on a single GPU as a baseline (a data-parallel, multi-GPU sketch follows the listing).
# single GPU (baseline)
import tensorflow as tf
# place the initial data on the cpu
with tf.device('/cpu:0'):
    input_data = tf.Variable([[1., 2., 3.], [4., 5., 6.]])  # placeholder values
    b = tf.Variable([[1.], [1.], [2.]])
# run the multiplication on the first gpu
with tf.device('/gpu:0'):
    output = tf.matmul(input_data, b)
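Since the point of the recipe is data parallelism, the following is a minimal sketch of how the same multiplication could be sharded across two GPUs. This is an illustration under stated assumptions, not the exact listing from the recipe: the placeholder matrix values, the use of tf.split and tf.concat to shard and reassemble the rows, and the allow_soft_placement option are assumptions, and it presumes two visible GPU devices (/gpu:0 and /gpu:1).
# data parallelism sketch: shard the rows of the input across two GPUs
import tensorflow as tf

with tf.device('/cpu:0'):
    # the full input and the shared vector b live on the CPU (placeholder values)
    input_data = tf.Variable([[1., 2., 3.], [4., 5., 6.],
                              [7., 8., 9.], [10., 11., 12.]])
    b = tf.Variable([[1.], [1.], [2.]])
    # split the rows into two shards, one per GPU
    shards = tf.split(input_data, 2)

partial = []
for i, shard in enumerate(shards):
    with tf.device('/gpu:%d' % i):
        # each GPU multiplies its shard of rows by the shared vector
        partial.append(tf.matmul(shard, b))

with tf.device('/cpu:0'):
    # stitch the partial results back together on the CPU
    output = tf.concat(partial, axis=0)

# allow_soft_placement lets the graph still run if fewer GPUs are available
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(output))
Because the two matmul ops have no data dependency on each other, TensorFlow can schedule them on the two GPUs concurrently; only the concat at the end forces the partial results to be gathered back on the CPU.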