In this chapter, we will work on an Ubuntu 16.04 computer with an Nvidia Titan X GPU.
We suggest that you use Ubuntu 14.04 or 16.04 to avoid compatibility issues.
The choice of GPU is beyond the scope of this chapter. However, you should choose an Nvidia device with a large memory capacity in order to take full advantage of the GPU compared to a CPU. At the time of writing, AMD GPUs are not officially supported by TensorFlow or by most other deep learning frameworks. On Windows, TensorFlow with GPU support is available for Python 3.5 and Python 3.6. However, TensorFlow dropped GPU support on macOS starting from version 1.2. If you are using Windows, we suggest that you follow the official installation tutorial for Windows at the following link: https://www.tensorflow.org/install/install_windows.
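Once TensorFlow is installed, it is worth confirming that it can actually see the GPU before proceeding. The following is a minimal sketch, assuming a TensorFlow 1.x installation with GPU support, that lists the local devices and checks GPU availability:

```python
# Minimal sketch (assumes TensorFlow 1.x with GPU support is installed)
# to verify that TensorFlow can see the GPU before moving on.
import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TensorFlow can use; a working GPU setup shows a
# device of type 'GPU' alongside the CPU.
for device in device_lib.list_local_devices():
    print(device.device_type, device.name)

# Returns True only if a CUDA-enabled GPU is available and usable.
print('GPU available:', tf.test.is_gpu_available())
```

If no GPU device appears in the output, check that the Nvidia driver, CUDA, and cuDNN versions match the requirements of your TensorFlow version before continuing with the rest of the chapter.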