TensorFlow's architecture has several levels of abstraction. Let's start with the lowest layer and work our way up to the uppermost one:
TensorFlow's core, where most deep learning computations are implemented, is written in C++. To run operations on the GPU, TensorFlow relies on a library developed by NVIDIA called CUDA. This is why you need to install CUDA if you want to exploit GPU capabilities, and why GPUs from other hardware manufacturers cannot be used.
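As a quick, hedged illustration (not part of the original text), the following sketch shows one way to check whether TensorFlow can see a CUDA-enabled GPU, assuming a TensorFlow 2.x installation:

```python
import tensorflow as tf

# List the GPUs TensorFlow can access. An empty list usually means
# CUDA (and a compatible NVIDIA driver) is missing or misconfigured.
gpus = tf.config.list_physical_devices('GPU')
print(f"GPUs visible to TensorFlow: {gpus}")
```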
The Python low-level API then wraps the compiled C++ sources. When you call a Python method in TensorFlow, it usually invokes C++ code behind the scenes. This wrapper layer lets users work more quickly, because Python is easier to use than C++ and does not require compilation. The Python wrapper makes it possible to perform basic operations such as matrix multiplication and addition, as shown in the sketch below.
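To make this concrete, here is a minimal sketch of such low-level calls, assuming TensorFlow 2.x with eager execution; each Python call dispatches to a compiled C++ kernel (running on the GPU via CUDA when one is available):

```python
import tensorflow as tf

# Two small matrices defined as constant tensors.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

product = tf.matmul(a, b)  # matrix multiplication
total = tf.add(a, b)       # element-wise addition

print(product.numpy())
print(total.numpy())
```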
At the top sits the high-level API, made of two components—Keras...