Using XLA to enhance computational performance
Accelerated Linear Algebra (XLA) is a domain-specific compiler for linear algebra. According to https://www.tensorflow.org/performance/xla/, it is still experimental and is used to optimize TensorFlow computations. It can improve execution speed, memory usage, and portability on server and mobile platforms. XLA supports two modes of compilation: JIT (Just-In-Time) compilation and AoT (Ahead-of-Time) compilation. Using XLA, you can produce platform-specific binaries (for a large number of platforms, such as x64, ARM, and so on) that are optimized for both memory and speed.
Getting ready
At present, XLA is not included in the binary distributions of TensorFlow, so you need to build it from source. Building TensorFlow from source requires knowledge of LLVM and Bazel in addition to TensorFlow itself. TensorFlow.org supports building from source only on macOS and Ubuntu. The steps needed to build TensorFlow from source are as follows (https://www.tensorflow.org/install/install_sources):
- Determine which TensorFlow you want to install: TensorFlow with CPU support only, or TensorFlow with GPU support.
- Clone the TensorFlow repository:
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout Branch # where Branch is the desired branch, for example, r1.1
- Install the following dependencies:
- Bazel
- TensorFlow Python dependencies
- For the GPU version, the NVIDIA packages to support TensorFlow
- Configure the installation. In this step, you need to choose different options, such as XLA, CUDA support, Verbs, and so on:
./configure
- Next, run bazel build:
- For the CPU-only version, use:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
- If you have a compatible GPU device and you want GPU support, then use:
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
- On a successful run, you will get a script, build_pip_package.
- Run this script as follows to build the .whl file:
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
- Install the pip package:
sudo pip install /tmp/tensorflow_pkg/tensorflow-1.1.0-py2-none-any.whl
The exact name of the .whl file depends on the version you built, your platform, and your Python version.
Now you are ready to go.
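To verify the build, you can list the devices TensorFlow sees; an XLA-enabled build typically exposes XLA devices alongside the regular ones. The following minimal sketch uses the device_lib utility from tensorflow.python.client:
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__) # the version you just built

# An XLA-enabled build typically lists XLA_CPU (and XLA_GPU on GPU builds)
# in addition to the regular CPU/GPU devices.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)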
How to do it...
TensorFlow programs generate TensorFlow graphs. With the help of XLA, it is possible to run TensorFlow graphs on any new kind of device.
- JIT compilation: This is how to turn on JIT compilation at the session level:
import tensorflow as tf

# Config to turn on JIT compilation for the whole session
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)
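As a quick check, here is a minimal sketch that runs a small computation under the JIT-enabled session created above (the graph and feed values are purely illustrative):
import numpy as np

x = tf.placeholder(tf.float32, shape=(2, 2))
y = tf.matmul(x, x) # compiled by XLA because of the session-level setting
print(sess.run(y, feed_dict={x: np.ones((2, 2), dtype=np.float32)}))
# prints [[2. 2.]
#         [2. 2.]]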
- This is how to turn on JIT compilation manually, for a chosen part of the graph:
import numpy as np
import tensorflow as tf

jit_scope = tf.contrib.compiler.jit.experimental_jit_scope
x = tf.placeholder(np.float32)
with jit_scope():
    y = tf.add(x, x) # The "add" will be compiled with XLA.
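The scoped op is then evaluated like any other; a minimal sketch with an illustrative feed value:
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: np.array([1.5, 0.5], dtype=np.float32)}))
    # prints [3. 1.]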
- We can also run computations via XLA by placing the operator on a specific XLA device, XLA_CPU or XLA_GPU:
with tf.device("/job:localhost/replica:0/task:0/device:XLA_GPU:0"):
    output = tf.add(input1, input2)
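Here is a self-contained sketch of the same idea on the XLA_CPU device (the shapes and feed values are illustrative, and an XLA-enabled build is assumed):
import tensorflow as tf

input1 = tf.placeholder(tf.float32, shape=(2,))
input2 = tf.placeholder(tf.float32, shape=(2,))
with tf.device("/job:localhost/replica:0/task:0/device:XLA_CPU:0"):
    output = tf.add(input1, input2)

with tf.Session() as sess:
    print(sess.run(output, feed_dict={input1: [1.0, 2.0],
                                      input2: [3.0, 4.0]}))
    # prints [4. 6.]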
- AoT compilation: Here, we use tfcompile as a standalone tool to compile TensorFlow graphs ahead of time into executable code for different (for example, mobile) devices.
For the detailed steps, refer to the tfcompile documentation at https://www.tensorflow.org/performance/xla/tfcompile.
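To give a flavor of the AoT workflow, the following Bazel BUILD snippet follows the tf_library pattern from the tfcompile tutorial; the target name, graph file, config file, and C++ class name are all illustrative placeholders:
load("//tensorflow/compiler/aot:tfcompile.bzl", "tf_library")

# tf_library compiles the given TensorFlow subgraph ahead of time into a
# static library that exposes the generated C++ class.
tf_library(
    name = "my_graph_aot", # illustrative target name
    graph = "my_graph.pb", # frozen GraphDef to compile
    config = "my_graph.config.pbtxt", # feeds/fetches configuration
    cpp_class = "mynamespace::MyGraphComp", # generated C++ class
)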