OpenAI announces Block sparse GPU kernels for accelerating neural networks

  • 3 min read
  • 08 Dec 2017

OpenAI, the artificial intelligence research firm, has released new block-sparse GPU kernels: software programs optimized to build and run sparse neural networks on Nvidia's hardware.

These kernels make it possible to build neural networks that are both faster and more efficient, without consuming much additional GPU memory.

Neural networks are built from layers of connected nodes, but their design is constrained by the architecture of the GPUs they run on. Until now, there has been no efficient GPU implementation of the sparse linear operations they could otherwise exploit.

Researchers at OpenAI say it is now possible to make neural networks far more efficient by bringing sparse matrices into their design.

How sparse matrices help GPUs

A sparse matrix is simply a matrix in which most of the entries are zero. These zero-valued elements can be compressed away and skipped during matrix multiplication, which saves computation time and uses less GPU memory.
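
To get a feel for the savings, here is a minimal sketch using SciPy's general-purpose CSR format (not OpenAI's kernels): the compressed representation stores only the non-zero entries, and multiplication skips the zeros entirely.

```python
import numpy as np
from scipy import sparse

# A 1000 x 1000 matrix where roughly 99% of the entries are zero.
rng = np.random.default_rng(0)
dense = rng.random((1000, 1000)) * (rng.random((1000, 1000)) < 0.01)

# Compressed sparse row (CSR) storage keeps only the non-zero values
# plus their indices, so it is far smaller than the dense array.
compressed = sparse.csr_matrix(dense)
print("dense bytes:     ", dense.nbytes)
print("compressed bytes:", compressed.data.nbytes
      + compressed.indices.nbytes + compressed.indptr.nbytes)

# Multiplication only touches the stored non-zeros, so the zero
# entries cost neither compute time nor memory bandwidth.
x = rng.random((1000, 64))
y = compressed @ x  # same result as dense @ x, with far less work
```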

Image source: https://blog.openai.com/block-sparse-gpu-kernels/

The computation saved can then be used to train deeper and wider neural networks more efficiently, with sparse operations running up to ten times faster than their dense counterparts.

The problem OpenAI faced is that Nvidia, the dominant manufacturer of GPUs for neural networks, does not provide efficient support for sparse matrix operations on its hardware.

Enter block-sparse GPU kernels...

Block-sparse GPU kernels: sparse matrices get an upgrade

To work around the lack of sparsity support on Nvidia hardware, a team of researchers at OpenAI developed block-sparse GPU kernels.

Image source: https://blog.openai.com/block-sparse-gpu-kernels/


Key points to note about the block-sparse GPU kernels:

  • They are written in Nvidia's CUDA programming language.
  • At present, they are only compatible with TensorFlow (a usage sketch follows this list).
  • They support only Nvidia GPUs.
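
For readers who want to try them, the following is a rough sketch of what calling the kernels from TensorFlow looks like, modelled on the usage pattern in OpenAI's blocksparse repository; the exact names (BlocksparseMatMul, block_size, w_shape) should be verified against that repository.

```python
import numpy as np
import tensorflow as tf
from blocksparse.matmul import BlocksparseMatMul  # OpenAI's blocksparse package

hidden_size = 4096
block_size  = 32
minibatch   = 64

# Block-level sparsity pattern: a 1 keeps a 32x32 block of weights, a 0 drops it.
blocks = hidden_size // block_size
sparsity = np.random.randint(2, size=(blocks, blocks))

# Compiles a GPU kernel specialised to this particular sparsity pattern.
bsmm = BlocksparseMatMul(sparsity, block_size=block_size)

x = tf.placeholder(tf.float32, shape=[None, hidden_size])
w = tf.get_variable("w", bsmm.w_shape, dtype=tf.float32)

# Block-sparse matrix multiply: only the retained blocks are computed,
# which is where the speed and memory savings come from.
y = bsmm(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.ones((minibatch, hidden_size), np.float32)})
    print(out.shape)
```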

OpenAI has announced that it is sharing the block-sparse GPU kernels with the wider research community so they can be put to use in other work, and plans to extend them to support other hardware and frameworks.

OpenAI used neural networks built with the block-sparse kernels to carry out sentiment analysis on IMDB and Amazon reviews. The sparse models outperformed the dense models on all of the sentiment datasets.

Image source: https://s3-us-west-2.amazonaws.com/openai-assets/blocksparse/blocksparsepaper.pdf

OpenAI also reported that its sparse model improved the state of the art on the IMDB dataset, cutting the error from 5.91% to 5.01%. The team describes this as a promising improvement over its previous results, which performed best on shorter, sentence-level datasets.

Promising as the new kernels are, the OpenAI team does not yet have a definitive view of when and where they will help most, and it encourages the research community to explore this space further.

To learn how to install and develop with the block-sparse GPU kernels, see the project's GitHub repository.