Introducing Tile: A new machine learning language that auto-generates GPU kernels

  • 3 min read
  • 14 Nov 2017

Recently, Vertex.AI announced Tile, a simple and compact machine learning language for its PlaidML framework. Tile is a tensor manipulation language built to bring PlaidML to a wider developer audience. PlaidML is the company's open-source, portable deep learning framework for deploying neural networks on any device.

A key obstacle the developers of PlaidML faced was scalability. For any framework to be adopted across a wide variety of platforms, software support is required; in particular, software kernels must be implemented to act as the glue between the framework and the underlying hardware.

Tile comes to the rescue here because it can generate these kernels automatically. This addresses the compatibility problem by making it easier to add support for different NVIDIA GPUs as well as newer types of processors, such as those from AMD and Intel.

Tile runs in the backend of PlaidML to produce a custom kernel for each specific operation on each GPU. Because these kernels are machine-generated rather than hand-written, they can be produced very quickly, which in turn makes it easy to add support for different processors.

Using Tile, machine learning operations can be implemented methodically on parallel computing architectures, and Tile code can be straightforwardly compiled into optimized GPU kernels.

Another key feature of Tile is that the code is very easy to write and understand, because coding in Tile is similar to writing mathematical notation. In addition, all machine learning operations expressed in the language can be automatically differentiated. This readability makes Tile easy to adopt for machine learning practitioners, software engineers, and mathematicians alike.

Here is an example of a matrix multiply written in Tile:

function (A[M, L], B[L, N]) -> (C) {
    C[i, j: M, N] = +(A[i, k] * B[k, j]);
}

Note how closely it resembles linear algebra, with a clean, easy syntax. The syntax is expressive enough to cover all the operations required to build neural networks, while remaining amenable to optimization.
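For comparison, the same contraction can be written in a few lines of NumPy. The snippet below is only an illustrative sketch and uses plain NumPy rather than PlaidML or Tile; np.einsum plays the role of Tile's index notation, contracting over the shared index k:

import numpy as np

# Illustrative NumPy equivalent of the Tile matrix multiply above:
# C[i, j] = sum over k of A[i, k] * B[k, j]
def matmul(A, B):
    return np.einsum('ik,kj->ij', A, B)

A = np.random.rand(3, 4)   # shape (M, L)
B = np.random.rand(4, 5)   # shape (L, N)
C = matmul(A, B)           # shape (M, N)
assert np.allclose(C, A @ B)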

PlaidML uses Tile as an intermediate language in its Keras integration, which significantly reduces the amount of backend code that must be written and makes it easier to support new operations such as dilated convolutions. Tile can also address and analyze issues such as cache coherency, shared memory usage, and memory bank conflicts.
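In practice, switching Keras onto the PlaidML backend takes only a couple of lines. The sketch below follows the setup described in PlaidML's documentation at the time; the toy model at the end is an illustrative assumption, not code from the announcement:

# pip install plaidml-keras, then run plaidml-setup to pick a device.
# Point Keras at the PlaidML backend *before* importing keras.
import plaidml.keras
plaidml.keras.install_backend()

import keras  # now backed by PlaidML, with Tile generating the GPU kernels
from keras.models import Sequential
from keras.layers import Dense

# A toy classifier; each layer's operations compile down to Tile kernels.
model = Sequential([Dense(10, activation='softmax', input_shape=(784,))])
model.compile(optimizer='sgd', loss='categorical_crossentropy')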

According to the official Vertex.AI blog, Tile is characterized by the following (a short Python sketch of these semantics follows the list):

  • Control-flow & side-effect free operations on n-dimensional tensors
  • Mathematically oriented syntax resembling tensor calculus
  • N-Dimensional, parametric, composable, and type-agnostic functions
  • Automatic Nth-order differentiation of all operations
  • Suitability for both JITing and pre-compilation
  • Transparent support for resizing, padding & transposition
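To make the first three points concrete, here is a plain-Python sketch (an illustration, not PlaidML code) of how the matrix-multiply contraction above behaves: the dimension parameters M, L, and N are inferred from the operand shapes, the output shape comes from the declared index ranges, and the index k, which appears only on the right-hand side, is aggregated with '+'. The explicit loops below are implicit in Tile's notation, which is what makes the operations control-flow free:

import numpy as np

def tile_style_matmul(A, B):
    # Parametric: M, L, N are read off the inputs, as in
    # function (A[M, L], B[L, N]) -> (C).
    M, L = A.shape
    L2, N = B.shape
    assert L == L2, "shared dimension L must match"
    # Output shape comes from the index spec C[i, j: M, N].
    C = np.zeros((M, N))
    for i in range(M):
        for j in range(N):
            # k appears only on the right-hand side, so it is
            # aggregated with the '+' (summation) operator.
            for k in range(L):
                C[i, j] += A[i, k] * B[k, j]
    return C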

The developers are currently working to bring the language to a formal specification. In the future, they intend to use a similar approach to make TensorFlow, PyTorch, and other frameworks compatible with PlaidML.

If you’re interested in learning how to write code in Tile, check out the Tile tutorial on their GitHub.