A key obstacle the developers of PlaidML faced was scalability. For any framework to be adopted across a wide variety of platforms, software support is required; by software support we mean the implementation of the software kernels that act as the glue between frameworks and the underlying hardware.
Tile comes to the rescue here because it can automatically generate these kernels. This addresses the compatibility problem by making it easier to add support for different NVIDIA GPUs as well as newer types of processors, such as those from AMD and Intel.
Tile runs in the backend of PlaidML to produce a custom kernel for each specific operation on each GPU. Because these kernels are machine generated rather than hand written, they can be highly optimized, and support for new processors can be added with little manual effort.
Using Tile, machine learning operations can be implemented methodically for parallel computing architectures and then automatically converted into optimized GPU kernels.
Another key feature of Tile is that the code is very easy to write and understand, because writing Tile is much like writing mathematical notation. In addition, every machine learning operation expressed in the language can be automatically differentiated. This readability makes Tile easily adoptable by machine learning practitioners, software engineers, and mathematicians alike.
Here is an example of a matrix multiply written in Tile:
function (A[M, L], B[L, N]) -> (C) {
    C[i, j: M, N] = +(A[i, k] * B[k, j]);
}
Note how closely it resembles linear algebra notation while keeping an easy syntax. The syntax is expressive enough to cover all of the operations required to build neural networks.
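For example, a pooling operation, which aggregates with a maximum rather than a sum, fits the same pattern. The following sketch, modeled on the style of the examples in the Tile tutorial, shows what a 1D max pooling with a window of two might look like (the tensor names and sizes are purely illustrative):

function (I[N]) -> (O) {
    O[i: N / 2] = >(I[2 * i + j]), j < 2;
}

Here > is the aggregation operator for maximum (just as + aggregates by summation), and the trailing constraint j < 2 limits the pooling window to two elements; index ranges that are not written out are inferred from the tensor dimensions.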
PlaidML uses Tile as its intermediate language when integrating with Keras. This significantly reduces the amount of backend code that has to be written by hand, making it easy to support and implement new operations such as dilated convolutions. Tile can also address and analyze issues such as cache coherency, shared memory usage, and memory bank conflicts.
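As an illustration of how such an operation can be expressed, the sketch below shows one way a 1D convolution with a dilation rate of 2 might be written in Tile, following the same contraction pattern as the matrix multiply above; the tensor names and the output-size expression are assumptions made for this example rather than code taken from PlaidML itself:

function (I[N], K[M]) -> (O) {
    O[i: N - 2 * (M - 1)] = +(I[i + 2 * j] * K[j]);
}

Stepping the input index by 2 * j implements the dilation, while the output size N - 2 * (M - 1) keeps every access to I within bounds.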
According to the official Vertex.AI blog, Tile is characterized by:
The developers are currently working toward a formal specification of the language. In the future, they intend to use a similar approach to make TensorFlow, PyTorch, and other frameworks compatible with PlaidML.
If you’re interested in learning how to write Tile code, check out the Tile tutorial on the project’s GitHub.