Facebook’s Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others

  • 3 min read
  • 14 Sep 2018

Yesterday, Facebook announced that Cadence, Esperanto, Intel, Marvell, and Qualcomm Technologies Inc. have committed to supporting its Glow compiler in future silicon products. With this partnership, Facebook aims to build a hardware ecosystem for machine learning.

With Glow, these partners will be able to rapidly design and optimize new silicon products for AI and ML, helping Facebook scale its platform. Facebook also plans to expand the ecosystem by adding more partners in 2018.

What is Glow?


Glow is a machine learning compiler that accelerates the performance of deep learning frameworks on different hardware platforms. The name “Glow” comes from “graph lowering,” the main technique the compiler uses to generate efficient code.

The compiler is designed to enable state-of-the-art compiler optimizations and code generation for neural network graphs. With Glow, hardware developers and researchers can focus on building next-generation hardware accelerators that can be supported by deep learning frameworks such as PyTorch. Hardware accelerators for ML solve a range of distinct problems: some focus on inference, while others focus on training.
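
As a rough illustration (not Glow’s actual C++ API), the sketch below models the kind of strongly typed computation graph such a compiler works with: every node carries a fixed shape and data type, which is what lets the compiler optimize and generate code for it ahead of time. The Node class and the example network are hypothetical.

```python
# Illustrative only: a toy, strongly typed dataflow graph of the kind a
# compiler like Glow consumes. These are hypothetical data structures,
# not Glow's own classes.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Node:
    op: str                      # e.g. "FullyConnected", "Relu"
    shape: Tuple[int, ...]       # static output shape, known at compile time
    dtype: str = "float32"
    inputs: List["Node"] = field(default_factory=list)


# Build a tiny graph: input -> FullyConnected -> Relu
x = Node("Placeholder", shape=(1, 784))
w = Node("Constant", shape=(784, 10))
b = Node("Constant", shape=(10,))
fc = Node("FullyConnected", shape=(1, 10), inputs=[x, w, b])
out = Node("Relu", shape=(1, 10), inputs=[fc])

# Because shapes and dtypes are fixed, a compiler can plan memory and
# generate specialized code for a target accelerator ahead of time.
print(out.op, out.shape, [n.op for n in out.inputs])
```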

How does it work?


Glow accepts a computation graph from deep learning frameworks such as PyTorch and TensorFlow and generates highly optimized code for machine learning accelerators.

To do so, it lowers the traditional neural network dataflow graph into a two-phase, strongly typed intermediate representation (a toy sketch of both phases follows the list below):

[Diagram: Glow’s two-phase intermediate representation. Source: Facebook]


  • The high-level intermediate representation allows the optimizer to perform domain-specific optimizations.
  • The lower-level, instruction-based, address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation, and copy elimination.
  • At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features.
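
The toy Python sketch below (hypothetical pass names and data structures, not Glow’s own IR classes) illustrates the idea behind the two levels: a domain-specific rewrite on the graph IR, followed by static memory allocation over an address-only, instruction-level view.

```python
# Illustrative only: a toy version of the two IR levels described above.

# --- High-level graph IR: domain-specific optimization ---------------------
# Example rewrite: two back-to-back transposes cancel out, so drop the pair.
graph = ["Transpose", "Transpose", "MatMul", "Add", "Relu"]

def eliminate_double_transpose(nodes):
    out = []
    for op in nodes:
        if out and out[-1] == "Transpose" and op == "Transpose":
            out.pop()          # the pair is a no-op; remove it
        else:
            out.append(op)
    return out

graph = eliminate_double_transpose(graph)   # -> ["MatMul", "Add", "Relu"]

# --- Low-level, instruction-based, address-only IR -------------------------
# Operands are now just buffers; the compiler can schedule instructions and
# statically allocate memory by giving each buffer a fixed offset.
buffers = {"act0": 4 * 784, "weights": 4 * 784 * 10, "act1": 4 * 10}
offsets, cursor = {}, 0
for name, size in buffers.items():
    offsets[name] = cursor     # static memory allocation at compile time
    cursor += size

print(graph)
print(offsets, "total bytes:", cursor)
```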


Glow supports a large number of input operators as well as many hardware targets thanks to its lowering phase, which eliminates the need to implement all operators on all targets. The lowering phase reduces the input operator space and allows new hardware backends to focus on a small set of linear algebra primitives.
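
As a hedged example of what that buys a backend, a high-level fully connected layer can be expressed as a matrix multiplication followed by a broadcasted add, so a backend only needs those primitives rather than every framework operator. The NumPy sketch below is illustrative only and is not Glow’s code.

```python
import numpy as np

# Illustrative only: a "lowered" fully connected layer expressed with two
# linear-algebra primitives instead of a single high-level operator.
def fully_connected_lowered(x, w, b):
    return np.matmul(x, w) + b      # MatMul followed by a broadcasted Add

x = np.random.rand(1, 784).astype(np.float32)
w = np.random.rand(784, 10).astype(np.float32)
b = np.random.rand(10).astype(np.float32)

print(fully_connected_lowered(x, w, b).shape)   # (1, 10)
```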

You can read more about Facebook’s goals for Glow in its official announcement. If you are interested in how it works in more detail, check out the research paper and the project’s GitHub repository.

Facebook launches LogDevice: An open source distributed data store designed for logs

Google’s new What-if tool to analyze Machine Learning models and assess fairness without any coding

Facebook introduces Rosetta, a scalable OCR system that understands text on images using Faster-RCNN and CNN