Deep learning is a relatively new field, and developers still have limited options when it comes to building and sharing data models. In particular, switching a deep learning model from the framework it was originally built on to a new framework is difficult and time-consuming. This is one of the reasons why Microsoft and Facebook released ONNX in September 2017.
ONNX is an open-source format for deep learning models, the models that allow machines to learn tasks without being explicitly programmed. With the ONNX format, a deep learning model trained in one framework can be transferred to another with no additional work.
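To give a sense of how this works in practice, here is a minimal sketch of exporting a trained model to an ONNX file from PyTorch, one of the frameworks that supports the format; the toy model and the file name model.onnx are illustrative assumptions, not part of the announcement.

```python
import torch
import torch.nn as nn

# A toy model standing in for a trained network.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# Exporting traces the model with a dummy input of the expected shape
# and writes a portable .onnx file that other frameworks can import.
dummy_input = torch.randn(1, 20)
torch.onnx.export(model, dummy_input, "model.onnx")
```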
Apache MXNet is a fully featured and scalable deep learning framework. It offers APIs across popular languages such as Python, Scala, and R.
It includes the Gluon interface, which lets developers of all skill levels get started with deep learning in the cloud, in mobile applications, and on edge devices. With just a few lines of Gluon code, developers can build models such as linear regression, convolutional networks, and recurrent LSTMs (Long Short-Term Memory networks), as in the sketch below. These models in turn support tasks such as object detection, recommendation, speech recognition, and even personalization.
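As a rough illustration of how compact Gluon code can be, the following sketch builds and runs a small feed-forward network with the MXNet 1.x Gluon API; the layer sizes and dummy input are assumptions made for the example.

```python
import mxnet as mx
from mxnet.gluon import nn

# Build a small feed-forward network in a few lines of Gluon code.
net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'),  # hidden layer
        nn.Dense(10))                     # output layer (e.g. 10 classes)
net.initialize(mx.init.Xavier())          # weights shaped lazily on first use

# Forward pass on a dummy batch of 4 samples with 20 features each.
x = mx.nd.random.uniform(shape=(4, 20))
print(net(x).shape)  # -> (4, 10)
```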
With ONNX format support in MXNet, developers can bring together the strengths of both. They can build and train models in other deep learning frameworks, such as Microsoft Cognitive Toolkit or Caffe2, and then import those models into MXNet to run them for inference on its highly optimized and scalable engine.
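The import path could look something like the sketch below, which assumes the onnx-mxnet helper package announced alongside this support; the file name, input name, and input shape are assumptions that depend on how the model was exported.

```python
import mxnet as mx
import onnx_mxnet

# Load the ONNX graph and its weights into MXNet's symbolic format.
sym, params = onnx_mxnet.import_model('model.onnx')

# Bind the graph for inference; the input name and shape below are
# assumptions and must match the exported model.
mod = mx.mod.Module(symbol=sym, data_names=['input_0'], label_names=None)
mod.bind(for_training=False, data_shapes=[('input_0', (1, 20))])
mod.set_params(arg_params=params, aux_params=params,
               allow_missing=True, allow_extra=True)

# Run a forward pass on a dummy batch and read the output.
batch = mx.io.DataBatch([mx.nd.random.uniform(shape=(1, 20))])
mod.forward(batch, is_train=False)
print(mod.get_outputs()[0].shape)
```

Because the ONNX graph is converted into a regular MXNet symbol, the imported model runs on the same optimized execution engine as models built natively in MXNet.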
The ONNX community will continue to develop the ONNX format and its ecosystem. Facebook plans to add more interoperability that expands the ONNX support in MXNet, and bringing ONNX into MXNet's core APIs is also on the cards. Amid all the good news, the deep learning community is still longing for one elusive partner: Google's entry into ONNX would be the icing on the cake.