Intel open sources nGraph – Now Focus on Data Science and Stop Worrying about Frameworks and Hardware

Pranav Dar 20 Apr, 2018 • 3 min read


  • Intel has open sourced the code for nGraph, a framework-neutral deep neural network model compiler
  • With nGraph, data scientists can finally prioritize data science tasks, rather than worrying about how their code will work on a different framework or device
  • It currently supports six frameworks, including TensorFlow and PyTorch
  • Check out the article below to understand how to use it



Code portability is one of the more challenging aspects of a data scientist’s work. How useful would it be if you didn’t have to worry about which library or framework you’re using, whether your code will carry over to another framework, or whether it will run on a machine with a significantly different configuration? These are current gaps in the industry.

Intel has stepped into this space and open sourced nGraph to reduce these complexities and the tedious process of adjusting models to different frameworks. nGraph is a framework-neutral deep neural network (DNN) model compiler that can target a wide variety of devices.

This allows data scientists, for example developers working in TensorFlow or PyTorch, to focus on the data science aspect of their projects, rather than spending (or wasting) time wondering how their deep neural network will train and run if ported to a different device.

So how does it work exactly? You start by installing the nGraph library. Then, you define and train your deep learning model in one of the supported frameworks. The next step is crucial – you specify nGraph as the backend from the command line on any supported system.

Then Intel takes over. Its Intermediate Representation (IR) layer takes care of all the device details and lets data scientists focus on their algorithms, approaches and models. Sounds perfect, doesn’t it?
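As a rough sketch, the workflow with the TensorFlow bridge might look like the following. Note that the package name, environment variable and script name below are assumptions based on Intel’s announcement, not verified commands – check the nGraph documentation on GitHub for the exact steps:

```shell
# Hypothetical sketch of the nGraph + TensorFlow workflow.
# Package name, backend variable and script are assumptions, not verified.

# 1. Install the nGraph bridge for your chosen framework
pip install ngraph-tensorflow-bridge

# 2. Select nGraph as the backend from the command line
export NGRAPH_TF_BACKEND="CPU"

# 3. Train your model as usual; the bridge routes compilation through nGraph
python train_model.py
```

The key point is step 2: the model code itself stays untouched, and the backend is swapped from outside the script.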

The below image, taken from Intel’s blog post, describes the nGraph ecosystem:

As of today, nGraph supports three deep learning compute devices and six deep learning frameworks. Refer to the below table for clarity:

Framework    Framework bridge available?    ONNX support?
neon         yes                            yes
MXNet        yes                            yes
TensorFlow   yes                            yes
PyTorch      not yet                        yes
CNTK         not yet                        yes
Caffe2       not yet                        yes


The below graph shows a comparison between the different frameworks. nGraph comfortably outperforms previously optimized frameworks.

Intel has said it’ll keep adding other frameworks and devices to this list in the coming months. You can read in detail about this technology in Intel’s research paper here and access the open source code on GitHub here.

In summary, nGraph has the below advantages:

  • provides data scientists the freedom of choice in frameworks and hardware
  • supports multiple deep learning frameworks
  • optimizes models for multiple hardware solutions
  • allows framework owners to add unique features with much less work
  • allows cloud service providers to address a larger market demand with ease
  • helps enterprises maintain a consistent experience across frameworks and back ends


Our take on this

Why is this important? Because it gives data scientists freedom of choice in frameworks and hardware. If you’ve worked with limited computation resources, or struggled with code portability between machines, you will appreciate this latest release from Intel.

Its ability to let framework owners add features without much effort works in its favor. It also lets cloud service providers manage market demand more easily.

What do you think of this latest research? I encourage you to go through the paper and the code to gain a deeper appreciation for Intel’s latest effort.


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!



Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.
