nGraph
nGraph Library is an open-source C++ library, compiler, and runtime suite for deep learning ecosystems. With nGraph Library, data scientists can use their preferred deep learning framework on any number of hardware architectures, for both training and inference.
Focus on Data Science, not on Machine Code
While a compiler may seem like a basic component of a software workflow, its importance is amplified in a multi-architecture hardware environment coupled with a growing number of deep learning frameworks. The technical benefits of a compiler (efficient memory management, data layout abstraction, training versus inference optimizations, multi-node and multi-device scaling, and hardware-specific compounding of operations) help ensure that cross-platform flexibility does not come at the cost of performance.
Freedom of Choice in Framework and Hardware
While we currently support several popular frameworks with pre-optimized deployment runtimes for training deep neural network models, you are not limited to these frontends. Architects of any framework can use our documentation to learn how to compile and run a training model, and to design or tweak a framework so that it bridges directly to the nGraph Compiler.
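Framework bridges are typically written against nGraph's core C++ API, but the idea is easiest to sketch with the Python wrapper, which mirrors it: a bridge translates the framework's operations into an nGraph computation graph and hands it to a backend for compilation and execution. The snippet below is a minimal sketch based on the nGraph Python API as documented in its releases from this period; exact function names and available backends depend on the version and build you install.

```python
import numpy as np
import ngraph as ng  # nGraph Python bindings

# A bridge maps framework ops onto nGraph nodes; here we build a tiny graph by hand.
A = ng.parameter(shape=[2, 2], dtype=np.float32, name='A')
B = ng.parameter(shape=[2, 2], dtype=np.float32, name='B')
C = ng.parameter(shape=[2, 2], dtype=np.float32, name='C')
model = (A + B) * C  # element-wise add and multiply, expressed as nGraph ops

# Compile the graph for a backend and call it like an ordinary function.
runtime = ng.runtime(backend_name='CPU')
computation = runtime.computation(model, A, B, C)

a = np.array([[1, 2], [3, 4]], dtype=np.float32)
b = np.array([[5, 6], [7, 8]], dtype=np.float32)
c = np.array([[9, 10], [11, 12]], dtype=np.float32)
print(computation(a, b, c))  # [[ 54.  80.] [110. 144.]]
```

A real framework bridge does the same thing at scale: it walks the framework's graph, emits the corresponding nGraph ops, and lets the compiler handle memory layout and backend-specific optimization.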
Accelerate Model Deployment
nGraph Library can help speed the deployment of models built on workstations, in the data center, or in the cloud to devices, with much less work. Because training and inference can run across multiple backends, you can dedicate hardware to one part of the model while the rest runs on separate hardware, using purpose-built chips for the parts of the workload they handle best.
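As an illustration of this backend portability, the same nGraph graph can be retargeted at a different backend simply by name. This is a hedged sketch using the Python wrapper; the backend strings that are actually available ('CPU', 'GPU', and so on) depend on how your nGraph build was configured, so treat the names below as examples rather than a definitive set.

```python
import numpy as np
import ngraph as ng

x = ng.parameter(shape=[4], dtype=np.float32, name='x')
model = x * x  # trivial graph: element-wise square

data = np.arange(4, dtype=np.float32)

# Retarget the identical graph at whichever backends this build provides.
for backend_name in ('CPU', 'GPU'):  # example names; availability depends on your build
    try:
        runtime = ng.runtime(backend_name=backend_name)
        square = runtime.computation(model, x)
        print(backend_name, square(data))
    except Exception as err:  # backend not compiled into this build
        print(backend_name, 'not available:', err)
```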
ONNX Compatibility
nGraph Library is compatible with the Open Neural Network Exchange (ONNX) format, a standard for deep learning models that supports transferring models between frameworks; support for ONNX was added recently. Developers who already have a trained DNN model can use nGraph Library to bypass significant framework-based complexity and import it for testing or inference on targeted, efficient backends through our user-friendly Python-based API. See the ngraph-onnx companion tool to get started.
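To make the workflow concrete, here is a hedged sketch of importing a trained ONNX model and running it on the CPU backend with the ngraph-onnx companion tool's Python importer. The module path and function name (ngraph_onnx.onnx_importer.importer.import_onnx_model) follow that project's documentation, but they have shifted between releases, so check the version you install; the model file and input shape are placeholders.

```python
import numpy as np
import onnx
import ngraph as ng
from ngraph_onnx.onnx_importer.importer import import_onnx_model  # ngraph-onnx companion tool

# Load a trained model exported from any ONNX-capable framework (path is a placeholder).
onnx_protobuf = onnx.load('resnet50.onnx')

# Convert the ONNX graph into an nGraph function.
ng_function = import_onnx_model(onnx_protobuf)

# Compile for a chosen backend and run inference.
runtime = ng.runtime(backend_name='CPU')
infer = runtime.computation(ng_function)

batch = np.zeros((1, 3, 224, 224), dtype=np.float32)  # dummy input matching the model's expected shape
print(infer(batch))
```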
Related Resources
Unlocking DL Performance with nGraph
HE-Transformer for nGraph: Enabling Deep Learning on Encrypted...
Investing in the PyTorch Developer Community
Adaptable Deep Learning Solutions with nGraph™ Compiler and...
High-performance TensorFlow* on Intel® Xeon® Using nGraph
nGraph: A New Open Source Compiler for Deep...