Microsoft Azure* and ONNX* Runtime for the Intel® Distribution of OpenVINO™ Toolkit
Overview
Learn how to deploy AI models with a streamlined training-to-inference workflow using the Intel® Distribution of OpenVINO™ toolkit, Microsoft Azure*, and Open Neural Network Exchange (ONNX*) Runtime.
Tune in to hear Intel product experts Savitha Gandikota and Arindam Paul, and Microsoft* principal program manager Manash Goswami, discuss how to train models on Microsoft Azure*, streamline them with ONNX Runtime, and run inference with the Intel Distribution of OpenVINO toolkit to accelerate time to production. With ready-to-use apps available in the Microsoft Azure Marketplace, you can take advantage of a streamlined train-to-deployment pipeline.
In this webinar, you can:
- Get an overview of how to accelerate train-to-deploy workflows
- See relevant demonstrations
- Learn how to use these applications
Get Started
- Download the latest version of the Intel® Distribution of OpenVINO™ toolkit
- Read the article: Intel and Microsoft to Empower Developers to Deploy AI
Savitha Gandikota
Edge-to-cloud solutions product manager, technical business leader, Intel Corporation
Savitha drives edge AI at Intel. She brings a unique blend of hardware and software architecture expertise drawn from the server, networking, and embedded industries. Her passion for building products from the ground up keeps her busy driving the core capabilities needed for the edge computing revolution. Savitha believes the disruption due to AI is here, and that building scalable edge-to-cloud solutions is the key to success.
Arindam Paul
Product manager, Intel Corporation
Arindam is a veteran of the technology industry. He has led teams at Dell EMC*, Cisco*, Akamai*, and Brocade* to market-leading innovations. Insanity* workouts keep him hungry and technology innovations keep him foolish.
Manash Goswami
Principal program manager in the AI Frameworks team, Microsoft Corporation
Manash is responsible for defining the strategy for integrating hardware platforms with ONNX Runtime, enabling machine learning models to run efficiently and bringing inference solutions to mobile and IoT platforms.
Optimize models trained using popular frameworks like TensorFlow*, PyTorch*, and Caffe*, and deploy across a mix of Intel® hardware and environments.