Optimized ONNX* Models Run on AI PCs
Overview
Optimizing a model to run across heterogeneous hardware can be simplified substantially by applying OpenVINO™ toolkit optimizations to Open Neural Network Exchange (ONNX*) models. ONNX offers numerous benefits to developers: a common infrastructure for machine learning, standardized operators, and a shared model format. For Intel systems that combine CPUs, integrated GPUs, and NPUs, this streamlines model inferencing, as this session demonstrates.
With the OpenVINO toolkit as a back end, models can be deployed and inferenced through the ONNX Runtime APIs. This session shows the performance gains achieved by the simple step of enabling the OpenVINO Execution Provider on an AI PC and evaluating the results.
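As a minimal sketch of that workflow, the snippet below creates an ONNX Runtime inference session that requests the OpenVINO Execution Provider, falling back to the default CPU provider for any operators OpenVINO cannot handle. It assumes the onnxruntime-openvino package is installed; the file name "model.onnx", the "CPU" device target, and the float32 dummy input are placeholders to adapt to your own model.

```python
# Hedged sketch: ONNX Runtime inference with the OpenVINO Execution Provider.
# Assumes the onnxruntime-openvino package is installed and that "model.onnx"
# exists; both are placeholders, not values from the session materials.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"device_type": "CPU"}, {}],
)

# Build a dummy input matching the model's first input; symbolic
# (string-valued) dimensions are replaced with 1. Assumes a float32 input.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {inp.name: dummy})
print("Output shapes:", [o.shape for o in outputs])
```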
Topics covered include:
- Learn the characteristics of an AI PC and the benefits these systems offer developers.
- Understand the techniques for inferencing and deploying ONNX models on an AI PC.
- Evaluate the performance of ONNX models on AI PC systems with a combination of OpenVINO toolkit, ONNX, and OpenVINO Execution Provider for ONNX Runtime (a timing sketch follows this list).
- Learn how to build a stand-alone app for an AI PC with OpenVINO Execution Provider for ONNX Runtime.
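The timing sketch below illustrates one way to compare OpenVINO device targets on an AI PC: it builds a session per device, warms it up, and averages run latency. The "CPU", "GPU", and "NPU" device_type values are assumptions based on the OpenVINO Execution Provider's documented options, and "model.onnx" is again a placeholder; adjust both to the devices and model actually present on your machine.

```python
# Hedged sketch: time one ONNX model on several OpenVINO device targets.
# Device names and the model path are assumptions, not session-provided values.
import time
import numpy as np
import onnxruntime as ort

def time_device(model_path: str, device_type: str, runs: int = 50) -> float:
    """Return average seconds per inference on the given OpenVINO device."""
    session = ort.InferenceSession(
        model_path,
        providers=["OpenVINOExecutionProvider"],
        provider_options=[{"device_type": device_type}],
    )
    inp = session.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feed = {inp.name: np.random.rand(*shape).astype(np.float32)}
    session.run(None, feed)  # warm-up run to trigger model compilation
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, feed)
    return (time.perf_counter() - start) / runs

for device in ("CPU", "GPU", "NPU"):
    try:
        print(f"{device}: {time_device('model.onnx', device) * 1000:.2f} ms/run")
    except Exception as exc:  # a device may be absent on this system
        print(f"{device}: unavailable ({exc})")
```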
Skill level: All levels
Featured Software
Download the following resources: