
Faster Inferencing on Intel® Hardware with Just One Extra Line of Code

MaryT_Intel
Authors: Vibhu Bithar, Devang Aggarwal, N Maajid Khan

Introduction to OpenVINO Execution Provider for ONNX Runtime

Did you know that just one extra line of code can boost the performance of your deep learning model by roughly 50%? We saw a jump from 30 FPS to 47 FPS (about a 57% gain) when running the ONNX Tiny YOLOv2 object detection model on an i7 CPU [1]. A gain of that size makes a substantial difference in the performance of your deep learning models.

Now, you must be wondering how just one line of code can deliver that extra performance. The answer is simple: the Intel® Distribution of OpenVINO™ toolkit. It is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks, including emulation of human vision, automatic speech recognition, natural language processing, and recommendation systems. The toolkit boosts the performance of your deep learning models by applying advanced optimization techniques tuned specifically for Intel® hardware.

Now that you know what the OpenVINO™ toolkit does, how does it tie in with popular AI frameworks like ONNX Runtime (RT)? Developers can leverage the Intel® Distribution of OpenVINO™ toolkit through ONNX Runtime to accelerate inference of ONNX models, which can be exported or converted from frameworks such as TensorFlow, PyTorch, and Keras. Intel and Microsoft collaborated to create the OpenVINO Execution Provider for ONNX Runtime, which runs inference on ONNX models through the standard ONNX Runtime APIs while using the OpenVINO™ toolkit as a backend. With the OpenVINO Execution Provider, ONNX Runtime delivers better inference performance on the same Intel® CPU, GPU, VPU, and FPGA hardware than generic acceleration does, and, best of all, you get that performance with just one line of code. We have measured significant performance improvements with the OpenVINO Execution Provider across a range of workloads.
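To make the claim concrete, here is roughly what that one-line change looks like with the ONNX Runtime Python API. This is a minimal sketch under stated assumptions, not code from the samples: the provider names follow ONNX Runtime's conventions, and `"model.onnx"` is a placeholder path.

```python
def providers_for(device: str):
    """Return the ONNX Runtime provider list for a target device.

    Requesting "openvino" puts the OpenVINO Execution Provider first
    and keeps the default CPU provider as a fallback.
    """
    if device == "openvino":
        return ["OpenVINOExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

# The one extra line in practice (requires `pip install onnxruntime-openvino`
# and a real model file; "model.onnx" is hypothetical):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx",
#                               providers=providers_for("openvino"))
```

Everything else in an existing ONNX Runtime script, from input preparation to `sess.run(...)` and postprocessing, stays the same; only the providers argument changes.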


Figure 1: Performance of OpenVINO EP for ONNX RT [2]


Still not convinced? Keep reading and check out some of the samples we have created using OpenVINO Execution Provider for ONNX RT.
 


Figure 2: Architecture Diagram for OpenVINO Execution Provider for ONNX Runtime


Samples

In order to showcase what you can do with the OpenVINO Execution Provider for ONNX Runtime, we have created a few samples that show how you can get that performance boost you’re looking for with just one additional line of code. 

Python Sample

The object detection sample uses the Tiny YOLOv2 deep learning ONNX model from the ONNX Model Zoo.

The sample presents a video to ONNX Runtime frame by frame; the OpenVINO Execution Provider runs inference on a choice of Intel® hardware devices and detects up to 20 different object classes, such as birds, buses, cars, and people.

Link to the sample:  Object detection with tinyYOLOv2 in Python
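As a rough illustration of the per-frame work the sample does, the sketch below shapes a frame into the tensor layout Tiny YOLOv2 expects. Treat it as an assumption-laden outline rather than the sample's actual code: the input name `image`, the 416x416 size, and the unnormalized 0-255 value range follow the ONNX Model Zoo description of Tiny YOLOv2.

```python
import numpy as np

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB frame into the 1x3x416x416 float32
    NCHW tensor Tiny YOLOv2 expects. Assumes the frame is already
    416x416; real code would first resize, e.g. with cv2.resize."""
    x = frame_rgb.astype(np.float32)   # keep the 0-255 range (no normalization)
    x = np.transpose(x, (2, 0, 1))     # HWC -> CHW
    return x[np.newaxis, ...]          # add batch dimension -> NCHW

# Per-frame loop (hedged; needs onnxruntime-openvino and the model file):
#   sess = ort.InferenceSession("tinyyolov2.onnx",
#                               providers=["OpenVINOExecutionProvider"])
#   outputs = sess.run(None, {"image": preprocess(frame)})
```

The model's raw output grid then needs the usual YOLO postprocessing (anchor decoding and non-maximum suppression) to produce the boxes the demo draws.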


Demo 1: Using OpenVINO EP for ONNX RT in Python Sample


C# Sample

The C# sample uses a public YOLOv3 deep learning ONNX model from the ONNX Model Zoo.

The sample presents an image to ONNX Runtime, which uses the OpenVINO Execution Provider to run object detection on an Intel® Neural Compute Stick 2 (MYRIAD X device).

Link to the sample: Object detection with YOLOv3 in C#
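The device targeting this sample relies on can be sketched language-neutrally (shown in Python for consistency with the earlier snippet). The `device_type` option and values such as `"MYRIAD_FP16"` follow the OpenVINO Execution Provider's provider options; treat the exact strings as assumptions to verify against your onnxruntime-openvino build.

```python
def openvino_provider_config(device_type: str):
    """Build the (provider, options) pair used to pick an OpenVINO device.

    Typical device_type values include "CPU_FP32", "GPU_FP32", and
    "MYRIAD_FP16" (the NCS2 stick); check the onnxruntime-openvino
    documentation for the list your build supports.
    """
    return ("OpenVINOExecutionProvider", {"device_type": device_type})

# Usage (hedged; requires onnxruntime-openvino and a plugged-in NCS2):
#   provider, opts = openvino_provider_config("MYRIAD_FP16")
#   sess = ort.InferenceSession("yolov3.onnx", providers=[provider],
#                               provider_options=[opts])
```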
       


Demo 2: Using OpenVINO EP for ONNX RT in C# Sample

CPP Sample

The CPP sample uses a public SqueezeNet deep learning ONNX model from the ONNX Model Zoo.

The sample presents an image to ONNX Runtime, which uses the OpenVINO Execution Provider to run inference on a choice of Intel® hardware devices, with OpenCV handling the image processing. After inference, the terminal prints the predicted label classes in order of their confidence.

The implementation should be compatible with most ImageNet classification networks, and with images from other sources after slight modifications. The sample also compares the inference latency of the default CPU Execution Provider against the OpenVINO Execution Provider.

Link to the sample: Image classification with Squeezenet in CPP
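The two behaviors described above, confidence-ordered output and a latency comparison between providers, can be sketched as follows (again in Python for consistency). The softmax ranking is standard for ImageNet classifiers; the model file name and the input name `data` are assumptions based on the ONNX Model Zoo SqueezeNet, not the sample's actual code.

```python
import numpy as np

def top_labels(logits, labels, k=5):
    """Rank class labels by confidence, as the sample's terminal output
    does. Softmax turns raw logits into probabilities; it preserves
    ordering, so the top-k ranking matches the raw scores."""
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]

# Timing both providers (hedged; needs onnxruntime-openvino + a model file):
#   import time, onnxruntime as ort
#   for provider in ("CPUExecutionProvider", "OpenVINOExecutionProvider"):
#       sess = ort.InferenceSession("squeezenet.onnx", providers=[provider])
#       t0 = time.perf_counter()
#       logits = sess.run(None, {"data": image_tensor})[0].ravel()
#       print(provider, "latency:", time.perf_counter() - t0)
```

In a real benchmark you would discard the first run (warm-up and graph compilation) and average over many iterations, as the `onnxruntime_perf_test` configuration in the notes below does with 1000 inference requests.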
           


Demo 3: Using OpenVINO EP for ONNX RT in CPP Sample

               

Notes

1 Processor: Intel(R) Core(TM) i7-7700T CPU @ 2.90GHz; Core(s) per socket: 4; Thread(s) per core: 2
Graphics: Intel HD Graphics 630, clock: 33MHz
Memory: 8192 MB, Type: DDR4
BIOS Version: V2RMAR17, Vendor: American Megatrends Inc.
OS: Ubuntu 18.04.5 LTS
System Information: Manufacturer: iEi, Mfr. No: TANK-870AI-i7/8G/2A-R11, Product Name: SER0, Version: V1.0
Microcode: 0xde
Framework configuration: ONNXRuntime 1.7.0, OpenVINO 2021.3 Binary Release, Build Type: Release Mode
Application configuration: ONNXRuntime Python API, EP: OpenVINO, default CPU, Input: video file
Application metric: Frames per second (FPS) = 1.0 / (time taken to run one ONNXRuntime session)
Test date: May 14, 2021
Tested by: Intel

2 Processor: Intel(R) Core(TM) i7-7700T CPU @ 2.90GHz; Core(s) per socket: 4; Thread(s) per core: 2
Graphics: Intel HD Graphics 630, clock: 33MHz
Memory: 8192 MB, Type: DDR4
BIOS Version: V2RMAR17, Vendor: American Megatrends Inc.
OS: Ubuntu 18.04.5 LTS
System Information: Manufacturer: iEi, Mfr. No: TANK-870AI-i7/8G/2A-R11, Product Name: SER0, Version: V1.0
Microcode: 0xde
Framework configuration: ONNXRuntime 1.7.0, OpenVINO 2021.3 Binary Release, Build Type: Release Mode
Compiler version: gcc 7.5.0
Application configuration: onnxruntime_perf_test, No. of infer requests: 1000, EP: OpenVINO, Number of sessions: 1
Test date: May 14, 2021
Tested by: Intel
               


Notices & Disclaimers

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

Your costs and results may vary.

Intel technologies may require enabled hardware, software or service activation.

© Intel Corporation. Intel, the Intel logo, OpenVINO and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
               

About the Author
Mary is the Community Manager for this site. She likes to bike, and in her spare time she does college and career coaching for high school students.