Article ID: 000058759 Content Type: Troubleshooting Last Reviewed: 09/06/2022

Conversion of INT8 Models to Intermediate Representation (IR)

Summary

Model Optimization Flow with OpenVINO

Description

The last paragraph of the Low Precision Optimization Guide mentions quantization-aware training and states that it allows a user to obtain an accurate optimized model that can be converted to IR, but it provides no further details.

Resolution

Quantization-Aware Training is supported through OpenVINO™-compatible optimization extensions for training frameworks: models can be trained with TensorFlow QAT or with the Neural Network Compression Framework (NNCF) for PyTorch.

NNCF is a PyTorch-based framework that supports a wide range of deep-learning models for various use cases. It implements quantization-aware training with different quantization modes and settings, and it provides additional compression algorithms, including binarization, sparsity, and filter pruning. A minimal sketch of preparing a PyTorch model for quantization-aware training with NNCF is shown below.
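The following sketch illustrates one way to wrap a PyTorch model with NNCF for INT8 quantization-aware training. The model (ResNet-18), the input shape, and the random placeholder data are assumptions chosen for this example, not part of the original article; substitute your own model and training data.

import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

# Placeholder model and data, for illustration only.
model = torchvision.models.resnet18(pretrained=True)
dataset = TensorDataset(torch.randn(8, 3, 224, 224),
                        torch.randint(0, 1000, (8,)))
train_loader = DataLoader(dataset, batch_size=4)

# INT8 quantization configuration; sample_size must match the model input.
nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},
})
nncf_config = register_default_init_args(nncf_config, train_loader)

# Wrap the model with quantization operations before fine-tuning.
compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)

# ... fine-tune compressed_model with a regular PyTorch training loop ...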

When fine-tuning finishes, the accurate optimized model can be exported to the ONNX format. Model Optimizer then converts the ONNX file into Intermediate Representation (IR) files, and inference is run with the OpenVINO™ Inference Engine.
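Continuing the sketch above, the export-convert-infer flow could look as follows. The file names, output directory, and device name are illustrative assumptions.

# Export the fine-tuned model to ONNX (file name is illustrative).
compression_ctrl.export_model("model.onnx")

# Convert the ONNX file to IR with Model Optimizer from a shell:
#     mo --input_model model.onnx --output_dir ir
# This produces ir/model.xml and ir/model.bin.

# Load and run the IR with the OpenVINO Python API.
import numpy as np
from openvino.runtime import Core

core = Core()
compiled_model = core.compile_model("ir/model.xml", "CPU")
request = compiled_model.create_infer_request()
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = request.infer({0: dummy_input})  # key 0 = first model input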
