New and Changed in 2022.3.2 LTS
This is a Long-Term Support (LTS) release. LTS versions are released every year and supported for two years. Read Intel® Distribution of OpenVINO™ toolkit Release Policy for more details.
Major Features and Improvements Summary
- This 2022.3.2 LTS release provides functional and security bug fixes for the previous 2022.3.1 Long-Term Support (LTS) release, enabling developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit more efficiently.
- Intel® Movidius™ VPU-based products are supported in this release.
OpenVINO™ Runtime (previously known as Inference Engine)
HDDL plugin
- The HDDL plugin's dependency on debug-version DLLs has been fixed.
CPU
- A performance issue with small networks has been fixed.
- An occasional memory leak caused by create/inference calls in separate threads on Windows has been fixed.
- The issue related to loading the plugin from a safe location has been resolved.
Distribution (where to download release)
The OpenVINO product selector tool (available at www.openvino.ai) provides the easiest access to the right packages that match your desired tools/runtime, OS, version and distribution options.
- This 2022.3.2 LTS release is available via the following distribution channels:
- pypi.org: https://pypi.org/project/openvino-dev/
- DockerHub*: https://hub.docker.com/u/openvino
- Release Archives on S3 storage (specifically for C++): https://storage.openvinotoolkit.org/repositories/openvino/packages/
- APT & YUM
Known Issues
| # | Jira ID | Description | Component | Workaround |
|---|---------|-------------|-----------|------------|
| 1 | 24101 | Performance and memory consumption may degrade if layers are not 64-byte aligned. | GNA plugin | Avoid layers that are not 64-byte aligned to make a model GNA-friendly. |
| 2 | 33132 | [IE CLDNN] Accuracy and last-tensor-check regressions for FP32 models on ICLU GPU. | clDNN Plugin | |
| 3 | 42203 | Customers located in the People's Republic of China (PRC) may experience issues with downloading content from the new storage https://storage.openvinotoolkit.org/ due to PRC firewall restrictions. | OMZ | Use the branch https://github.com/openvinotoolkit/open_model_zoo/tree/release-01org, which links to the old storage at download.01.org. |
| 4 | 24757 | Heterogeneous execution mode is not supported for Intel® GNA. | GNA Plugin | Split the model to run unsupported layers on the CPU. |
| 5 | 58806 | For models optimized with the Post-training Optimization Tool (POT), memory consumption and performance may be worse than for the original models (i.e., those using the internal quantization algorithm). | GNA Plugin | It is not necessary to use POT if the accuracy is satisfactory. |
| 6 | 78822 | The GNA plugin overhead may be unexpectedly high. | GNA Plugin | N/A |
| 7 | 80699 | LSTM sequence models are implemented using a tensor iterator, which improves first inference latency and reduces required memory; some performance degradation is expected. | GPU Plugin | Expected behavior: the tensor iterator implementation trades some throughput for improved first inference latency and lower memory usage. |
| 8 | 84812 | HDDL: the benchmark app fails when receiving a precompiled .blob file as the input model. | IE Samples | |
| 9 | 86683 | The app fails at inference after one month of operation. | OpenCL driver | Update to the latest OpenCL driver. |
New and Changed in 2022.3.1 LTS
This is a Long-Term Support (LTS) release. LTS versions are released every year and supported for two years (one year for bug fixes and two years for security patches). Read Intel® Distribution of OpenVINO™ toolkit Release Policy for more details.
Major Features and Improvements Summary
- This 2022.3.1 LTS release provides functional bug fixes and minor capability changes for the previous 2022.3 Long-Term Support (LTS) release, enabling developers to deploy applications powered by Intel® Distribution of OpenVINO™ toolkit with confidence.
- Intel® Movidius™ VPU-based products are supported in this release.
OpenVINO™ Runtime (previously known as Inference Engine)
Overall updates
- Fixed modelPath null string issue causing inconsistent results in the calculateFileInfo function.
MYRIAD plugin
- Fixed the performance degradation issue caused by simultaneous usage of two or more VPUs.
- The MX R20 MDK firmware has been upgraded and is now compatible with OpenVINO 2022.3.1.
- Fixed an issue with the face-detection-0205 and 0206 models failing on VPU with the error "Const layer Constant_23182" due to incorrect dimensions in the output data.
- Fixed an issue where the application would hang when the VPU reset by the HDDL daemon failed.
- Fixed an issue with discrepancies in CPU and MYRIAD output.
HDDL plugin
- Fixed an issue with the HDDL daemon not starting on OpenVINO 2022.2.
- Fixed an issue with the HDDL plugin failing on Windows under high load in certain cases.
GPU
- Fixed shape inference for 0-D broadcast on GPU.
- Optimized memory consumption.
- Fixed an issue with the exception that occurred while deserializing models with SCALAR layout input or output.
Distribution (where to download release)
The OpenVINO product selector tool (available at www.openvino.ai) provides the easiest access to the right packages that match your desired tools/runtime, OS, version and distribution options.
- This 2022.3.1 LTS release is available via the following distribution channels:
- pypi.org: https://pypi.org/project/openvino-dev/2022.3.1/
- DockerHub*: https://hub.docker.com/u/openvino
- Release Archives on S3 storage (specifically for C++): https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3.1/
- APT & YUM
Known Issues
| # | Jira ID | Description | Component | Workaround |
|---|---------|-------------|-----------|------------|
| 1 | 24101 | Performance and memory consumption may degrade if layers are not 64-byte aligned. | GNA plugin | Avoid layers that are not 64-byte aligned to make a model GNA-friendly. |
| 2 | 33132 | [IE CLDNN] Accuracy and last-tensor-check regressions for FP32 models on ICLU GPU. | clDNN Plugin | |
| 3 | 42203 | Customers located in the People's Republic of China (PRC) may experience issues with downloading content from the new storage https://storage.openvinotoolkit.org/ due to PRC firewall restrictions. | OMZ | Use the branch https://github.com/openvinotoolkit/open_model_zoo/tree/release-01org, which links to the old storage at download.01.org. |
| 4 | 24757 | Heterogeneous execution mode is not supported for Intel® GNA. | GNA Plugin | Split the model to run unsupported layers on the CPU. |
| 5 | 58806 | For models optimized with the Post-training Optimization Tool (POT), memory consumption and performance may be worse than for the original models (i.e., those using the internal quantization algorithm). | GNA Plugin | It is not necessary to use POT if the accuracy is satisfactory. |
| 6 | 78822 | The GNA plugin overhead may be unexpectedly high. | GNA Plugin | N/A |
| 7 | 80699 | LSTM sequence models are implemented using a tensor iterator, which improves first inference latency and reduces required memory; some performance degradation is expected. | GPU Plugin | Expected behavior: the tensor iterator implementation trades some throughput for improved first inference latency and lower memory usage. |
| 8 | 54460 | Online package installation issue in the PRC region. | Install | In case of connectivity issues during installation, PRC customers are advised to use local repository mirrors, such as https://developer.aliyun.com/mirror/. |
| 9 | 84812 | HDDL: the benchmark app fails when receiving a precompiled .blob file as the input model. | IE Samples | |
| 10 | 86683 | The app fails at inference after one month of operation. | OpenCL driver | Update to the latest OpenCL driver. |
New and Changed in 2022.3 LTS
This is a Long-Term Support (LTS) release. LTS versions are released every year and supported for two years (one year of bug fixes and two years of security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for details.
Major Features and Improvements Summary
- 2022.3 LTS release provides functional bug fixes, and capability changes for the previous 2022.2 release. This new release empowers developers with new performance enhancements, more deep learning models, more device portability and higher inferencing performance with less code changes.
- Broader model and hardware support – Optimize & deploy with ease across an expanded range of deep learning models including NLP, and access AI acceleration across an expanded range of hardware.
- Full support for 4th Generation Intel® Xeon® Scalable processor family (code name Sapphire Rapids) for deep learning inferencing workloads from edge to cloud.
- Full support for Intel's discrete graphics cards, such as Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing workloads in intelligent cloud, edge, and media analytics scenarios.
- Improved performance when leveraging the throughput hint on the CPU plugin for 12th and 13th Generation Intel® Core™ processors (code names Alder Lake and Raptor Lake).
- Enhanced “Cumulative throughput” and selection of compute modes added to AUTO functionality, enabling multiple accelerators (e.g. multiple GPUs) to be used at once to maximize inferencing performance.
- Expanded model coverage - Optimize & deploy with ease across an expanded range of deep learning models.
- Broader support for NLP models and use cases like text to speech and voice recognition.
- Continued performance enhancements for computer vision models, including StyleGAN2, Stable Diffusion, PyTorch RAFT, and YOLOv7.
- Significant quality and model performance improvements on Intel GPUs compared to the previous OpenVINO toolkit release.
- New Jupyter notebook tutorials for Stable Diffusion text-to-image generation, YOLOv7 optimization and 3D Point Cloud Segmentation.
- Improved API and More Integrations – Easier to adopt and maintain code. Requires fewer code changes, aligns better with frameworks, and minimizes conversion.
- Preview of TensorFlow Front End – Load TensorFlow models directly into OpenVINO Runtime and easily export to the OpenVINO IR format without offline conversion. The new "--use_new_frontend" flag enables this preview; see further details in the Model Optimizer section of these release notes.
- NEW: Hugging Face Optimum Intel – Gain the performance benefits of OpenVINO (including NNCF) when using Hugging Face Transformers. Initial release supports PyTorch models.
- Intel® oneAPI Deep Neural Network Library (oneDNN) has been updated to 2.7 for further refinements and significant improvements in performance for the latest Intel CPU and GPU processors.
- Introducing C API 2.0, to support new features introduced in OpenVINO API 2.0, such as dynamic shapes with CPU, pre-processing and post-process API, unified property definition and usage. The new C API 2.0 shares the same library files as the 1.0 API, but with a different header file.
- Note: Intel® Movidius™ VPU-based products are not supported in this release, but will be added back in a future OpenVINO 2022.3.1 LTS update. In the meantime, for support on those products, please use OpenVINO 2022.1.
- Note: Macintosh* computers using the M1* processor can now install OpenVINO and use the OpenVINO ARM* Device Plug-in on OpenVINO 2022.3 LTS and later. This plugin is community supported; no support is provided by Intel, and it does not fall under the LTS 2-year support policy. Learn more here: https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html
Support Change and Deprecation Notices
- OpenVINO Runtime Movidius™ VPU deprecation notice:
- Intel® Neural Compute Stick 2
- Intel® Vision Accelerator Design with Intel® Movidius™ VPUs (HDDL)
- The Compile tool is deprecated; the recommended approach is to add preprocessing steps to the model manually and save it to an IR file. The tool will be removed in the 2023.0 release.
| | 2022.x.x LTS (use OpenVINO 2022.1 until 2022.3.1 LTS is released) | 2023.0 Release (~Q1'2023) |
|---|---|---|
| OpenVINO Runtime support changes | | OpenVINO Runtime will not support the listed devices |
- OpenVINO C++/C/Python 1.0 APIs
- These will be deprecated in the 2023.1 release. To avoid disruption, please migrate to OpenVINO 2.0 API. Read the transition guide for more information about the migration to API 2.0. OpenVINO API 1.0 will no longer be available in the 2024.0 release.
- OpenVINO Development tools support change notices:
- While the cloud instance of DL Workbench on Developer Cloud for the Edge will continue to be developed and maintained, the locally installed version will be deprecated in this release and moved to critical-bug-fix-only mode. This is in recognition of the significantly higher usage and capabilities of the Developer Cloud version, which is used to graphically test performance on a range of Intel hardware. Developers using the local on-machine version are advised to migrate to the edition on Developer Cloud for the Edge to get the latest capabilities.
- Open Model Zoo (OMZ) as a source of models is moving to maintenance mode and public models will no longer be added to OMZ. Moving forward, OpenVINO Notebooks tutorials will demonstrate model conversion and optimization for popular public models – including the full pipeline of downloading, converting, quantizing and deploying inference. OMZ demos will cover difficult-to-implement use cases providing guidance for specific deep learning inference scenarios. External contributions to Intel pre-trained models in OpenVINO IR format will continue to be accepted. However, no additional models will be created and published by Intel.
- Post-training Optimization Tool (POT) and Neural Networks Compression Framework (NNCF) will be consolidated into one tool. Starting in OpenVINO 2023.0 next year, NNCF will become the recommended tool for post-training quantization and quantization-aware training. POT will be deprecated but remain supported during the 2023 OpenVINO releases.
- Changes to system requirements
- Python 3.10 is now supported; support for Python 3.6 is removed.
- Preview support for Ubuntu 22.04 begins, while support for Ubuntu 18.04 will be removed in the upcoming 2023.0 release.
- macOS 12 is now supported, preview support for macOS 13 begins in 2023.0, and support for macOS 10.15 will be deprecated in 2023.0.
System Requirements
Disclaimer. Certain hardware (including but not limited to GPU and GNA) requires manual installation of specific drivers to work correctly. Drivers might require updates to your operating system, including the Linux kernel; please refer to their documentation. Operating system updates should be handled by the user and are not part of the OpenVINO installation.
Intel CPU processors with corresponding operating systems
Intel Atom® processors with Intel® SSE4.2 support
Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
6th - 13th generation Intel® Core™ processors
Intel® Xeon® Scalable Processors (formerly Skylake)
2nd Generation Intel® Xeon® Scalable Processors (formerly Cascade Lake)
3rd Generation Intel® Xeon® Scalable Processors (formerly Cooper Lake and Ice Lake)
4th Generation Intel® Xeon® Scalable Processors (formerly Sapphire Rapids)
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit - Supported with limitations*
- Ubuntu* 20.04 / Kernel 5.15+ long-term support (LTS), 64-bit (minimum requirement for Sapphire Rapids)
- Windows* 10, 64-bit
- Windows* 11
- macOS* 10.15, 64-bit
- macOS* 11
- macOS* 12
- Red Hat Enterprise Linux* 8, 64-bit
Note: Macintosh* computers using the M1* processor use the ARM* Device Plug-in on OpenVINO 2022.3 LTS and later. This plugin is community supported; no support is provided by Intel, and it does not fall under the LTS 2-year support policy. Learn more here: https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html
Intel® Processor Graphics with corresponding operating systems (GEN Graphics)
Intel® HD Graphics
Intel® UHD Graphics
Intel® Iris® Pro Graphics
Intel® Iris® Xe Graphics
Intel® Iris® Xe Max Graphics
Intel® Arc™ GPU (formerly DG2)
Intel® Data Center GPU Flex Series (formerly Arctic Sound-M)
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit - Supported with limitations*
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
- Windows* 11
- Red Hat Enterprise Linux* 8, 64-bit
- Yocto* 3.0, 64-bit
NOTES:
- This installation requires drivers that are not included in the Intel® Distribution of OpenVINO™ toolkit package.
- A chipset that supports processor graphics is required for Intel® Xeon® processors. Processor graphics are not included in all processors. See Product Specifications for information about your processor.
- Recommended OpenCL™ driver versions: 21.38 for Ubuntu* 18.04, 21.48 for Ubuntu* 20.04, and 21.49 for Red Hat Enterprise Linux* 8.
Intel® Gaussian & Neural Accelerator
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit - Supported with limitations*
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
- Windows* 11, 64-bit
NOTE: Supported with limitations* means that Ubuntu 18.04 has shifted to supported-with-limitations status. New Intel hardware launched from the 2022.1 release and beyond is not supported on Ubuntu 18.0x. Starting with 2022.1 (Q1'22), the recommended operating system version is Ubuntu 20.04. This information was part of the deprecation message in the OpenVINO 2021.x Release Notes.
Operating system and development environment requirements:
- Linux* OS
- A Linux* OS build environment needs these components:
- GNU Compiler Collection (GCC)* 7.5 (Ubuntu 18), 8.4 (RHEL 8), 9.3 (Ubuntu 20)
- CMake* 3.13 or higher
- Python* 3.7-3.10
- OpenCV 4.5
- Ubuntu 18.04 with Linux kernel 5.3
- Ubuntu 20.04 with Linux kernel 5.15 or higher (minimum requirement for Sapphire Rapids)
- RHEL 8 with Linux kernel 5.4
- Higher kernel versions might be required for 10th Gen Intel® Core™ processors, 11th Gen Intel® Core™ processors, 11th Gen Intel® Core™ S-Series processors, 12th Gen Intel® Core™ processors, 13th Gen Intel® Core™ processors, or 4th Gen Intel® Xeon® Scalable processors to support CPU, GPU, GNA, or hybrid-core CPU capabilities.
- Windows* 10 version 20H2
- A Windows* OS build environment needs these components:
- Microsoft Visual Studio* 2019
- CMake 3.14 or higher
- Python* 3.7-3.10
- OpenCV 4.5
- Intel® HD Graphics Driver. Required only for GPU.
- Windows* 11 version 21H2
- A Windows* OS build environment needs these components:
- Microsoft Visual Studio* 2019
- CMake 3.14 or higher
- Python* 3.7-3.10
- OpenCV 4.5
- Intel® HD Graphics Driver. Required only for GPU.
- macOS* 10.15
- A macOS build environment requires these components:
- Xcode* 10.3
- OpenCV 4.5
- Python 3.7-3.10
- CMake 3.13 or higher
- macOS* 12.X
- A macOS build environment requires these components:
- Xcode* 10.3
- OpenCV 4.5
- Python 3.7-3.10
- CMake 3.13 or higher
- Note: Macintosh* computers using the M1* processor use the ARM Device Plug-in on OpenVINO 2022.3 LTS and later. This plugin is community supported; no support is provided by Intel, and it does not fall under the LTS 2-year support policy. Learn more here: https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html
- DL frameworks versions:
- TensorFlow* 1.15, 2.5
- MXNet* 1.7.0
- ONNX* 1.12.0
- PaddlePaddle* 2.3
OpenVINO™ Development Tools
- Included list of components and their changes:
- Common changes:
- A Python API call, "convert_model", was introduced to provide full Model Optimizer capabilities in Python with no need to switch to the terminal. Simply import "convert_model" from openvino.tools.mo:

```python
from openvino.tools.mo import convert_model

ov_model = convert_model("resnet50.onnx")
```
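The converted model can then be saved to OpenVINO IR for later reuse. A minimal sketch, assuming resnet50.onnx is a placeholder for your own model file:

```python
from openvino.runtime import serialize
from openvino.tools.mo import convert_model

# Convert the source model in memory (the file name is a placeholder).
ov_model = convert_model("resnet50.onnx")

# Save the converted model as OpenVINO IR (.xml + .bin).
serialize(ov_model, "resnet50.xml", "resnet50.bin")
```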
- The InferenceEngine::InferRequest::SetBlob() API, which allows setting new pre-processing for input blobs, will be deprecated in the 2023.0 release and removed in the 2023.1 release. At the same time, NV12 and I420 support in the legacy preprocessing will be deprecated in 2023.0 and removed in 2023.1. To use NV12 or I420 pre-processing, migrate to the OpenVINO 2.0 API (see the sketch below).
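For illustration, a minimal sketch of NV12 pre-processing with the API 2.0 PrePostProcessor; the model path is a placeholder, and real code would also declare element types and layouts as needed:

```python
from openvino.preprocess import ColorFormat, PrePostProcessor
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR path

ppp = PrePostProcessor(model)
# Declare that the input arrives as two-plane NV12 ...
ppp.input().tensor().set_color_format(ColorFormat.NV12_TWO_PLANES)
# ... and let the runtime convert it to the BGR format the model expects.
ppp.input().preprocess().convert_color(ColorFormat.BGR)
model = ppp.build()
```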
- The --data_type option is deprecated; the new --compress_to_fp16=<true/false> option should be used to control weights and biases precision. --data_type FP32 is now equivalent to --compress_to_fp16=false.
- The --tensorflow_use_custom_operations_config option is deprecated; use --transformations_config instead.
- Options previously marked as deprecated in the Model Optimizer help are now formally deprecated: --disable_fusing, --disable_resnet_optimization, --finegrain_fusing, --enable_concat_optimization, --disable_weights_compression, --disable_nhwc_to_nchw.
- Model Optimizer
- ONNX*:
- Added support for the following operations:
- Unique
- IsNaN
- IsInf
- IsFinite
- TensorFlow*:
- NOTE: There is no full parity yet between the legacy Model Optimizer TensorFlow frontend and the new TensorFlow Frontend, so the primary path for model conversion is still the legacy frontend. Model coverage and performance are continuously improving, so some conversion-phase failures and performance or accuracy issues might occur if a model is not yet covered. Known limitations include object detection models and all models with transformation configs, models with TF1/TF2 control flow, the Complex type, and training parts.
- NOTE: for "read_model" case only *.pb format is supported while Model Optimizer(or "convert_model" call) will accept other formats as well which are accepted by existing legacy frontend
- Added support for the following operations:
- Unique
- IsNaN
- IsInf
- IsFinite
- The TensorFlow Frontend is available as a preview feature starting from 2022.3. That means you can start experimenting with the "--use_new_frontend" option passed to Model Optimizer to enjoy improved conversion time for a limited scope of models, or load TensorFlow models directly through a "read_model" call, as shown in the sketch below.
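A minimal sketch of the direct-loading path; the frozen-graph file name is a placeholder:

```python
from openvino.runtime import Core

core = Core()
# The TensorFlow Frontend reads frozen *.pb graphs directly,
# with no offline conversion step.
model = core.read_model("frozen_graph.pb")  # placeholder path
compiled = core.compile_model(model, "CPU")
```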
- Post-Training Optimization Tool
- Introduced a new INT8 quantization scheme, the "CPU_SPR" value of the "target_device" configuration parameter, to increase the throughput of INT8 models on 4th Generation Intel® Xeon® Scalable processors (code name Sapphire Rapids) compared to the default quantization scheme (a configuration sketch follows this list).
- Added specific quantization schemes for GNA 3.0 and GNA 3.5 devices where different combinations of input/weights are supported: int8/int16, int16/int8, int8/int8, and int16/int16.
- Added INT8 quantization support for LSTM cell on CPU device.
- Added INT8 quantization support for GRU cell and SoftSign operation on GNA device.
- Extended models coverage: +5 INT8 models enabled.
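As an illustration, a hedged sketch of how the new target_device value could be selected in a POT algorithm configuration; every value except "target_device" is illustrative, not prescribed by these release notes:

```python
# Sketch of a POT DefaultQuantization configuration that selects the new
# CPU_SPR quantization scheme introduced in this release.
algorithms = [
    {
        "name": "DefaultQuantization",
        "params": {
            "target_device": "CPU_SPR",  # new scheme for Sapphire Rapids
            "preset": "performance",     # illustrative value
            "stat_subset_size": 300,     # illustrative value
        },
    }
]
```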
NOTE: The Post-training Optimization Tool (POT) will be merged with the Neural Networks Compression Framework (NNCF) in the next OpenVINO release (2023.0). NNCF will become the recommended tool for post-training and in-training quantization, while POT will still be supported during the 2023 OpenVINO releases. NNCF will be installed by default with openvino-dev starting in 2023.0.
- Benchmark Tool allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes.
- Accuracy Checker is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets.
- Annotation Converter is a utility that prepares datasets for evaluation with Accuracy Checker.
- Model Downloader and other Open Model Zoo tools. Open Model Zoo (OMZ) as a source of models is moving to maintenance mode.
OpenVINO™ Runtime (Inference Engine)
- Common changes
- New documentation for OpenVINO developers, covering architecture, core components, frontends, and the operation-enabling flow.
- Enhanced Conditional Compilation feature enables developers to automatically create a minimal-size OpenVINO build for a specific model or set of models.
- Graph representation
- Introduced opset10. The latest opset contains new operations listed on this page. Not all OpenVINO™ toolkit plugins support every operation in the new opset.
- OpenVINO Python API
- Note: The default and recommended way to get OpenVINO™ Runtime for Python developers is to install via 'pip install openvino'.
- Added contribution guide about the OpenVINO Python API for external developers.
- Added the Interpolate, Unique, IsInf, IsFinite, and IsNaN operators to the Python API.
- Upgraded pybind (an OpenVINO third-party library) to version 2.10.1.
- ONNX (an OpenVINO third-party dependency) has been upgraded to version 1.12.0.
- Better alignment of Python API OpenVINO Type class with its C++ counterpart.
- Code and docstrings for ops in Python API have been refactored to the standardized form.
- Deprecated the internal modules pyopenvino and offline_transformations; they will be removed in 2023.0.
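A minimal end-to-end sketch of the Python API flow, assuming a placeholder IR file model.xml with static shapes and a dummy input:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")        # placeholder IR path
compiled = core.compile_model(model, "CPU")

# Build a zero-filled input matching the model's first input shape.
dummy = np.zeros(compiled.inputs[0].shape, dtype=np.float32)

# Run one synchronous inference and fetch the first output.
results = compiled([dummy])
print(results[compiled.outputs[0]].shape)
```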
- OpenVINO C API
- Introducing C API 2.0, to support new features introduced in OpenVINO API 2.0, such as dynamic shapes with CPU, pre-processing and post-process API, unified property definition and usage. The new C API 2.0 shares the same library files as the 1.0 API, but with a different header file.
- AUTO device
- Improved support for multiple Intel® Processor Graphics (GPU) devices with the cumulative throughput performance hint (see the sketch below).
- Added capability to pass device-specific (such as CPU or GPU) properties through AUTO configurations.
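A minimal sketch of enabling cumulative throughput through AUTO; the model path is a placeholder:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR path

# AUTO with the cumulative-throughput hint can spread infer requests
# across all available accelerators (e.g., multiple GPUs) at once.
compiled = core.compile_model(
    model,
    "AUTO",
    {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"},
)
```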
- Intel CPU
- Full support for 4th Generation Intel® Xeon® Scalable processors (code name Sapphire Rapids) with improved performance and broader network coverage. To take advantage of the processor's Intel® Advanced Matrix Extensions (AMX) capability, it is recommended to use Windows 11 or a Linux kernel of 5.16 or above.
- Improved throughput performance on inference workloads for 12th Generation Intel® Core™ processors (code name Alder Lake) and 13th Generation Intel® Core™ processors (code name Raptor Lake) via the throughput performance hint. To take advantage of this hybrid-aware capability, it is recommended to use Windows 11 or a Linux kernel of 5.15 or above.
- Reduced memory consumption with AMX on 4th Generation Intel® Xeon® Scalable processors.
- Improved performance for dynamic shapes through memory and thread management optimizations.
- Added low-precision (INT8) support for LSTM and all RNN operators. Improved performance for neural networks based on the Recurrent Neural Network Transducer (RNNT), such as audio recognition and handwriting/text recognition, via quantization support and additional optimizations.
- Additional throughput performance improvements for sparse quantized transformers on 4th Generation Intel® Xeon® processors with sparse weights decompression feature.
- Improved performance for NLP workloads.
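A minimal sketch of the throughput hint on the CPU plugin; the model path is a placeholder:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR path

# The THROUGHPUT hint lets the CPU plugin choose stream and thread
# settings, including hybrid-core-aware ones on Alder Lake / Raptor Lake.
compiled = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

# Query how many parallel infer requests saturate the configured device.
n_requests = compiled.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")
```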
- Intel® Processor Graphics (GPU)
- Full support for Intel’s discrete graphics cards, Intel® Data Center GPU Flex Series, and Intel® Arc™ GPU with improved performance and broader network coverage. Performance scales with batch size.
- Integration of Intel oneDNN 2.7 optimization library to utilize XMX acceleration.
- Improved performance for NLP models like Stable Diffusion and GPT.
- Improved first inference latency when kernel caching is enabled.
- Developed model caching as a preview feature. First inference latency can be significantly improved for a limited number of models; full support will be completed in OpenVINO 2023.0. See the sketch below.
- Improved parity between GPU and CPU by supporting 16 new operations.
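A minimal sketch of enabling the caching described above; the cache directory and model path are placeholders:

```python
from openvino.runtime import Core

core = Core()
# Point the runtime at a cache directory; compiled kernels/blobs are
# stored there so subsequent compilations of the same model start faster.
core.set_property({"CACHE_DIR": "model_cache"})  # placeholder directory

model = core.read_model("model.xml")  # placeholder IR path
compiled = core.compile_model(model, "GPU")
```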
- Intel® Gaussian & Neural Accelerator (Intel® GNA)
- Introduced support for the GRUCell layer.
- Relaxed the requirements on model size.
- OpenVINO Runtime C/C++/Python API usage samples
- Updated OpenVINO C API usage samples to C API 2.0.
- ONNX Frontend
- Added support for the EyeLike-9, GenerateProposals-9, IsInf-10, IsFinite-10, and IsNaN-10 operators.
- Input freezing can now be realized with the set_tensor_value method.
- Introduced support for ONNX metadata. Key-value metadata are now read from a model and can be used during inference via the new Meta API. The values are also serialized during IR conversion.
- Fixed handling of the scenario where the scale and zero_point inputs of QuantizeLinear/DequantizeLinear are scalars (the axis attribute should be ignored in such a case). The problem impacted many quantized models, such as ssd_mobilenet and unet.
- Paddle Frontend
- Added support for PaddlePaddle 2.3.
- Improved PaddlePaddle operator coverage; details are available in the OpenVINO documentation on PaddlePaddle supported layers.
- TensorFlow Frontend
- NOTE: for "read_model" case only *.pb format is supported while Model Optimizer(or "convert_model" call) will accept other formats as well which are accepted by existing legacy frontend
- NOTE: There is no full parity yet between the legacy Model Optimzer frontend and the new Tensorflow Frontend so the primary path for model conversion is the legacy frontend. Model coverage and performance are continuously improving so some conversion phase failures and performance issues might occur in case the model is not yet covered. Known limitations are object detection models and all models with transformation configs, models with TF1/TF2 control flow, Complex type and training parts.
- Tensorflow Frontend is available as a preview feature starting from 2022.3. That means that you can start experimenting with directly loading TensorFlow models through "read_model" call.
Distribution (where to download release)
The OpenVINO product selector tool (available at www.openvino.ai) provides the easiest access to the right packages that match your desired tools/runtime, OS, version and distribution options.
- This 2022.3 LTS release is available on the following distribution channels:
- pypi.org: https://pypi.org/project/openvino-dev/
- DockerHub*: https://hub.docker.com/u/openvino
- Release Archives on S3 storage (specifically for C++): https://storage.openvinotoolkit.org/repositories/openvino/packages/
- APT & YUM
OpenVINO Model Server
- Improved model serving documentation at docs.openvino.ai
- New pre-built container images and Dockerfiles:
- Pre-built image with Intel® Data Center GPU Flex Series and Intel® Arc™ GPU dependencies
- Added extensions to the KServe API that fully enable the C++ and Python client libraries from Triton, providing full compatibility with Triton clients.
- Extended the Python and C++ client samples using the KServe API.
- Preview implementation of a C/C++ API to OpenVINO Model Server internal functions: OpenVINO Model Server can be loaded as a dynamic library, and inference calls can be made without network calls or input data copies (with OpenVINO Model Server model management).
- Preview of the TF model importer: it allows importing TensorFlow models directly from model repositories without conversion to the OpenVINO IR format.
Open Model Zoo
NOTE: Open Model Zoo (OMZ) as a source of models is moving to maintenance mode. Check out model tutorials in Jupyter notebooks (see the OpenVINO Notebooks section below).
Extended the Open Model Zoo with additional CNN pre-trained models and pre-generated Intermediate Representations (.xml + .bin):
- smartlab-object-detection-0001
- smartlab-object-detection-0002
- smartlab-object-detection-0003
- smartlab-object-detection-0004
- smartlab-action-recognition-0001
- smartlab-sequence-modelling-0001
- smartlab-sequence-modelling-0002
The list of public models extended with the support for the following models:
| Model Name | Task | Framework | Publication |
|------------|------|-----------|-------------|
| erfnet | Semantic segmentation | PyTorch | 2017 |
OpenVINO Ecosystem
- Jupyter Tutorials
- YOLOv7 Optimization: explains how to convert and optimize the YOLOv7 PyTorch model with OpenVINO.
- Stable Diffusion Text-to-Image Generation: demonstrates how to convert and run a Stable Diffusion model from Hugging Face using OpenVINO; users can provide input text to generate an image.
- 3D Point Cloud Segmentation: demonstrates processing 3D point cloud data and then running segmentation with OpenVINO.
- Neural Networks Compression Framework (pip install nncf)
- Added TFOpLambda layer support with TFModelConverter, TFModelTransformer, and TFOpLambdaMetatype.
- Added TensorFlow 2.5.x support.
- Added pruning support for Reshape and Linear operations.
- Introduced experimental support for post-training quantization (PTQ) of ONNX models; added a PTQ API for ONNX and samples for image classification, object detection, and semantic segmentation use cases.
- Introduced the experimental BootstrapNAS algorithm for finding high-performing sub-networks via super-network optimization.
- These changes ship in the new NNCF 2.2.0 and NNCF 2.3.0 releases.
- OpenVINO™ Deep Learning Workbench
- While the cloud instance of DL Workbench on Developer Cloud for the Edge will continue to be developed and maintained, the locally installed version will be deprecated after this release and moved to critical-bug-fix-only mode. This is in recognition of the significantly higher usage and capabilities of the Developer Cloud version, which is used to graphically test performance on a range of Intel hardware. Developers using the local on-machine version are advised to migrate to the edition on Developer Cloud for the Edge to get the latest capabilities.
- Given the locally installed DL Workbench will be deprecated after this release, the current version will now be available in open source on the public GitHub repository, enabling contributions from the community.
- DL Workbench now provides initial integration with the Hugging Face model hub, allowing users to easily import and optimize pre-trained models, which are widely used for natural language processing (NLP) tasks.
- The Model Downloading Page has been redesigned to provide a more intuitive and user-friendly experience for users. The new design allows users to easily search for models, filter results by several categories, and view detailed information about each model.
Known Issues
| # | Jira ID | Description | Component | Workaround |
|---|---------|-------------|-----------|------------|
| 1 | 24101 | Performance and memory consumption may degrade if layers are not 64-byte aligned. | GNA plugin | Avoid layers that are not 64-byte aligned to make a model GNA-friendly. |
| 2 | 33132 | [IE CLDNN] Accuracy and last-tensor-check regressions for FP32 models on ICLU GPU. | clDNN Plugin | |
| 3 | 42203 | Customers located in the People's Republic of China (PRC) may experience issues with downloading content from the new storage https://storage.openvinotoolkit.org/ due to PRC firewall restrictions. | OMZ | Use the branch https://github.com/openvinotoolkit/open_model_zoo/tree/release-01org, which links to the old storage at download.01.org. |
| 4 | 24757 | Heterogeneous execution mode is not supported for Intel® GNA. | GNA Plugin | Split the model to run unsupported layers on the CPU. |
| 5 | 58806 | For models optimized with the Post-training Optimization Tool (POT), memory consumption and performance may be worse than for the original models (i.e., those using the internal quantization algorithm). | GNA Plugin | It is not necessary to use POT if the accuracy is satisfactory. |
| 6 | 78822 | The GNA plugin overhead may be unexpectedly high. | GNA Plugin | N/A |
| 7 | 80699 | LSTM sequence models are implemented using a tensor iterator, which improves first inference latency and reduces required memory; some performance degradation is expected. | GPU Plugin | Expected behavior: the tensor iterator implementation trades some throughput for improved first inference latency and lower memory usage. |
| 8 | 54460 | Online package installation issue in the PRC region. | Install | In case of connectivity issues during installation, PRC customers are advised to use local repository mirrors, such as https://developer.aliyun.com/mirror/. |
| 9 | 89491 | [Sample] The hello_nv12_input_classification sample fails because BatchedBlob is not supported. | IE Common, IE Samples | |
| 10 | 84812 | HDDL: the benchmark app fails when receiving a precompiled .blob file as the input model. | IE Samples | |
| 11 | 86683 | The app fails at inference after one month of operation. | OpenCL driver | Update to the latest OpenCL driver. |
Included in This Release
The Intel® Distribution of OpenVINO™ toolkit is available for download for three operating system families: Windows*, Linux*, and macOS*.
| Component | License | Location | Windows | Linux | macOS |
|-----------|---------|----------|---------|-------|-------|
| OpenVINO (Inference Engine) C++ Runtime: unified API to integrate inference with application logic; OpenVINO (Inference Engine) Headers | Dual licensing: Intel® OpenVINO™ Distribution License (Version May 2021); Apache 2.0 | <install_root>/runtime/*; <install_root>/runtime/include/* | Yes | Yes | Yes |
| OpenVINO (Inference Engine) Python API | Apache 2.0 | <install_root>/python/* | Yes | Yes | Yes |
| OpenVINO (Inference Engine) Samples: samples that illustrate OpenVINO C++/Python API usage | Apache 2.0 | <install_root>/samples/* | Yes | Yes | Yes |
| Compile Tool: a C++ application that enables you to compile a network | | <install_root>/tools/compile_tool/* | Yes | Yes | Yes |
| Deployment Manager: a Python* command-line tool | Apache 2.0 | <install_root>/tools/deployment_manager/* | Yes | Yes | Yes |
Helpful Links
NOTE: Links open in a new window.
All Documentation, Guides, and Resources
Legal Information
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at http://www.intel.com/ or from the OEM or retailer.
No computer system can be absolutely secure.
Intel, Atom, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos
*Other names and brands may be claimed as the property of others.
Copyright © 2022, Intel Corporation. All rights reserved.
For more complete information about compiler optimizations, see our Optimization Notice.