Notes:
For the Release Notes for the 2022.3 LTS version, refer to Release Notes for Intel® Distribution of OpenVINO™ Toolkit 2022.3 LTS.
For the Release Notes for the 2021 version, refer to Release Notes for Intel® Distribution of OpenVINO™ toolkit 2021.
Release Notes for OpenVINO™ toolkit v.2022.2
Introduction
The OpenVINO™ toolkit helps make your AI inferencing faster and easier to deploy.
New and Changed in Release 2022.2
Major Features and Improvements Summary
In this standard release, we have fine-tuned our largest update (2022.1) in 4 years to include support for Intel’s latest CPUs and discrete GPUs for more AI innovation and opportunity.
Note: This release is intended for developers that prefer the very latest features and leading performance. Standard releases will continue to be made available three to four times a year. Long-Term Support (LTS) releases are released every year and supported for 2 years (1 year of bug fixes, and 2 years for security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details. For the latest LTS release, visit our selector tool.
- Broader model and hardware support - Optimize & deploy with ease across an expanded range of deep learning models including NLP, and access AI acceleration across an expanded range of hardware.
- NEW: Support for Intel 13th Gen Core Processor for desktop (code-named Raptor Lake).
- NEW: Preview support for Intel's discrete graphics cards, Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing workloads in intelligent cloud, edge, and media analytics. Hundreds of models are enabled.
- NEW: Test your model performance with preview support for Intel 4th Generation Xeon® processors (code-named Sapphire Rapids).
- Broader support for NLP models and use cases like text-to-speech and voice recognition. Reduced memory consumption when using Dynamic Input Shapes on CPU. Improved efficiency for NLP applications.
- Frameworks Integrations – More options that provide minimal code changes to align with your existing frameworks
- OpenVINO Execution Provider for ONNX Runtime gives ONNX Runtime developers more choice for performance optimizations by making it easy to add OpenVINO with minimal code changes (see the sketch after this list).
- NEW: Accelerate PyTorch models with ONNX Runtime using OpenVINO™ integration with ONNX Runtime for PyTorch (OpenVINO™ Torch-ORT). Now PyTorch developers can stay within their framework and benefit from OpenVINO performance gains.
- OpenVINO Integration with TensorFlow now supports more deep learning models with improved inferencing performance.
NOTE: The above frameworks integrations are not included in the install packages. Visit the respective GitHub links for more information. These products are intended for those who have not yet installed native OpenVINO.
- More portability and performance - See a performance boost straight away with automatic device discovery, load balancing & dynamic inference parallelism across CPU, GPU, and more.
- NEW: Introducing a new performance hint ("cumulative throughput") in the AUTO device, enabling multiple accelerators (e.g., multiple GPUs) to be used at once to maximize inferencing performance.
- NEW: Introducing Intel® FPGA AI Suite support, which enables real-time, low-latency, and low-power deep learning inference in an easy-to-use package.
NOTE: The Intel® FPGA AI Suite is not included in our distribution packages; request information here to learn more.
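The OpenVINO Execution Provider mentioned under Frameworks Integrations plugs into an ordinary ONNX Runtime script. A minimal sketch, assuming the separate onnxruntime-openvino package is installed; the model path and the device_type option value are placeholders, not taken from these notes:

```python
import numpy as np
import onnxruntime as ort

# Request the OpenVINO Execution Provider; ONNX Runtime falls back to CPU otherwise.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "CPU_FP32"}],  # assumed example option
)

input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input shape
outputs = session.run(None, {input_name: dummy})
```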
Critical bug fixes and enhancements:
Model Optimizer
- Improved handling of the --input and --output parameters when importing ONNX models with the new ONNX Frontend
- Fixed the scenario where both reverse_input_channels and a layout change are applied in the same conversion
- The new (default) ONNX Frontend naming scheme is now aligned with legacy behavior
AUTO and MULTI Device Plugin
- Added new "cumulative throughput" performance hint to AUTO, which enables concurrent inferences on multiple hardware devices, such as multiple GPUs (see the sketch after this list).
- Added configuration to exclude a device from the list of device selections using a minus prefix. For example, "-CPU" will prevent the CPU from being selected as a device for inference execution.
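A minimal sketch of both items above, assuming an OpenVINO 2022.2 Python environment; the model path is a placeholder, and passing the device exclusion directly in the device string is an assumption rather than the only supported form:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder model

# Cumulative throughput: let AUTO spread inference requests across all suitable devices.
compiled_all = core.compile_model(
    model, "AUTO", {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"}
)

# Device exclusion: the minus prefix removes a device from AUTO's candidate list,
# so the CPU is never selected for inference execution here.
compiled_no_cpu = core.compile_model(model, "AUTO:-CPU")
```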
Intel CPU
- Integrated oneDNN 2.6.
- Preview support for Intel 4th Generation Xeon® processors (code-named Sapphire Rapids).
- Enabled new operators in the CPU plugin, including GenerateProposals-9, ROIAlign-9, MulticlassNms-9, Eye-9, SoftSign-9, RDFT-9, NonMaxSuppression-9, and IRDFT-9. Refer to Opset-9 for details.
- Implemented a memory sharing mechanism to optimize the memory footprint for dynamic shape input scenario on CPU.
Intel Graphics Processing Units (GPU)
- Improved first inference latency when OpenCL cache is enabled
- Optimized performance of INT8 FullyConnected layer, which especially enhances the performance of BERT and GPT models
Intel Vision Accelerator Design with Intel Movidius VPUs (HDDL)
- Fixed accuracy checker for NETS validation
Intel Gaussian Neural Accelerator (GNA)
- Fixed LOG_WARNING and LOG_DEBUG modes for GNA Plugin and speech_sample.
- Fixed 2D convolution decomposition for POT optimized models
- Fixed importing of models for GNA when tensor names are missing
Open Model Zoo and Examples
- Updated example (classification_sample_async.py) to use asynchronous inference API with recommended parameters.
Python API
- Load data from PyTorch, TensorFlow, and other NumPy array-like objects through the Python API (see the sketch below)
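A minimal sketch of that behaviour, assuming a single-input model; the path and input name are placeholders, and handing over framework tensors relies on them exposing the NumPy array interface:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")  # placeholder

data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # any NumPy array-like object
results = compiled([data])               # positional input
# results = compiled({"input": data})    # or keyed by the input name, if known
```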
Docker CI
- Updated Dockerfiles to build images for Docker Hub, Red Hat catalog, and Azure Marketplace based on OpenVINO 2022.2
- Sample applications are now pre-compiled inside the container images – no longer necessary to compile samples at runtime
- Model API from Open Model Zoo is now included in dev images
- Removed permissions issues by giving the ‘openvino’ account ownership of /opt/intel directory
- Added git and make to container images to simplify compilation of Open Model Zoo demos – no longer necessary to install these dependencies separately
- Custom OpenCV 4.6 with G-API is now included in dev images – no longer necessary to run the `download_opencv.sh` script at runtime
Neural Networks Compression Framework (pip install nncf)
- Bootstrapped NAS - Hardware-aware Neural Architecture Search for AI model optimization.
OpenVINO™ Model Server
- Updated support allows for a drop-in replacement of KServe API calls when serving OpenVINO models.
- Added a preview of metrics monitoring in Prometheus text format, including:
- Counters for successful and failed requests
- Gauge metrics for internal inference queue size and in-use count, per model and version
- Latency histograms per model and version
- Direct support for PaddlePaddle models – now includes PaddlePaddle model importer enabling deployment for models trained in the PaddlePaddle framework directly from a model repository.
- Changed the sequence of starting gRPC/REST endpoints before the initial loading of models. With this version, the model server initiates gRPC and REST endpoints (if enabled) before models are loaded. Prior to this change, an active network interface was acting as the readiness indicator.
OpenVINO Ecosystem
DL Streamer
This open-source streaming media analytics framework uses OpenVINO™ Runtime in the backend to optimize AI models on Intel® hardware platforms. For more information, visit dlstreamer.github.io.
OpenCV Library
OpenCV is an optional dependency for OpenVINO and is not included in the toolkit; you may find instructions on how to install it here.
Known Issues
Jira Ticket ID | Description | Component |
---|---|---|
89491 | [Sample]hello_nv12_input_classification sample BatchedBlob not supported | IE Common, IE Samples |
87081 | [TF2][GPU] V2_3D_UNet model failed on last tensor check | IE GPU Plugin |
85005 | Unable to infer .blob generated by compile_tool | IE HDDL Plugin |
88094 | Compiling model with plugin config throws an exception on HDDL | IE HDDL Plugin |
87546 | Security Barrier Camera Demo does not work with custom model | IE Integration |
90232 | MULTI does not support the configuration key “use_device_mem” with single or multi GPU cards. | IE Multi-Device Plugin |
86146 | Different results with Python and C++ benchmark_app | Benchmark App |
89134 | Sporadic Error copying file during Ubuntu20 build | IE Python |
84812 | HDDL: benchmark app fails when receiving precompiled .blob file as input model | IE Tools |
88442 | online installer failing for 2022.1 | Install |
90204 | In Jupyter notebook, the example will hang when using "async_inference" Python API on Mac systems. | Jupyter Notebooks |
86683 | App fails to inference after 1 month of operation | OpenCL driver |
91868 | Migration to oneDNN 2.6 brings generic performance improvements, including for the newly introduced 4th Gen Xeon platforms. A performance regression is observed with INT8 (s8/s8) convolution on platforms that only support u8s8 (3rd Gen Xeon), causing a performance drop for INT8 models with large convolution kernel sizes (e.g., YOLOv4). This will be fixed in the next release with the integration of oneDNN 2.7. | IE CPU Plugin |
88560 | The binary post-ops feature combined with brgemm-based kernels introduces extra operator overhead in certain configurations, such as inner-product plus FakeQuantize with s8s8s8 on AVX-512 platforms, causing regressions on INT8 models with such configurations, like BERT-large. | IE CPU Plugin |
91908 | forward-tacotron-duration-prediction in quantization with error DLDTInt8Calibrator failed to calibrate dldt model: executor crashed | Model Optimizer |
System Requirements
Disclaimer. Certain hardware (including but not limited to GPU and GNA) requires the installation of specific drivers to work correctly. Drivers might require updates to your operating system, including the Linux kernel; refer to the documentation to learn more. Operating system updates should be handled by the user and are not part of the OpenVINO installation.
Intel® CPU processors with corresponding operating systems:
Intel® Atom* processor with Intel® SSE4.2 support
Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
6th - 13th generation Intel® Core™ processors
Intel® Xeon® Scalable Processors
Operating Systems:
- Ubuntu* 20.04 long-term support (LTS), 64-bit (recommended)
- Ubuntu* 18.04 long-term support (LTS), 64-bit
- Windows* 11, 64-bit
- macOS* 10.15, 64-bit
- Red Hat Enterprise Linux* 8, 64-bit
Intel® Discrete Graphics
Intel® Data Center GPU Flex Series
Intel® Arc™ GPU Series
Intel® Processor Graphics with corresponding operating systems (GEN Graphics)
Intel® HD Graphics
Intel® UHD Graphics
Intel® Iris® Pro Graphics
Intel® Iris® Xe Graphics
Intel® Iris® Xe Max Graphics
Operating Systems:
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 11, 64-bit
- Red Hat Enterprise Linux* 8, 64-bit
- Yocto* 3.0, 64-bit
NOTES:
- A chipset that supports processor graphics is required for Intel® Xeon® processors. Processor graphics are not included in all processors. See Product Specifications for information about your processor.
- Recommended OpenCL™ driver versions: 21.38 for Ubuntu* 18.04, 21.48 for Ubuntu* 20.04, and 21.49 for Red Hat Enterprise Linux* 8
Intel® Gaussian & Neural Accelerator
Operating Systems:
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 11, 64-bit
Intel® FPGA AI Suite
By leveraging a graph compiler, OpenVINO is enabled on the following FPGA devices. For more information, visit this overview. Supported devices include:
- Intel® Agilex™ FPGA
- Intel® Cyclone® 10 GX FPGA
- Intel® Arria® 10 FPGA
VPU processors with corresponding operating systems
Intel® Vision Accelerator Design with Intel® Movidius™ Vision Processing Units (VPU) with corresponding operating systems
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
Intel® Movidius™ Neural Compute Stick 2 with corresponding operating systems
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
- Raspbian* (target only)
Deprecation Notice: Intel® Corporation has discontinued the Intel® Movidius™ Neural Compute Stick 2. 2022.3 LTS will be the last version of OpenVINO to support the Neural Compute Stick 2.
AI Edge Computing Board with Intel® Movidius™ Myriad™ X C0 VPU, MYDX x 1 with corresponding operating systems
Operating Systems:
- Windows* 10, 64-bit
NOTE: Supported with limitations* - Ubuntu 18.04 is shifted to supported with limitations. New Intel hardware launched from the 2022.1 release and beyond will not be supported in Ubuntu 18.0x. Starting 2022.1 (Q1’22), the new recommended operating system version is Ubuntu 20.04. This information was part of the deprecation message in the OpenVINO 2021.x Release Notes.
Deprecation Notice: Starting in release 2022.3, OpenVINO will no longer support Python 3.6 due to the end of support by the Python community. Update to a newer version (currently 3.7-3.9) to avoid interruptions in your next OpenVINO update.
Operating system's and developer's environment requirements:
- Linux OS
- Ubuntu 20.04 with Linux kernel 5.4
- RHEL 8 with Linux kernel 5.4
- Newer kernel versions are required for Ice Lake, Tiger Lake, and Alder Lake to enable GPU capabilities
- A Linux OS build environment needs these components:
- Python* 3.6-3.9
- Intel® HD Graphics Driver. Required for inference on GPU.
- Note: GNU Compiler Collection and CMake are needed for building from source:
- GNU Compiler Collection (GCC)* 8.4 (RHEL 8), 9.3 (Ubuntu 20)
- CMake* 3.10 or higher
- Windows* 11 (10 version 20H2 recommended)
- A Windows* OS build environment needs these components:
- Microsoft Visual Studio* 2019
- CMake* 3.14 or higher
- Python* 3.6-3.9
- Intel® HD Graphics Driver. Required for inference on GPU.
- macOS* 12 (10.15 recommended)
- A macOS* build environment requires these components:
- Xcode* 10.3
- OpenCV* 4.5
- Python* 3.7-3.9
- CMake* 3.13 or higher
- DL frameworks versions:
- TensorFlow* 1.15, 2.5
- MxNet* 1.7.0
- ONNX* 1.8.1
NOTE: This package can be installed on other versions of the DL framework, but only the specified versions above are fully validated.
Included in This Release
- The OpenVINO™ toolkit is available for download for three types of operating systems: Windows*, Linux*, and macOS*.
Component | License | Location | Windows | Linux | macOS |
---|---|---|---|---|---|
OpenVINO™ (Inference Engine) C++ Runtime - Unified API to integrate the inference with application logic; OpenVINO™ (OpenVINO Runtime) Headers | EULA / Apache 2.0 | <install_root>/runtime/*; <install_root>/runtime/include/* | YES | YES | YES |
OpenVINO™ (Inference Engine) Python API | Apache 2.0 | <install_root>/python/* (Not necessary for PIP install) | YES | YES | YES |
OpenVINO™ (Inference Engine) Samples - Samples that illustrate OpenVINO™ C++/Python API usage | Apache 2.0 | <install_root>/samples/* | YES | YES | YES |
Compile Tool - A C++ application that enables you to compile a network | EULA | <install_root>/tools/compile_tool/* | YES | YES | YES |
Deployment Manager - A Python* command-line tool | Apache 2.0 | <install_root>/tools/deployment_manager/* | YES | YES | YES |
Where to Download This Release
The OpenVINO product selector tool provides the easiest access to the right packages that match your desired tools/runtime, OS, version, and distribution options.
This 2022.2 release is available on the following distribution channels:
Release Notes for Intel® Distribution of OpenVINO™ toolkit v.2022.1
Introduction
The Intel® Distribution of OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on the latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance, AI, and deep learning inference deployed from edge to cloud.
The Intel® Distribution of OpenVINO™ toolkit:
- Enables deep learning inference from edge to cloud.
- Supports heterogeneous execution across Intel accelerators, using a common API for the Intel® CPU, Intel® Integrated Graphics, Intel® Discrete Graphics, Intel® Gaussian & Neural Accelerator, Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
- Allows optimizing inference of deep learning models by applying special methods without model retraining or fine-tuning, like post-training quantization.
- Speeds time-to-market through an easy-to-use library of CV functions and pre-optimized kernels.
- Includes optimized calls for CV standards, including OpenCV* (available as a separate download) and OpenCL™.
New and Changed in Release 2022.1
Major Features and Improvements Summary
This release is the biggest upgrade in 3.5 years! Read the release notes below for a summary of the changes.
The 2022.1 release provides functional bug fixes and capability changes on top of the previous 2021.4.2 LTS release. This new release empowers developers with new performance enhancements, more deep learning models, more device portability, and higher inferencing performance with fewer code changes.
Note: This is a standard release intended for developers that prefer the very latest features and leading performance. Standard releases will continue to be made available three to four times a year. Long-Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for 2 years (1 year of bug fixes, and 2 years for security patches). Read Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details. Latest LTS releases: 2020.x LTS and 2021.x LTS.
- Updated, cleaner API:
  - New OpenVINO API 2.0 was introduced. The API aligns OpenVINO inputs/outputs with frameworks. Input and output tensors use native framework layouts and element types. The old Inference Engine and nGraph APIs are available but will be deprecated in a future release.
  - The inference_engine, inference_engine_transformations, inference_engine_lp_transformations, and ngraph libraries were merged into the common openvino library. Other libraries were renamed. Use the common ov:: namespace inside all OpenVINO components. See how to implement an Inference Pipeline using OpenVINO API v2.0 for details.
  - Model Optimizer's API parameters have been reduced to minimize complexity. Performance has been significantly improved for model conversion on ONNX models.
  - It is highly recommended to migrate to API 2.0 because it already offers additional features, and this list will be extended later. The following additional features are supported by API 2.0:
    - Working with dynamic shapes. The feature is quite useful for getting the best performance for Natural Language Processing (NLP) models, super-resolution models, and other models that accept dynamic input shapes (see the sketch after this feature list). Note: Models compiled with dynamic shapes may show reduced performance and consume more memory than models configured with a static shape on the same input tensor size. Setting upper bounds to reshape the model for dynamic shapes or splitting the input into several parts is recommended.
    - Preprocessing of the model to add preprocessing operations to the inference models and fully occupy the accelerator, freeing CPU resources.
  - Read the Transition Guide for migrating to the new API 2.0.
- Portability and Performance:
  - The new AUTO plugin self-discovers available system inferencing capacity based on the model requirements, so applications no longer need to know their compute environment in advance.
  - The OpenVINO™ performance hints are the new way to configure performance with portability in mind. The hints "reverse" the direction of the configuration: rather than mapping the application needs to low-level performance settings and keeping associated application logic to configure each possible device separately, you express a target scenario with a single config key and let the device configure itself in response (see the sketch after this feature list). As the hints are supported by every OpenVINO™ device, this is a completely portable and future-proof solution.
  - Automatic batching functionality, applied via code hints, automatically scales batch size based on the XPU and available memory.
- Broader Model Support:
  - With Dynamic Input Shapes capabilities on CPU, OpenVINO is able to adapt to multiple input dimensions in a single model, providing more complete NLP support. Dynamic Shapes support on additional XPUs is expected in a future dot release.
  - New models with a focus on NLP, a new category - anomaly detection, and support for conversion and inference of select PaddlePaddle models:
    - Pre-trained models: anomaly segmentation focused on industrial inspection; speech denoising is now trainable; plus updates on speech recognition and speech synthesis
    - Combined demo: noise reduction + speech recognition + question answering + translation + text-to-speech
    - Public models: focus on NLP - ContextNet, Speech-Transformer, HiFi-GAN, Glow-TTS, FastSpeech2, and Wav2Vec
- Built with 12th Gen Intel® Core™ "Alder Lake" in mind. Supports the hybrid architecture to deliver enhancements for high-performance inferencing on CPU & integrated GPU.
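A minimal sketch of the dynamic-shape and performance-hint features referenced above, assuming a single-input model (for example an NLP model with a [batch, sequence] input); the model path, dimension bounds, and hint value are placeholders:

```python
from openvino.runtime import Core, Dimension, PartialShape

core = Core()
model = core.read_model("model.xml")  # placeholder single-input model

# Bounded dynamic shape: batch fixed at 1, sequence length may vary from 1 to 512.
model.reshape(PartialShape([1, Dimension(1, 512)]))

# Express the target scenario with a single hint instead of low-level device settings.
compiled = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
```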
System Requirements
Disclaimer. Certain hardware (including but not limited to GPU and GNA) requires the installation of specific drivers to work correctly. Drivers might require updates to your operating system, including the Linux kernel; refer to their documentation to learn more. Operating system updates should be handled by the user and are not part of the OpenVINO installation.
Intel® CPU processors with corresponding operating systems
Intel® Atom* processor with Intel® SSE4.2 support
Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
6th - 12th generation Intel® Core™ processors
Intel® Xeon® Scalable Processors (formerly Skylake)
2nd Generation Intel® Xeon® Scalable Processors (formerly Cascade Lake)
3rd Generation Intel® Xeon® Scalable Processors (formerly Cooper Lake and Ice Lake)
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit - Supported with limitations*
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
- macOS* 10.15, 64-bit
- Red Hat Enterprise Linux* 8, 64-bit
Intel® Processor Graphics with corresponding operating systems (GEN Graphics)
Intel® HD Graphics
Intel® UHD Graphics
Intel® Iris® Pro Graphics
Intel® Iris® Xe Graphics
Intel® Iris® Xe Max Graphics
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit - Supported with limitations*
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
- Red Hat Enterprise Linux* 8, 64-bit
- Yocto* 3.0, 64-bit
NOTES:
- This installation requires drivers that are not included in the Intel® Distribution of OpenVINO™ toolkit package.
- A chipset that supports processor graphics is required for Intel® Xeon® processors. Processor graphics are not included in all processors. See Product Specifications for information about your processor.
- Recommended OpenCL™ driver versions: 21.38 for Ubuntu* 18.04, 21.48 for Ubuntu* 20.04, and 21.49 for Red Hat Enterprise Linux* 8
Intel® Gaussian & Neural Accelerator
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit - Supported with limitations*
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
VPU processors with corresponding operating systems
Intel® Vision Accelerator Design with Intel® Movidius™ Vision Processing Units (VPU) with corresponding operating systems
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit (Linux Kernel 5.2 and below) - Supported with limitations*
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2 with corresponding operating systems
Operating Systems:
- Ubuntu* 18.04 long-term support (LTS), 64-bit - Supported with limitations*
- Ubuntu* 20.04 long-term support (LTS), 64-bit
- Windows* 10, 64-bit
- Raspbian* (target only)
AI Edge Computing Board with Intel® Movidius™ Myriad™ X C0 VPU, MYDX x 1 with corresponding operating systems
Operating Systems:
- Windows* 10, 64-bit
NOTE: Supported with limitations* - Ubuntu 18.04 is shifted to supported with limitations. New Intel hardware launched from the 2022.1 release and beyond will not be supported in Ubuntu 18.0x. Starting 2022.1 (Q1’22), the new recommended operating system version is Ubuntu 20.04. This information was part of the deprecation message in the OpenVINO 2021.x Release Notes.
Operating system's and developer's environment requirements:
- Linux* OS
- Ubuntu 18.04 with Linux kernel 5.3
- Ubuntu 20.04 with Linux kernel 5.4
- RHEL 8 with Linux kernel 5.4
- Newer kernel versions are required for Ice Lake, Tiger Lake, and Alder Lake to enable GPU capabilities
- A Linux* OS build environment needs these components:
- GNU Compiler Collection (GCC)* 7.5 (Ubuntu 18), 8.4 (RHEL 8), 9.3 (Ubuntu 20)
- CMake* 3.10 or higher
- Python* 3.6-3.9
- OpenCV 4.5
- Windows* 10 version 20H2
- A Windows* OS build environment needs these components:
- Microsoft Visual Studio* 2019
- CMake 3.14 or higher
- Python* 3.6-3.9
- OpenCV 4.5
- Intel® HD Graphics Driver. Required only for GPU.
- macOS* 10.15
- A macOS build environment requires these components:
- Xcode* 10.3
- OpenCV 4.5
- Python 3.6-3.9
- CMake 3.13 or higher
- DL frameworks versions:
- TensorFlow* 1.15, 2.5
- MxNet* 1.7.0
- ONNX* 1.8.1
Release Notes for the Intel® Distribution of OpenVINO™ toolkit 2022.1.1
Minor updates and bug fixes for specific use cases and scenarios.
This release provides functional bug fixes and capability updates from the previous 2022.1 release.
Note: This is a standard release intended for developers that prefer the very latest version of OpenVINO. Standard releases will continue to be made available three to four times a year. Long-Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for 2 years (1 year of bug fixes, and 2 years for security patches). Visit Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details on the latest LTS releases.
Component updates:
OpenVINO Runtime:
- Fixed memory leaks with DLLs not unloading and threads lingering in some advanced/rare use cases
- Added a way to unload TBB libraries upon OpenVINO library unloading - use the ov::force_tbb_terminate option of ov::Core
- Added a way to unload OpenVINO frontend libraries for cases when IR / ONNX / PDPD files are read in Runtime. Users should call ov::shutdown once they finish working with the OpenVINO library to free all resources
You can find the OpenVINO™ 2022.1.1 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
Limitations of this release:
- Windows OS, Linux, and macOS
- Intel® Movidius™ Myriad™ X plugin is not included
- The Intel® version of OpenCV is not included; visit this guide on how to use the community version
- For product specifications, visit the release notes for OpenVINO toolkit 2022.1
OpenVINO™ Development Tools (pip install openvino-dev)
- NOTE: The new default and recommended way to get OpenVINO™ Development Tools is to install them via 'pip install openvino-dev'. The list of included components and their changes:
Model Optimizer
- API 2.0 changes:
  - New IR v11 version was introduced. The new IR v11 aligns the inputs and outputs representation with the native framework format to pass framework models to OpenVINO without conversion. The legacy OpenVINO API remains supported to keep backward compatibility with existing OpenVINO use cases. Both APIs accept v10 and v11 IRs for different use cases. The OpenVINO 2.0 API uses legacy conventions to support IR v10 and, in addition, if a developer calls an old API with IR v11 models, it will be handled via the legacy behavior path in runtime.
  - Model layout, data type, and order of Parameter and Result nodes are now aligned with the original framework. As the data type is now aligned, Model Optimizer can produce IRs containing I64 and FP64 in case such types are present in the original model.
  - New CLI parameters were introduced to specify model inputs and outputs source/target layout for the Model Optimizer: --source_layout, --target_layout, and --layout.
  - Read the Transition Guide for migrating to the new API 2.0.
- Common changes:
  - Switched to a new way of converting models to the FP16 data type. If --data_type=FP16 is specified, only constants will be stored in FP16, while inputs and outputs keep the same data type as in the original model.
  - Aligned the Model Optimizer namespace with other OpenVINO tools. Now all Model Optimizer classes, function definitions, and declarations should be imported as openvino.tools.mo.
  - To improve the out-of-the-box (OOB) experience:
    - Automatic detection of --disable_nhwc_to_nchw was implemented.
    - --input_shape is now optional in case the input rank is not dynamic and can be omitted at model conversion, which will produce an IR with undefined dimensions.
  - New Pruning transformation, which is responsible for removing zeroed weights from convolutional and matrix multiplication layers, is now available and can be applied to models optimized by the filter pruning algorithm from the Neural Networks Compression Framework (NNCF). The new CLI parameter --transform=Pruning should be passed to MO to enable the Pruning transformation.
  - Removed previously deprecated options --generate_deprecated_IR_V7 and --legacy_ir_generation.
  - Deprecated MO options related to Caffe: --mean_file and --mean_file_offsets.
- ONNX*:
- Model Optimizer has switched by default to the ONNX Frontend, which significantly reduces the model conversion time
- Extended support of Gather and Slice operators (negative indices, non-constant axes).
- Extended support of MaxPool with 2 outputs: values and indices.
- Extended support for multiple operators when dynamic shapes are in use:
- ConstantOfShape
- Einsum
- Expand
- Loop
- NonMaxSuppression
- NonZero
- Pad
- Range
- ReduceSum
- Reshape
- Resize
- Tile
- Upsample
- Added support for the following operations:
- If
- Quantized operators:
- ConvInteger
- MatMulInteger
- QLinearConv
- QLinearMatMul
- Support of random number generators:
- RandomNormal
- RandomNormalLike
- RandomUniform
- RandomUniformLike
- TensorFlow*:
  - Extended support for Wide & Deep family models that contain the SparseSegmentMean operation and removed the I64 inputs limitation
  - Added support for the following operations:
    - RandomUniform
    - If
  - Added support for the following operations with limitations:
    - EmptyTensorList
    - TensorListPushBack
- MXNet*:
  - Added support for the following operations:
    - batch_dot
    - LayerNorm
    - contrib.arange_like
    - contrib.div_sqrt_dim
Post-Training Optimization Tool
- POT source code has been moved to GitHub as a subfolder inside OpenVINO repo. The license has been changed to Apache-2.0. External contributions to POT are now allowed.
- Added INT8 Quantization support for GNA via POT CLI.
- AccuracyAware (AA) quantization method (INT16+INT8) for GNA.
- Unified scales for Concat operations.
- Improved INT8 quantization scheme for Transformer-based models for newer Intel processors.
- Memory usage optimizations to reduce memory used by POT during the quantization.
- Support for new OpenVINO 2.0 API.
- Added support of IRv11. POT does not support IRv10 since OpenVINO 2022.1 and throws an exception if an older IR version is used.
- Removed support of TunableQuantization algorithm. 2021.4 LTS was the final release where this algorithm was supported in POT.
- Extended models coverage: +70 INT8 models enabled.
- (Experimental) Ranger algorithm for model protection in safety-critical cases.
- Benchmark Tool allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes.
- Accuracy Checker is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and a set of supported datasets, preprocessing, postprocessing, and metrics.
- Annotation Converter is a utility that prepares datasets for evaluation with Accuracy Checker.
- Model Downloader and Other Open Model Zoo tools:
  - Model Downloader - Loads Open Model Zoo pre-trained Intel and public models to the specified folder.
  - Model Converter - MO launcher with a predefined configuration for Open Model Zoo public models.
  - Model Quantizer - POT launcher with a predefined configuration for Open Model Zoo public models.
  - Model Info Dumper - Prints basic model information.
  - Data Downloader - Loads model-related data to a specified folder.
OpenVINO™ (Inference Engine) Runtime
- Common changes
  - New OpenVINO API 2.0 was introduced. The API aligns OpenVINO inputs/outputs with frameworks. Input and output tensors use native framework layouts and element types. The old Inference Engine and nGraph APIs are available but will be deprecated in a future release.
  - The inference_engine, inference_engine_transformations, inference_engine_lp_transformations, and ngraph libraries were merged into the common openvino library. Other libraries were renamed. Use the common ov:: namespace inside all OpenVINO components. Read how to implement an Inference Pipeline using OpenVINO API v2.0 for details.
  - Model Optimizer's API parameters have been reduced to minimize complexity. Performance has been significantly improved for model conversion on ONNX models.
  - It is highly recommended to migrate to API 2.0 because it already offers additional features, and this list will be extended later. The following additional features are supported by API 2.0:
    - Working with dynamic shapes. The feature is quite useful for getting the best performance for Natural Language Processing (NLP) models, super-resolution models, and other models that accept dynamic input shapes. Note: Models compiled with dynamic shapes may show reduced performance and consume more memory than models configured with a static shape on the same input tensor size. Setting upper bounds to reshape the model for dynamic shapes or splitting the input into several parts is recommended.
    - Preprocessing of the model to add preprocessing operations to the inference models and fully occupy the accelerator, freeing CPU resources (see the sketch below).
  - Read the Transition Guide for migrating to the new API 2.0.
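A minimal sketch of the preprocessing feature mentioned above, assuming a single-input image model; the element types, layouts, and resize choice are illustrative assumptions:

```python
from openvino.runtime import Core, Layout, Type
from openvino.preprocess import PrePostProcessor, ResizeAlgorithm

core = Core()
model = core.read_model("model.xml")  # placeholder

ppp = PrePostProcessor(model)
# Describe the tensor the application will actually provide.
ppp.input().tensor() \
    .set_element_type(Type.u8) \
    .set_layout(Layout("NHWC")) \
    .set_spatial_dynamic_shape()
# Steps embedded into the model so they run on the inference device.
ppp.input().preprocess() \
    .convert_element_type(Type.f32) \
    .resize(ResizeAlgorithm.RESIZE_LINEAR)
# Layout the model itself expects.
ppp.input().model().set_layout(Layout("NCHW"))
model = ppp.build()

compiled = core.compile_model(model, "CPU")
```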
- Graph representation
  - Introduced opset8. The latest opset contains the new operations listed on this page. Not all OpenVINO™ toolkit plugins support the operations.
- OpenVINO Python API
  - Note: The new default and recommended way to get OpenVINO™ Runtime for Python developers is to install it via 'pip install openvino'.
  - New OpenVINO Python API based on the OpenVINO 2.0 API was introduced. The old nGraph Python API and Inference Engine Python API are available but will be deprecated in a future release.
  - As part of Python API 2.0, additional features were released (see the sketch below):
    - Changed layout of the Python API package. Common API is now part of openvino.runtime and openvino.preprocess.
    - AsyncInferQueue was added for simple and efficient work with the asynchronous API.
    - Changed the way of creating InferRequests - now it is aligned with the C++ API.
    - Extended support for input parameters in inference methods. All synchronous inference methods return results.
    - A CompiledModel object may be created without an explicit call to Core.
    - Calling CompiledModel (__call__) hides the creation of one InferRequest and provides an easy way to run a single synchronous inference.
    - Extended support for Tensor:
      - Create a Tensor object directly from a NumPy array by sharing memory with the array or copying data into the Tensor.
      - Create an empty Tensor object with a specified data type and shape, and populate it with data.
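A minimal sketch touching several of the Python API 2.0 additions above (Tensor from a NumPy array, the CompiledModel call shortcut, and AsyncInferQueue); paths, shapes, and the number of jobs are placeholders:

```python
import numpy as np
from openvino.runtime import AsyncInferQueue, Core, Tensor

core = Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")  # placeholder

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)
tensor = Tensor(frame, shared_memory=True)  # wraps the array without copying

result = compiled([tensor])                 # one-off synchronous inference

queue = AsyncInferQueue(compiled, jobs=4)   # pool of infer requests
queue.set_callback(lambda request, userdata: print("finished job", userdata))
for i in range(8):
    queue.start_async([frame], userdata=i)
queue.wait_all()
```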
- AUTO device
  - Newly introduced AUTO device that automatically selects the execution device for inference among CPU, GPU, and VPU, if available.
  - Improved first inference latency when GPU/VPU is selected for inference, by running inference on the CPU plugin initially while the network is loading to the selected device, and then hot-swapping to the selected device.
  - Supports performance hints such as latency and throughput without the need to provide device configuration details.
- Intel® CPU
  - Added support for models with dynamic shapes. The feature includes full functional coverage for both external and internal dynamism types, with performance fine-tuning for NLP and instance segmentation scenarios.
  - Implemented model caching support. The feature allows significantly improving first inference latency (see the sketch after this list).
  - Improved inference performance for non-vision use cases with a primary focus on BERT-based models.
  - Improved inference performance for 1D models, which are mostly suitable for audio use cases.
  - Improved inference performance for extremely lightweight models by reducing non-computational overheads.
  - Added functionality that computes an optimal number of streams for the throughput performance hint.
  - Introduced the Snippets component and Snippets CPU backend. Snippets provide automatic JIT code generation capabilities on target HW and use generic compiler optimization techniques to achieve the best performance. This enables reaching optimal inference performance on a broad set of models.
  - Added support for new operations:
    - AdaptiveAvgPool-8
    - AdaptiveMaxPool-8
    - DeformableConvolution-8
    - DetectionOutput-8
    - Gather-8
    - GatherND-8
    - I420toBGR-8
    - I420toRGB-8
    - If-8
    - MatrixNms-8
    - MaxPool-8
    - MulticlassNms-8
    - NV12toBGR-8
    - NV12toRGB-8
    - PriorBox-1
    - PriorBoxClustered-1
    - Slice-8
    - Softmax-8
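A minimal sketch of the model caching support mentioned in the list above; the cache directory name and model path are placeholders:

```python
from openvino.runtime import Core

core = Core()
core.set_property({"CACHE_DIR": "model_cache"})    # enable the on-disk model cache

model = core.read_model("model.xml")               # placeholder
compiled = core.compile_model(model, "CPU")        # first call compiles and caches
compiled_again = core.compile_model(model, "CPU")  # later calls reuse the cached blob
```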
- Intel® Processor Graphics (GPU)
  - A set of first inference latency improvements was implemented:
    - Choose kernel implementations in parallel for graph nodes
    - Optimize includes in OpenCL kernel code
    - Minimization of macros in OpenCL kernel code
    - Affinity control for large and small core scenarios
    - Enabling batched kernel compilation on 11th Gen Core platforms
  - Updated infer request logic to handle USM memory and Buffers on integrated and discrete GPUs
  - USM allocation for host and device buffers was implemented
  - Added support for new operations:
    - Loop-5
    - Gather-8
    - ExperimentalDetectronROIFeatureExtractor
    - GatherElements
    - GatherND-8
    - DeformableConvolution-8
    - RandomUniform-8
    - MaxPool-8
    - Slice-8
    - ROIAlign-3
    - Gelu-7
    - ExperimentalDetectronTopKROIs
  - Enabled dedicated configuration for each GPU device
  - Multi-tile and multi-GPU support was enabled
    - It makes it possible to run inference on one or several tiles in the case of a multi-tile GPU
    - An API to detect and select a particular tile is provided
  - The following set of performance optimizations was done:
    - Optimized DetectionOutput operation
    - NMS operation was created
    - Optimizations for the fsv16 layout
    - Depth-wise convolution optimizations
    - Extended eltwise fusion with other operation types
    - Implementation of normalization for blocked layouts
    - Optimization of reduction over the feature dimension
  - A query to fetch the maximal possible batch size for a given network on a particular device was implemented
- Intel® Movidius™ VPU (Myriad)
  - Added OpenVINO API 2.0 support
  - Added the new configuration API from OpenVINO API 2.0
  - Renamed the plugin to openvino_intel_myriad_plugin and moved it to src/plugins/intel_myriad
  - Added support for the PERFORMANCE_HINT config option
  - Allowed import/export methods to restore information about a Function's inputs and outputs
  - Allowed the QueryNetwork method to reflect dynamic operations
  - Added the MYRIAD_ENABLE_MX_BOOT private plugin option to prevent booting MyriadX while compiling a VPU model
  - Updated the digital signature of the mxlink Windows kernel driver for the M.2 PCIe MyriadX device
- Intel® Vision Accelerator Design with Intel® Movidius™ VPUs (HDDL)
  - Added OpenVINO API 2.0 support
  - Added the new configuration API from OpenVINO API 2.0
  - Added support for the PERFORMANCE_HINT config option
  - Allowed import/export methods to restore information about a Function's inputs and outputs
  - Allowed the QueryNetwork method to reflect dynamic operations
  - Updated the protobuf version from 3.7.1 to 3.19.4 in HDDL Services to include security fixes
  - Fixed sleep function usage from the C++ Windows API
  - Renamed the plugin to openvino_intel_hddl_plugin
  - Supported Linux kernel versions up to 5.11.1
- Intel® Gaussian & Neural Accelerator (Intel® GNA)
  - Improved runtime memory consumption by optimizing the use of internal memory buffers
  - Extended supported parameters for 2D convolutions (available on GNA 3.0 HW)
  - Fixed an accuracy issue with the models produced with the "performance" mode of the Post-Training Optimization Tool (POT)
  - Fixed an issue related to the handling of Transpose, Assign, and large elementwise layers
  - Fixed issues related to MatMul - Add, Concat - MatMul, and some other patterns with the MatMul layer
  - Fixed several issues with the Convolution layer
  - Improved performance and memory consumption for Split layers which are not 64B aligned
  - Improved accuracy for elementwise layers with inputs having very different dynamic ranges
  - Fixed issues with export and import
  - Improved diagnostics for models unsupported by the plugin
- OpenVINO Runtime C/C++/Python API usage samples
  - C++ and Python samples were migrated to the OpenVINO 2022.1 API. The command-line interface was simplified to highlight that OpenVINO samples focus on demonstrating the basics of OpenVINO API usage and should not be considered universal tools.
- ONNX Frontend
  - Direct import of ONNX models into OpenVINO is now available via the ONNX Frontend API (see the sketch below). The original, low-level ONNX Importer API is still available but will be deprecated, so it is no longer recommended; use the OV public API to work with ONNX models.
- Paddle Frontend
  - Supports the conversion and inference of 13 PaddlePaddle models through Model Optimizer and OpenVINO Runtime directly.
  - The enabled models include detection (YOLOv3, PPYolo, SSD-MobileNetV3), classification (ResNet-50, MobileNet-v2, MobileNet-V3), semantic segmentation (BiSeNetV2, DeepLabV3p, FastSCNN, OCRNet, U-Net), OCR (PPOCR), and NLP (BERT).
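A minimal sketch of reading framework models directly through the frontends described above; the file names are placeholders:

```python
from openvino.runtime import Core

core = Core()

onnx_model = core.read_model("model.onnx")       # handled by the ONNX Frontend
paddle_model = core.read_model("model.pdmodel")  # handled by the Paddle Frontend

compiled = core.compile_model(onnx_model, "CPU")
```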
Open Model Zoo
Extended the Open Model Zoo with additional CNN-pretrained models and pre-generated Intermediate Representations (.xml + .bin).
New models:
- facial-landmarks-98-detection-0001
- handwritten-english-recognition-0001
- instance-segmentation-person-0007
- text-recognition-0016-encoder
- text-recognition-0016-decoder
- noise-suppression-denseunet-ll-0001
- person-detection-0301
- person-detection-0302
- person-detection-0303
- smartlab-object-detection-0001
- smartlab-object-detection-0002
- smartlab-object-detection-0003
- smartlab-object-detection-0004
- smartlab-sequence-modelling-0001
- machine-translation-nar-en-ru-0002
- machine-translation-nar-ru-en-0002
End-of-life models:
- machine-translation-nar-en-ru-0001
- machine-translation-nar-ru-en-0001
The list of public models was extended with support for the following models:
Model Name | Task | Framework | Publication |
---|---|---|---|
background-matting-mobilenetv2 | background matting | PyTorch | 2020 |
gpt-2 | text prediction | PyTorch | 2019 |
detr-resnet50 | object detection | PyTorch | 2020 |
drn-d-38 | semantic segmentation | PyTorch | 2017 |
hybrid-cs-model-mri | medical imaging | TensorFlow | 2018 |
t2t-vit-14 | classification | PyTorch | 2021 |
mobilenet-yolo-v4-syg | object detection | Keras/TensorFlow | 2020 |
robust-video-matting-mobilenetv3 | background matting | PyTorch | 2021 |
swin-tiny-patch4-window7-224 | classification | PyTorch | 2021 |
vitstr-small-patch16-224 | text recognition | PyTorch | 2021 |
wav2vec2-base | speech recognition | PyTorch | 2020 |
yolo-v3-onnx | object detection | ONNX | 2020 |
yolo-v3-tiny-onnx | object detection | ONNX | 2020 |
yolof | object detection | PyTorch | 2021 |
yolox-tiny | object detection | PyTorch | 2021 |
The list of deprecated public models (note: deprecated OMZ models are still supported by OpenVINO):
Model Name | Task | Framework |
---|---|---|
ctdet_coco_dlav0_384 | object detection | ONNX |
densenet-121-caffe2 | classification | Caffe2 |
densenet-161 | classification | Caffe |
densenet-161-tf | classification | TensorFlow |
densenet-169 | classification | Caffe |
densenet-169-tf | classification | TensorFlow |
densenet-201 | classification | Caffe |
densenet-201-tf | classification | TensorFlow |
efficientnet-b0_auto_aug | classification | TensorFlow |
efficientnet-b5 | classification | TensorFlow |
efficientnet-b5-pytorch | classification | PyTorch |
efficientnet-b7_auto_aug | classification | TensorFlow |
efficientnet-b7-pytorch | classification | PyTorch |
faster_rcnn_inception_v2_coco | object detection | TensorFlow |
faster_rcnn_resnet101_coco | object detection | TensorFlow |
hbonet-0.5 | classification | PyTorch |
mask_rcnn_inception_v2_coco | instance segmentation | TensorFlow |
mask_rcnn_resnet101_atrous_coco | instance segmentation | TensorFlow |
mobilenet-v1-0.50-160 | classification | TensorFlow |
mobilenet-v1-0.50-224 | classification | TensorFlow |
octave-densenet-121-0.125 | classification | MXNet |
octave-resnet-101-0.125 | classification | MXNet |
octave-resnet-200-0.125 | classification | MXNet |
octave-resnet-50-0.125 | classification | MXNet |
octave-resnext-101-0.25 | classification | MXNet |
octave-resnext-50-0.125 | classification | MXNet |
octave-se-resnet-50-0.125 | classification | MXNet |
resnet-50-caffe2 | classification | Caffe2 |
se-resnet-101 | classification | Caffe |
se-resnet-152 | classification | Caffe |
se-resnext-101 | classification | Caffe |
squeezenet-1.1-caffe2 | classification | Caffe2 |
ssd_mobilenet_v2_coco | object detection | TensorFlow |
ssd_resnet50_v1_fpn_coco | object detection | TensorFlow |
vgg-19-caffe2 | classification | Caffe2 |
Open Model Zoo demos were migrated to the OpenVINO 2022.1 API. Note that, starting from the OpenVINO 2022.1 release, Open Model Zoo demos are not part of the OpenVINO install package and are provided in the GitHub repository. Refer to the OpenVINO documentation for details on getting and building Open Model Zoo demos. The Open Model Zoo Model API was extended with support for remote inference through integration with OpenVINO Model Server.
Added new demo applications:
- background_subtraction_demo/python
- classification_benchmark_demo/cpp
- gpt2_text_prediction_demo/python
- mri_reconstruction_demo/cpp
- mri_reconstruction_demo/python
- speech_recognition_wav2vec_demo/python
OpenVINO™ Ecosystem
- Jupyter Tutorials
  - Added new tutorials:
    - 403: Human Action Recognition
    - 110: MONAI medical imaging training notebook (PyTorch Lightning)
    - 111: Object detection quantization (POT)
    - 112: Post-Training Quantization of PyTorch models with NNCF
    - 113: Image Classification Quantization (POT)
    - 209: Handwritten OCR
    - 211: Speech to text
    - 212: ONNX Style transfer
    - 213: Question-Answering (NLP)
- Neural Networks Compression Framework (pip install nncf)
  - Changes in the NNCF v2.0.0, v2.0.1, v2.0.2, and v2.1.0 releases:
    - Common API for compression methods for the PyTorch and TensorFlow frameworks (see the sketch below).
    - Added TensorFlow 2.4.x support - NNCF can now be used to apply the compression algorithms (INT8 Quantization, Sparsity, and Filter Pruning, plus combinations of them) to models trained in TensorFlow via the Keras Sequential and Functional APIs.
    - AccuracyAware (AA) method for the Filter Pruning and Sparsity optimization algorithms, allowing NNCF users to define the maximum accuracy drop which is considered during optimization.
    - Early Exit method for INT8 Quantization to speed up fine-tuning during quantization by ending the process when the defined maximum accuracy drop is achieved.
    - 7-bit quantization for weights to mitigate the saturation issue affecting accuracy on non-VNNI CPUs.
    - Added quantization presets to be specified in the NNCF config: Performance and Mixed.
    - Added an option to specify an effective learning rate multiplier for the trainable parameters of the compression algorithms via the NNCF config.
    - Unified scales for Concat operations.
    - Support for PyTorch 1.9.1.
    - Bumped the integration patch of HuggingFace transformers to 4.9.1.
    - Knowledge Distillation algorithm as experimental. Available for PyTorch only.
    - LeGR Pruning algorithm as experimental. Available for PyTorch only.
    - Algorithm to search basic building blocks in a model's architecture as experimental. Available for PyTorch only.
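A minimal sketch of the common NNCF API for the PyTorch path, assuming NNCF 2.x with torch and torchvision installed; the model, dataset, and export file name are placeholders, and a real workflow would fine-tune the compressed model before export:

```python
import torch
import torchvision
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

model = torchvision.models.resnet18(pretrained=False)  # placeholder model

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},       # INT8 quantization
})

# Placeholder data loader used only to initialize quantization ranges.
dataset = torch.utils.data.TensorDataset(
    torch.randn(8, 3, 224, 224), torch.zeros(8, dtype=torch.long)
)
loader = torch.utils.data.DataLoader(dataset, batch_size=2)
nncf_config = register_default_init_args(nncf_config, loader)

compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)
# ... fine-tune compressed_model here, then export for OpenVINO:
compression_ctrl.export_model("resnet18_int8.onnx")
```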
- OpenVINO™ Deep Learning Workbench (pip install openvino-workbench)
  - Initial support for Natural Language Processing (NLP) models. Models supporting the Text Classification use case can now be imported, converted, and benchmarked.
  - Support for OpenVINO API 2.0 enabled in tools and educational materials.
  - Support for the Cityscapes dataset enabled.
- OpenVINO™ Model Server
  - Support for dynamic shape in models: by leveraging the new OpenVINO API 2.0, OpenVINO Model Server now supports configuring model inputs to accept a range of input shape dimensions and a variable batch size. This enables sending predict requests with various image resolutions and batches.
  - Model cache for faster loading and initialization: the cached files make Model Server initialization faster when performing subsequent model loading. Cache files can be reused within the same Model Server version, target device, hardware, model, model version, and model parameters.
  - Support for double-precision models: OpenVINO Model Server (OVMS) now supports two additional precisions, FP64 and I64, also known as "double precision", by leveraging the new OpenVINO API 2.0.
  - Extended API for DAG custom nodes to include initialization and cleanup steps: added two API calls to enable additional use cases where you can initialize resources in the Directed Acyclic Graph (DAG) loading step instead of during each predict request. This makes it possible to avoid dynamic allocation during custom node execution.
  - Easier deployment of models with a preserved layout from training frameworks: due to changes in the new OpenVINO API 2.0, the model layout from training frameworks like TensorFlow is preserved in OVMS. OpenVINO Model Optimizer can be instructed to save information about the model layout.
  - Arbitrary layout transpositions: added support for handling any layout transformation when loading models. This results in adding a pre-processing step before inference. This is performed using --layout NCHW:NHWC to inform OVMS to accept NHWC layout and add a preprocessing step that transposes the data to the layout expected by the model.
  - Support for models with a batch size on an arbitrary dimension: batch size in the layout can now be on any position in the model. Previously, OVMS accepted batch size only on the first dimension when changing the model batch size.
  - New documentation on docs.openvino.ai: documentation for OpenVINO Model Server is now available at https://docs.openvino.ai/latest/ovms_what_is_openvino_model_server.html.
  - Breaking changes
    - Order of reshape and layout change operations during model initialization. If you wanted to change a model with the original shape (1,3,200,200) and layout NCHW to handle a different layout and resolution, you had to set --shape "1,3,224,224" --layout NHWC. Now both parameters should describe target values, so with 2022.1 it should look like: --shape "1,224,224,3" --layout NHWC:NCHW.
    - Layout parameter changes. Previously, when configuring a model with the --layout parameter, the administrator was not required to know the underlying model layout, because OV used NCHW by default. Now you inform OVMS that the model is using layout NCHW - both that the model uses NCHW and that it accepts NCHW input.
    - Custom nodes code must include the implementation of the new API methods. It might be a dummy implementation, if not needed. Additionally, all previous API functions must include an additional void* parameter.
    - In the DAG pipelines configuration, demultiplexing with a dynamic number of parallel operations is configurable with the parameter "dynamic_count" set to -1, beside the 0 used so far. This is more consistent with the common conventions used, e.g., in model input shapes. Using 0 is now deprecated and support for it will be removed in 2022.2.
  - Other changes
    - Updated demo with a question answering use case - a BERT model demo with a dynamic shape and variable length of the request content.
    - Rearranged structure of the demos and client code examples.
    - Python client code examples with both the tensorflow-serving-api and the ovmsclient library.
    - Demos updated to use models with preserved layout and color format.
    - Custom nodes updated to use the new API. The initialization step in the model zoo detection custom node uses memory buffer initialization to speed up execution.
- OpenVINO™ Security Add-on
  - Support for running OVSA inside an SGX enclave using Gramine Shielded Containers (GSC) on 3rd Generation Intel® Xeon® Scalable processors (Ice Lake), available since the 2021.4.2 release and now tested with Gramine version 1.1 as part of the 2022.1 release.
  - A new script to check and install prerequisites needed to run OVSA on KVM-based systems was added.
- OpenCV* library
  - OpenCV is no longer included in the OpenVINO toolkit by default. It should be installed/downloaded separately using the download script located in "extras/scripts", manually from storage.openvinotoolkit.org, or as an additional package (for the APT distribution channel).
  - Version updated to 4.5.5 (SOVERSION changed to 405 according to the new scheme).
  - Uses the oneVPL videoio plugin instead of MediaSDK.
- DL Streamer
  - DL Streamer is no longer included in the OpenVINO toolkit by default. It should be installed/downloaded separately; follow the Installation Guide.
- Intel® Media SDK and Intel® oneAPI Video Processing Library (oneVPL)
  - Starting with the Intel® Distribution of OpenVINO™ toolkit 2021.3 release, Intel® Media SDK is being deprecated for removal in Q1'22.
  - Users are recommended to migrate to the Intel® oneAPI Video Processing Library (oneVPL) as the unified programming interface for video decoding, encoding, and processing to build portable media pipelines on CPUs, GPUs, and accelerators. Note the differences and changes in APIs and functionality.
  - See the Installation Guide and oneVPL Programming Guide for guidelines, the migration guide from Intel® Media SDK to oneVPL, and the API Changes Documentation for reference.
New Distributions
Since the OpenVINO™ 2022.1 release, the following development tools are no longer part of the installer: Model Optimizer, Post-Training Optimization Tool, Model Downloader and other Open Model Zoo tools, Accuracy Checker, and Annotation Converter. The new default and recommended way to get these tools is to install them via 'pip install openvino-dev'.
- PyPI
- Added support for Python 3.9
- OpenVINO Runtime packages are marked by manylinux_2_27 tag to be compliant with any Linux platforms where GLIBC >2.27. See more here: https://www.python.org/dev/peps/pep-0600/
- Conda
- Added support for Python 3.8 and Python 3.9
- Containers
- The native support of iGPU inference inside Linux-based OpenVINO Docker container running under WSL2 on Windows host is available since the 2022.1 release.
- Open Model Zoo demos and OpenCV are no longer distributed inside Docker images.
- Docker images with included DL Streamer (data_dev and data_runtime) are no longer available as part of OpenVINO since this release and will be distributed separately.
- CentOS 7 based Docker images and Dockerfiles are no longer supported since this release.
Known Issues
| # | Jira ID | Description | Component | Workaround |
|---|---|---|---|---|
| 1 | 24101 | Performance and memory consumption may degrade if layers are not 64-byte aligned. | GNA plugin | Avoid layers that are not 64-byte aligned to make the model GNA-friendly. |
| 2 | 33132 | [IE CLDNN] Accuracy and last-tensor check regressions for FP32 models on ICLU GPU | clDNN Plugin | |
| 3 | 42203 | Customers from China may experience download issues | OMZ | Use the branch https://github.com/openvinotoolkit/open_model_zoo/tree/release-01org with links to the old storage download.01.org |
| 4 | 24757 | The heterogeneous mode does not work for GNA | GNA Plugin | Split the model to run unsupported layers on CPU |
| 5 | 80009 | Transition to IRv11 adds extra operations to model execution: transpose and data conversion on input, and transpose on output | CPU plugin | Previously, these additional operations were executed outside performance counters. As a result, overall performance has not changed. |
| 6 | 58806 | For models after POT, memory consumption and performance may be worse than for the original models (i.e., using an internal quantization algorithm) | GNA Plugin | Do not use POT if the accuracy is satisfactory |
| 7 | 44788 | The heterogeneous mode does not work for GNA | GNA Plugin | Split the model to run unsupported layers on CPU |
| 8 | 78822 | The GNA plugin overhead may be unexpectedly large | GNA Plugin | N/A |
| 9 | 80699 | LSTM sequence models are implemented using tensor iterator. This solution improves first-inference performance and required memory; performance degradations are expected | GPU Plugin | The LSTM sequence is processed via tensor iterator, which improves first-inference latency and decreases memory usage. Some performance degradations are expected |
| 10 | 73896 | Bug in synchronization of TopK with 2 outputs | GPU Plugin | Sporadic accuracy results are possible on models with TopK with two outputs |
| 11 | 80826 | [HDDL] deeplabv3 fails on HDDL-R with RuntimeError: [GENERAL_ERROR] AssertionFailed: hddlBlob->getSize() >= offset + size | HDDL Plugin | A fix will be available in the next release. You can try the 2021.4.2 LTS release to run this model |
| 12 | 68801 | HDDL Daemon can hang after several resets of MyriadX cores. | HDDL Plugin | Try pressing Enter or Esc a few times in the terminal. It looks like a problem in cmd and is probably fixed by disabling QuickEdit Mode in the command line options. |
| 13 | 80827 | A performance drop can be caused by additional preprocessing | Myriad Plugin, HDDL Plugin | Try passing FP16 input to models. |
| 14 | 71762 | Some operations cannot be assigned to any device due to an issue in the constant folding process | Hetero Plugin | Use the affinity property to manually split the graph across devices |
| 15 | 69529 | Myriad Plugin can report an error if the plugin cache is used. | Myriad Plugin | Reset the plugin cache after using the plugin instance |
| 16 | 78769 | Myriad Plugin can report an error if the plugin cache is used. | Myriad Plugin | Reset the plugin cache after using the plugin instance |
| 17 | 80627 | Errors caused by missing output node names in the graph during quantization. May appear for some models only for IRs converted from ONNX models using the new frontend (the default since the 2022.1 release): `Output name: <name> not found`, `Output node with <name> is not found in graph` | Model Optimizer | Use the legacy MO frontend to convert the model to IR by providing the --use_legacy_frontend option |
| 18 | 79081 | Errors caused by missing output node names in the graph during quantization. May appear for some models only for IRs converted from ONNX models using the new frontend (the default since the 2022.1 release): `Output name: <name> not found`, `Output node with <name> is not found in graph` | Model Optimizer | Use the legacy MO frontend to convert the model to IR by providing the --use_legacy_frontend option |
| 19 | 79833 | The new ONNX FE uses output names as friendly names when importing ONNX models to OV. In a very limited number of cases, users who depend on the friendly names might be affected. | ONNX FE | N/A |
| 20 | 54460 | Online package installation issue in the PRC region | Install | In case of connectivity issues during installation, PRC customers are advised to use local repository mirrors, such as https://developer.aliyun.com/mirror/. |
| 21 | 78604 | The benchmark_app command line might not accept all input/output tensor names when multiple names are present | IE Samples | If some tensor name from the list cannot be used, use another name from the list, or the corresponding node name |
| 22 | 82015 | Models compiled with dynamic shapes may show worse performance | CPU Plugin | Reshape the model with upper bounds set for the dynamic dimensions, or split the input into several parts. Another approach is to add more RAM to the hardware. Finally, you can use static models instead of dynamic ones. Read more about dynamism in the documentation; a minimal sketch of the upper-bound workaround follows this table. |
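As referenced in the workaround for issue 82015 above, bounding a dynamic dimension can reduce memory consumption with the CPU plugin. Below is a minimal sketch using the OpenVINO Runtime Python API; the model path and the 512-token upper bound are assumptions for illustration:

```python
from openvino.runtime import Core, PartialShape, Dimension

core = Core()
model = core.read_model("model.xml")  # hypothetical IR with a dynamic sequence length

# Instead of a fully dynamic dimension, set lower and upper bounds (here 1..512)
# so the plugin can plan memory for the worst case up front.
model.reshape({model.input(0).any_name: PartialShape([1, Dimension(1, 512)])})

compiled_model = core.compile_model(model, "CPU")
```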
Included in This Release
The Intel® Distribution of OpenVINO™ toolkit is available for download for three types of operating systems: Windows*, Linux*, and macOS*.
| Component | License | Location | Windows | Linux | macOS |
|---|---|---|---|---|---|
| OpenVINO™ (Inference Engine) C++ Runtime: unified API to integrate inference with application logic; OpenVINO™ (OpenVINO Runtime) headers | EULA, Apache 2.0 | | YES | YES | YES |
| OpenVINO™ (Inference Engine) Python API | EULA | <install_root>/python/* | YES | YES | YES |
| OpenVINO™ (Inference Engine) Samples: samples that illustrate OpenVINO™ C++/Python API usage | Apache 2.0 | <install_root>/samples/* | YES | YES | YES |
| OpenCV* library: OpenCV community version compiled for Intel® hardware | Apache 2.0 | Not part of the package; the library will be installed to <install_root>/extras/opencv/* | NO | NO | NO |
| Compile Tool: a C++ application that enables you to compile a network | EULA | <install_root>/tools/compile_tool/* | YES | YES | YES |
| Deployment Manager: a Python* command-line tool | Apache 2.0 | <install_root>/tools/deployment_manager/* | YES | YES | YES |
Where to Download This Release
The OpenVINO download configurator provides the easiest access to the right download link that matches your desired tools/runtime, OS, version, and distribution options.
This provides access to the following options and more:
- pypi.org
- GitHub
- DockerHub*
- Red Hat* Quay.io (starting from the 2021.2 release)
- DockerHub CI: the DockerHub CI framework can generate a Dockerfile, then build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can add your own layers to customize the OpenVINO™ image for your needs, and you can reuse the available Dockerfiles.
- Anaconda* Cloud
In addition, the Intel® Distribution of OpenVINO™ toolkit for Linux* is available to install through the APT and YUM repositories.
Helpful Links
All Documentation, Guides, and Resources
Legal Information
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at http://www.intel.com/ or from the OEM or retailer.
No computer system can be absolutely secure.
Intel, Atom, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
*Other names and brands may be claimed as the property of others.
Copyright © 2022, Intel Corporation. All rights reserved.
For more complete information about compiler optimizations, see our Optimization Notice.