Introduction
The Intel® Distribution of OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on the latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs) and recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance AI and deep learning inference deployed from edge to cloud.
The Intel Distribution of OpenVINO toolkit:
- Enables deep learning inference from the edge to cloud.
- Supports heterogeneous execution across Intel accelerators, using a common API for the Intel® CPU, Intel® Integrated graphics, Intel® Gaussian & Neural Accelerator (Intel® GNA), Intel® Movidius™ Neural Compute Stick (NCS), Intel® Neural Compute Stick 2 (Intel® NCS2), Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
- Speeds time-to-market through an easy-to-use library of CV functions and pre-optimized kernels.
- Includes optimized calls for CV standards, including OpenCV and OpenCL™.
New and Changed in Release 2020.3.2 LTS
Executive summary
- This 2020.3.2 LTS release provides bug fixes for the previous 2020.3.1 Long-Term Support (LTS) release, a release type that provides longer-term maintenance and support with a focus on stability and compatibility. To read more about long-term support and maintenance, go to the Long Term Support Policy.
- Based on 2020.3.1 LTS, the 2020.3.2 LTS release includes security and functionality bug fixes, and minor capability changes.
- Learn more about what components are included in the LTS release in the Included in This Release section. Note specific fixes to the known issues in the Inference Engine MYRIAD, HDDL, and FPGA plugins, and in the Deep Learning Workbench.
- Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS releases will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, talk to your sales representative or contact us to get the latest FPGA updates.
- Download the 2020.3.2 LTS release of the Intel® Distribution of OpenVINO™ toolkit to upgrade to the latest LTS release.
- Because of external dependencies on CMake, users should install CMake 3.10 or higher for Linux and CMake 3.14 or higher for Windows. Go to System Requirements to learn more.
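The CMake requirement above is per-OS, so a version that is sufficient on Linux may not be on Windows. A standalone sketch (pure Python; the helper name and version strings are illustrative, not part of the toolkit) that checks the output of `cmake --version` against the stated minimums:

```python
import re

# Minimum CMake versions required by this release, per the note above.
MIN_CMAKE = {"Linux": (3, 10), "Windows": (3, 14)}

def cmake_meets_requirement(version_output, os_name):
    """Parse `cmake --version` output and compare with the per-OS minimum."""
    match = re.search(r"cmake version (\d+)\.(\d+)", version_output)
    if not match:
        raise ValueError("unrecognized `cmake --version` output")
    version = (int(match.group(1)), int(match.group(2)))
    return version >= MIN_CMAKE[os_name]

# Example: CMake 3.12 satisfies the Linux minimum but not the Windows one.
print(cmake_meets_requirement("cmake version 3.12.4", "Linux"))    # True
print(cmake_meets_requirement("cmake version 3.12.4", "Windows"))  # False
```

In practice the `version_output` string would come from running `cmake --version` on the target machine.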
Model Optimizer
- There are no changes compared with the 2020.3.1 LTS release
Inference Engine
Inference Engine Developer Guide
- Common changes:
- Resolved a restriction of the MSVC 2019 compiler typedef declaration in the Windows build
- MYRIAD plugin:
- Fixed some issues related to Proposal operation calculation on the device side.
- Added config option to enable/disable async DMA.
- HDDL plugin:
- Same features and fixes as in the MYRIAD plugin.
- Fixed a problem where the VPU got stuck and reset when running multiple networks on one VPU
- FPGA plugin:
- Fixed error for mobilenet models due to missing prototxt file
- FPGA plugin now requires GCC 5.4.0
- Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS releases will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, talk to your sales representative or contact us to get the latest FPGA updates.
Deep Learning Workbench
- Fixed critical security vulnerability caused by outdated PyYAML version in Workbench
OpenCV*
- There are no changes compared with the 2020.3.1 LTS release
Examples and Tutorials
- There are no changes compared with the 2020.3.1 LTS release
Open Model Zoo
- There are no changes compared with the 2020.3.1 LTS release
Deep Learning Streamer
- There are no changes compared with the 2020.3.1 LTS release
New Distributions
Containers:
- New CentOS 7 runtime Docker image is available on DockerHub container registry
- Includes Inference Engine and OpenCV
- Supports CPU, GPU, Myriad* (NCS2), and HDDL devices
New and Changed in Release 2020.3.1 LTS
Executive Summary
- This release provides bug fixes for the previous 2020.3 Long-Term Support (LTS) release, a release type that provides longer-term maintenance and support with a focus on stability and compatibility. Read more about the support details: Long Term Support Release
- Based on v.2020.3 LTS, the v.2020.3.1 LTS release includes security and functionality bug fixes, and minor capability changes.
- Includes improved support for 11th Generation Intel® Core™ Processors (formerly codenamed Tiger Lake), which include Intel® Iris® Xe Graphics and Intel® DL Boost instructions.
- Learn more about what components are included in the LTS release in the Included in This Release section.
- Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS releases will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, talk to your sales representative or contact us to get the latest FPGA updates.
Model Optimizer
- 38288 Fixed a problem where the Model Optimizer failed to convert TensorFlow* models in version 2020.2 that converted successfully in the older version, 2019.R3.379
Inference Engine
Inference Engine Developer Guide
- Common changes:
- Restored compilation with older versions of the Intel® Threading Building Blocks (TBB) without NUMA support
- Bug fixes:
- 38306 Fixed --mean_file option to properly function with Caffe* models
- 38297 Fixed bug to load and infer Temporal Shift Module (TSM) model where the Intermediate Representation (IR) file had failed to load
- CPU Plugin:
- 37282 Fixed inference accuracy drop from approximately 97% to 65%
- 39441 Fixed memory leakage for MKLDNNQuantizeNode::appendPostOps
- 39618 Fixed a bug where ie.load_network() resulted in a discrepancy on the CPU
- GPU Plugin:
- Common changes:
- Support for 11th Generation Intel® Core™ Processor Family (formerly codenamed Tiger Lake)
- Moved integrated GPU to the first position in GPU device map enabling the DP4A instruction set query to work properly with the new driver
- Bug fixes:
- 36654 Fixed bug where performance regression is seen when using Intel® DL Boost due to a change in the driver device string
- 38296 Fixed Output inconsistencies between CPU and GPU inference
- MYRIAD Plugin:
- 39234 Fixed bug in cases where CPU utilization was high when using the MYRIAD plugin
- HDDL Plugin:
- 38881 Corrected SMBus driver configuration file to recognize HDDL devices
- 38295 Fixed a bug where the HDDL xLink prevented connecting more than 32 Intel® Movidius™ Myriad™ X VPU devices
- FPGA Plugin:
- Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, talk to your sales representative or contact us to get the latest FPGA updates.
- Users of the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA should upgrade their firmware and bitstreams.
- Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
- The Intel® Acceleration Stack for FPGAs is updated from version 1.2 to 1.2.1 to be in compliance with Intel IPAS security standards. Refer to release notes for Intel® Acceleration Stack for FPGAs.
- A firmware update is required to upgrade to Intel® Acceleration Stack for FPGAs version 1.2.1. To identify the current firmware version, consult these instructions. Update instructions are available in the Intel Acceleration Stack Quick Start Guide for Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
- Bitstreams compatible with the updated Intel® Acceleration Stack for FPGAs version 1.2.1 can be identified by the naming convention 2020-3-1_*.
- Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA Speed Grade 2
- No changes
- nGraph:
- 38287 Fixed IE in version 2020.2 producing incorrect results when compared to version 2019.3
- Deployment Manager:
- 38792 Fixed bug where deployment_manager.py on Windows* OS fails to produce the dynamic link library file (.dll) for device type MYRIAD (i.e., for Intel® Movidius™ Myriad™ X VPU)
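The FPGA notes above state that bitstreams compatible with Intel® Acceleration Stack for FPGAs 1.2.1 follow the 2020-3-1_* naming convention. A small standalone sketch (the filenames and helper name here are illustrative, not actual bitstream names) of filtering a directory listing by that pattern:

```python
import fnmatch

# Bitstreams compatible with Intel Acceleration Stack 1.2.1 are named 2020-3-1_*.
COMPATIBLE_PATTERN = "2020-3-1_*"

def compatible_bitstreams(filenames):
    """Return only the files matching the 2020.3.1 bitstream naming convention."""
    return [name for name in filenames if fnmatch.fnmatch(name, COMPATIBLE_PATTERN)]

# Hypothetical directory listing; only the first entry matches the convention.
listing = ["2020-3-1_RC_FP16_ResNet.aocx", "2020-2_RC_FP11_Generic.aocx", "readme.txt"]
print(compatible_bitstreams(listing))  # ['2020-3-1_RC_FP16_ResNet.aocx']
```

This is only a convenience check for sorting files; the authoritative compatibility information is the Acceleration Stack release notes referenced above.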
OpenCV*
- There are no changes compared with the 2020.3 LTS release
Examples and Tutorials
- There are no changes compared with the 2020.3 LTS release
Open Model Zoo
- There are no changes compared with the 2020.3 LTS release
Deep Learning Streamer
- There are no changes compared with the 2020.3 LTS release
New and Changed in Release 2020.3 LTS
Executive Summary
- Introducing Long-Term Support (LTS), a new release type that provides longer-term maintenance and support with a focus on stability and compatibility. Read more: Long Term Support Release
- These release notes were introduced to support the initial version of LTS release. All updates for this release will be published on this page.
- Intel Distribution of OpenVINO toolkit v.2020.3 LTS is based on Intel Distribution of OpenVINO toolkit v.2020.2 and includes security, functionality bug fixes, and minor capability changes.
- Learn more about what components are included in the LTS release in the Included into this release section.
- List of Deprecated API, API Changes
- IRv7 is deprecated, and support for this version may be removed as early as the v.2021.1 release this year.
Model Optimizer
- Included an upgrade notice to enable users to easily identify if a newer version of Intel Distribution of OpenVINO toolkit is available for download.
Inference Engine
Inference Engine Developer Guide
Common changes
- Switched to the latest and official version of Threading Building Blocks 2020 Update 2. Added scalable equivalent of memory allocator that makes it possible to automatically replace all calls to standard functions for dynamic memory allocation. These changes can improve application performance and decrease application memory footprint.
CPU Plugin
- Bug fixes:
- 29082 Fixed a possible IE pipeline crash when running in parallel threads
- 28224 Fixed TF deeplab_v3 performance deviation on CPU in INT8 mode
- 26373 Fixed TF GNMT performance drop compared to v.2019 R3
- 25895 Fixed performance degradation for model 'googlenet-v4' IE INT8 when comparing against IE INT8 with streams
- 29040 Fixed CAFFE yolo_v1_tiny performance deviation CPU INT8
GPU Plugin
- Bug fixes:
- 25657 Fixed possible memory leaks in the GPU plugin in case of multiple network loading and unloading cycles
- 25087 Fixed performance degradations in the GPU plugin on MobileNet* models and similar models.
- 29414 Fixed asl-recognition-0004 accuracy degradation
MYRIAD Plugin
- Aligned VPU firmware with Intel® Movidius™ Myriad™ X Development Kit (MDK) R11 release.
- To rebuild the firmware with MDK R11, users need to change the MDK FathomKey project makefile and add the "-falign-functions=64" option to the MVCCOPT variable. Other than this build option change, the 2020.3 release firmware binary is identical to the MDK R11 source. Without this build option, the rebuilt firmware will be identical to the OpenVINO 2020.2 release binary.
- The Intel Movidius Neural Compute Stick (NCS) is supported in this LTS release. In accordance with the LTS policy, NCS support will be stopped in the next release (2020.4), but will continue to be available for 2020.3 LTS release updates.
- The Intel Movidius Neural Compute Stick (NCS) has been replaced with the Intel Neural Compute Stick 2 (Intel NCS2). Developers still working with the current Intel NCS can review guidance for transitioning to other platforms from the Intel Movidius Neural Compute Stick. Technical support for the Intel Movidius Neural Compute Stick will continue to be available until April 30, 2021.
HDDL Plugin
- Included security bug fixes. Users should update to this version.
FPGA Plugin
- Introduced support for Windows* OS platform. Intel Vision Accelerator Design with an Intel Arria 10 FPGA (Mustang-F100-A10) Speed Grade 2 and Intel® Programmable Acceleration Card (Intel® PAC) with Intel® Arria® 10 GX FPGA (Intel® PAC with Intel® Arria® 10 GX FPGA) are now supported.
- The environment variable, CL_CONTEXT_COMPILER_MODE_INTELFPGA, is no longer required. It should not be set by the user.
OpenCV*
- OpenCV 4.3.0 including bug fixes.
Examples and Tutorials
- Enabled users to run the end-to-end speech demo (which was previously excluded from the v.2020.2 release).
Open Model Zoo
- Introduced a streamlined process for quantizing public models to a lower precision for improved performance. Quantization of several public classification models trained on the ImageNet dataset is enabled through the OMZ quantizer.py script. The script calls the OpenVINO Post-training Optimization Toolkit with the necessary parameters to produce a quantized IR. The ImageNet dataset is a prerequisite. See the OMZ documentation for details on how to use the quantizer script.
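To illustrate what quantizing to a lower precision means, here is a conceptual sketch of symmetric per-tensor INT8 quantization in pure Python. This is only an illustration of the idea, not the Post-training Optimization Toolkit's implementation, and the function names are invented for this example:

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0  # assumes not all weights are zero
    quantized = [max(-127, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from INT8 codes."""
    return [q * scale for q in quantized]

weights = [0.5, -1.0, 0.25, 0.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# The dequantized values stay close to the originals while each weight
# now fits in a single signed byte instead of four FP32 bytes.
```

The real toolkit additionally calibrates scales per layer on a dataset (hence the ImageNet prerequisite) to keep accuracy loss small.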
Deep Learning Streamer
- Included security bug fixes. Users should update to this version.
Known Issues
JIRA ID | Description | Component | Workaround |
---|---|---|---|
25358 | Some performance degradations are possible in the GPU plugin on GT3e/GT4e/ICL NUC platforms | IE GPU Plugin | N/A |
24709 | Retrained TensorFlow* Object Detection API RFCN model has significant accuracy degradation. Only the pretrained model produces correct inference results. | All | Use Faster-RCNN models instead of RFCN model if retraining of a model is required. |
23705 | Inference may hang when running the heterogeneous plugin on GNA with fallback on CPU. | IE GNA Plugin | Do not use async API when using CPU/GNA heterogeneous mode. |
26129 | TF YOLO-v3 model fails on AI Edge Computing Board with Intel Movidius Myriad X C0 VPU, MYDX x 1 | IE MyriadX plugin | Use other versions of the YOLO network or USB connected device (Neural Compute Stick 2) |
22108 | Stopping the app during firmware boot might cause device hang for Intel Neural Compute Stick 2 (Intel NCS2) | IE MyriadX plugin | Do not press Ctrl+C while the device is being booted. |
28747 | CPU plugin does not work on Windows systems with CPUs that lack the AVX2 instruction set (Intel Atom® processors) | IE CPU Plugin | Manually rebuild the CPU plugin from sources available in the public repository with CMake feature flags ENABLE_AVX2=OFF and ENABLE_AVX512=OFF. |
N/A | The nGraph Python* API has been removed from this release because it does not meet public release quality standards due to its incompleteness. It will be added back once it meets public release quality standards. This removal does not impact the nGraph C++ API. | IE Python API | Use the C++ API. |
28970 | TF faster-rcnn and faster-resnet101 topologies accuracy deviation on MYRIAD | IE MyriadX plugin, IE HDDL plugin | For accurate inference on these topologies either use the other HW (i.e. CPU/GPU), or use previous release of Intel Distribution of OpenVINO toolkit on Intel Neural Compute Stick 2 (Intel NCS2). |
25723 | TF rfcn_resnet101_coco low accuracy on dataset | IE MyriadX plugin, IE HDDL plugin | For accurate inference on this topology either use the other HW (i.e. CPU/GPU), or use previous release of Intel Distribution of OpenVINO toolkit on Intel Neural Compute Stick 2 (Intel NCS2). |
32036 | TBlob shared-pointer issues, such as double-free, with huge models | All | N/A |
30569 | Multiply layer with a non-zero offset is not handled properly. | IE GNA Plugin | N/A |
31719 | Support for multiple outputs results in the creation of many activation layers. | IE GNA plugin | N/A |
31720 | Cascade concat with non-functional layers between concats is not supported. | IE GNA plugin | N/A |
Included in This Release
The Intel Distribution of OpenVINO toolkit is available in these versions:
- Intel Distribution of OpenVINO toolkit for Windows
- Intel Distribution of OpenVINO toolkit for Windows with FPGA Support
- Intel Distribution of OpenVINO toolkit for Linux*
- Intel Distribution of OpenVINO toolkit for Linux* with FPGA Support
- Intel Distribution of OpenVINO toolkit for macOS*
Component | License | Location | Windows | Windows for FPGA | Linux | Linux for FPGA | macOS | Components coverage by LTS policy |
---|---|---|---|---|---|---|---|---|
Deep Learning Model Optimizer Model optimization tool for your trained models. | Apache 2.0 | <install_root>/deployment_tools/model_optimizer/* | YES | YES | YES | YES | YES | YES |
Deep Learning Inference Engine Unified API to integrate the inference with application logic; Inference Engine Headers | EULA; Apache 2.0 | <install_root>/deployment_tools/inference_engine/*; <install_root>/deployment_tools/inference_engine/include/* | YES | YES | YES | YES | YES | YES |
OpenCV library OpenCV Community version compiled for Intel hardware | BSD | <install_root>/opencv/ | YES | YES | YES | YES | YES | NO |
Intel® Media SDK libraries (open source version) Eases the integration between the Intel Distribution of OpenVINO toolkit and the Intel Media SDK. | MIT | <install_root>/../mediasdk/* | NO | NO | YES | YES | NO | NO |
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver Improves usability | MIT | <install_root>/install_dependencies/: install_NEO_OCL_driver.sh helps to install the OpenCL Runtime (default location /usr/local/lib/); intel-opencl_*.deb is the driver for Ubuntu*; intel-opencl_*.rpm is the driver for CentOS*; intel-* are the driver's dependencies | NO | NO | YES | YES | NO | YES |
Intel® FPGA Deep Learning Acceleration Suite (Intel® FPGA DL Acceleration Suite), including pre-compiled bitstreams Implementations of the most common CNN topologies to enable image classification and ease the adoption of FPGAs for AI developers. Includes pre-compiled bitstream samples for the Intel® Programmable Acceleration Card with Intel Arria 10 GX FPGA and Intel Vision Accelerator Design with an Intel Arria 10 FPGA (Mustang-F100-A10) Speed Grade 1 and Speed Grade 2 | Intel OBL FPGA SDK | <install_root>/bitstreams/a10_dcp_bitstreams/*; <install_root>/bitstreams/a10_vision_design_sg2_bitstreams/* | NO | YES | NO | YES | NO | YES |
Intel® FPGA SDK for OpenCL™ software technology The Intel FPGA RTE for OpenCL provides utilities, host runtime libraries, drivers, and RTE-specific libraries and files | Intel OBL FPGA SDK | /opt/altera/aocl-pro-rte/*; <user>/intelFPGA_pro/<version>/* | NO | YES | NO | YES | NO | YES |
Intel Distribution of OpenVINO toolkit documentation Developer guides and other documentation. | N/A | Available from the Intel Distribution of OpenVINO™ toolkit product site; not part of the installer packages. | NO | NO | NO | NO | NO | NO |
Open Model Zoo Documentation for models; Models in binary form can be downloaded using Model Downloader | Apache 2.0 | <install_root>/deployment_tools/open_model_zoo/* | YES | YES | YES | YES | YES | NO |
Inference Engine Samples Samples that illustrate Inference Engine API usage and demos that demonstrate how you can use features of Intel Distribution of OpenVINO toolkit in your application | Apache 2.0 | <install_root>/deployment_tools/inference_engine/samples/* | YES | YES | YES | YES | YES | NO |
Deep Learning Workbench Tool that can help developers to run Deep Learning models through the OpenVINO toolkit Model Optimizer, convert models into INT8 precision, finetune them, run inference, and measure accuracy. | EULA | <install_root>/deployment_tools/tools/workbench/* | YES | YES | YES | NO | YES | YES |
ngraph - open source C++ library, compiler and runtime for Deep Learning nGraph | Apache 2.0 | <install_root>/deployment_tools/ngraph/* | YES | YES | YES | YES | NO | YES |
Post-Training Optimization Tool designed to convert a model into a more hardware-friendly representation by applying specific methods that do not require re-training, for example, post-training quantization. | EULA | <install_root>/deployment_tools/tools/post_training_optimization_toolkit/* | YES | YES | YES | YES | YES | YES |
Speech Libraries and End-to-End Speech Demos | GNA Software License Agreement | <install_root>/data_processing/audio/speech_recognition/* | YES | YES | YES | YES | NO | NO |
DL Streamer | EULA | <install_root>/data_processing/dl_streamer/* | NO | NO | YES | YES | NO | NO |
Where to Download This Release
- Intel® Software Development Products Registration Center
- DockerHub*
- The DockerHub CI framework can generate a Dockerfile, build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can add your own layer and customize the OpenVINO™ image for your needs, and you can reuse the available Dockerfiles.
- Anaconda* Cloud (conda install -c intel openvino-ie4py)
- In addition, Intel® Distribution of OpenVINO™ toolkit for Linux* is available to install through the following repositories:
- APT repository
- YUM repository
System Requirements
Intel CPU processors with corresponding operating systems
Intel Atom processor with Intel SSE4.1 support
Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
6th - 11th generation Intel® Core™ processors
Intel® Xeon® processor E3, E5, and E7 family (formerly Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
Operating Systems:
- Ubuntu 16.04 long-term support (LTS), 64-bit
- Ubuntu 18.04 long-term support (LTS), 64-bit
- Windows® 10, 64-bit
- macOS 10.14, 64-bit
Intel® Processor Graphics with corresponding operating systems (GEN Graphics)
Intel HD Graphics
Intel® UHD Graphics
Intel® Iris® Pro Graphics
Operating Systems:
- Ubuntu 18.04 long-term support (LTS), 64-bit
- Windows 10, 64-bit
- Yocto 3.0, 64-bit
Note This installation requires drivers that are not included in the Intel Distribution of OpenVINO toolkit package.
Note A chipset that supports processor graphics is required for Intel Xeon processors. Processor graphics are not included in all processors. See Product Specifications for information about your processor.
Intel® Gaussian & Neural Accelerator (Intel® GNA)
Operating Systems:
- Ubuntu 18.04 long-term support (LTS), 64-bit
- Windows 10, 64-bit
FPGA processors with corresponding operating systems
Operating Systems:
- Ubuntu 18.04 long-term support (LTS), 64-bit
- Windows 10, 64-bit
VPU processors with corresponding operating systems
Intel Vision Accelerator Design with Intel Movidius™ Vision Processing Units (VPU) with corresponding operating systems
Operating Systems:
- Ubuntu 18.04 long-term support (LTS), 64-bit (Linux Kernel 5.2 and below)
- Windows 10, 64-bit
- CentOS 7.4, 64-bit
Intel Movidius Neural Compute Stick (Intel® NCS) and Intel® Neural Compute Stick 2 (Intel® NCS2) with corresponding operating systems
Operating Systems:
- Ubuntu 18.04 long-term support (LTS), 64-bit
- CentOS 7.4, 64-bit
- Windows 10, 64-bit
- Raspbian* (target only)
AI Edge Computing Board with Intel Movidius Myriad X C0 VPU, MYDX x 1 with corresponding operating systems
Operating Systems:
- Windows 10, 64-bit
Components Used in Validation
Operating systems used in validation:
- Linux* OS
- Ubuntu 16.04.6 with Linux kernel 4.15
- Ubuntu 18.04.3 with Linux kernel 5.3
- Ubuntu 18.04.3 with Linux kernel 5.6 for 10th Generation Intel® Core™ Processors (formerly codenamed Ice Lake) and 11th Generation Intel® Core™ Processor Family for Internet of Things (IoT) Applications (formerly codenamed Tiger Lake)
- CentOS 7.4 with Linux kernel 5.3
- Intel® Graphics Compute Runtime. Required only for GPU.
- 19.41 by default
- 20.35 for 10th Generation Intel® Core™ Processors (formerly codenamed Ice Lake) and 11th Generation Intel® Core™ Processor Family for Internet of Things (IoT) Applications (formerly codenamed Tiger Lake)
- Windows 10 version 1809 (known as Redstone 5)
- OS X 10.14
- Raspbian 9
DL frameworks used for validation:
- TensorFlow 1.14.0 and 1.15.2
- Apache MxNet* 1.5.1
Helpful Links
All Documentation, Guides, and Resources
Legal Information
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at https://www.intel.com/ or from the OEM or retailer.
No computer system can be absolutely secure.
Intel, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
*Other names and brands may be claimed as the property of others.
Copyright © 2020, Intel Corporation. All rights reserved.