Intel® Distribution of OpenVINO™ Toolkit Release Notes

ID 780177
Updated 3/26/2019
Version
Public


Introduction

NOTE: The Intel® Distribution of OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK

The Intel® Distribution of OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNNs), the toolkit extends CV workloads across Intel® hardware, maximizing performance.

The Intel® Distribution of OpenVINO™ toolkit:

  • Enables CNN-based deep learning inference on the edge.
  • Supports heterogeneous execution across Intel CV accelerators, using a common API for the CPU, Intel® Integrated Graphics, Intel® Movidius™ Neural Compute Stick (NCS), Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ VPUs and Intel® FPGAs.
  • Speeds time-to-market through an easy-to-use library of CV functions and pre-optimized kernels.
  • Includes optimized calls for CV standards, including OpenCV*, OpenCL™, and OpenVX*.

New and Changed in the OpenVINO™ 2018 R5.0.1 Release

Intel® Distribution of OpenVINO™ toolkit 2018 R5.0.1 includes functional and security updates. Users should update to the latest version.

New and Changed in the OpenVINO™ 2018 R5 Release

Model Optimizer

Common changes

  • Added support for 1D convolutions in all supported frameworks.
  • Updated pre-built protobuf python packages for Windows* host to version 3.6.1.
  • Fixed the Model Optimizer crashes related to networkX library incompatibility between 1.X and 2.Y versions.
  • Optimized the Model Optimizer to convert models with many output nodes faster.
  • Removed 'axis' and 'num_axes' attributes from the Reshape layer parameters.
  • Improved Model Optimizer error messages.
  • The IR version is increased from 3 to 4. The IR of version 2 can be generated using the “--generate_deprecated_IR_V2” command line parameter.

ONNX*

  • Added support of the following ONNX* operations: Gather, Gemm, ReduceSum, GlobalMaxPool, Neg, and Pad (when it is not fusible into a convolution).
  • The Model Optimizer now converts publicly available models generated with the PaddlePaddle* to ONNX* converter.

TensorFlow*

  • Added support of the following TensorFlow* operations: Gather, GatherV2, ResourceGather, Sqrt, Square, ResizeBilinear (full support), ReverseSequence near the LSTM loop, and Pad/PadV2/MirrorPad (when they are not fusible into a convolution).
  • Added support of the following TensorFlow* topologies: VDCNN, Unet, A3C, DeepSpeech, lm_1b, lpr-net, CRNN, NCF, RetinaNet, DenseNet, ResNext.
  • Added support for Reverse and Bi-directional forms of LSTM loops in the TensorFlow* models.
  • Added ability to load TensorFlow* model from sharded checkpoints.
  • Fixed bug with conversion of the TensorFlow* model with Split/Unstack operations where not all output tensors are used.

Caffe*

  • Added support of the following Caffe* operations: ShuffleChannel, Axpy, BN, Scale with two inputs.
  • Added support of the following Caffe* topologies: Squeeze-and-Excitation Networks (SE-BN-Inception, SE-Resnet-101, SE-ResNet-152, SE-ResNet-50, SE-ResNeXt-101, SE-ResNeXt-50), ShuffleNet v2.
  • Removed legacy entry point to the Model Optimizer for Caffe* - "<INSTALL_DIR>/deployment_tools/model_optimizer/ModelOptimizer".

MXNet*

  • Added support of the following MXNet* operations: stack, swapaxis, zeros, rnn, rnn_param_concat, slice_channel, _maximum, _minimum, max, InstanceNorm, and Pad (when it is not fusible into a convolution).
  • Added support of the following MXNet* topologies: mtcnn_l, Lightened_Moon, RNN-Transducer.
  • Added support for 3D convolutions, deconvolutions and poolings for MXNet* models.
  • Added support for unidirectional LSTM (with a single LSTM layer) in MXNet* models.

Inference Engine

Common changes

  • Introduced preview support for the NN Builder API, a feature that provides the ability to create a graph using the runtime API only, without needing to load an IR.
  • Introduced preview support for Ubuntu* 18.04, Yocto* Poky* 2.5.
  • Introduced preview support for Raspbian* 9 as a host for Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2 targets.
  • Python* API support is now gold: it is validated on all supported platforms, and the API is not going to be modified. New samples were added for the Python API (a minimal usage sketch follows this list).
  • Fixed paddings for Convolution, Deconvolution, Pooling after Shape Inference in all the plugins.
  • Corrected Shape Inference for Reshape/Flatten layers.
  • Names of Debug DLLs on Windows* are extended with the additional postfix "d".
  • C++ version of the Cross Check Tool has been deprecated.
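
The following minimal sketch illustrates the synchronous inference flow with the Python* API; the IR file names, device choice, and 1x3x224x224 input shape are placeholders rather than values taken from this release.

import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

# Placeholder IR file names produced by the Model Optimizer.
net = IENetwork(model="model.xml", weights="model.bin")
input_name = next(iter(net.inputs))
output_name = next(iter(net.outputs))

# Any supported device string can be used here, for example "GPU" or "MYRIAD".
plugin = IEPlugin(device="CPU")
exec_net = plugin.load(network=net)

# Dummy NCHW input; replace with a real pre-processed image matching the model shape.
image = np.zeros((1, 3, 224, 224), dtype=np.float32)
result = exec_net.infer(inputs={input_name: image})
print(result[output_name].shape)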

CPU plugin

  • Updated Intel® MKL-DNN version to v0.17.
  • Improved support for Low-Precision 8-bit Integer Inference:
    • It is now supported on platforms with the Intel® Advanced Vector Extensions 2 (Intel® AVX2) or Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2) instruction sets.
  • The number of layers that can execute in low precision has increased, providing optimized execution for a broader set of networks, such as the DenseNet family.
  • Introduced preview support for LSTM networks (see the Model Optimizer section for details).
  • Introduced support for models based on 3D convolutions.
  • Introduced support for streams, a new mechanism for parallelization on the CPU. Streams allow you to get the maximum throughput, especially on multi-core servers (see the configuration sketch after this list).
  • Introduced support for Gemm, Gather and Pad layers.
  • Fixed Deconvolution layer for the case when kernel >= pad.
  • Fixed dynamic batch support for Tile layer in the case of axis != 0.
  • Fixed support for bias broadcast for Convolution layer.
  • Fixed the accuracy for MVN layer.
  • Fixed paddings handling for Deconvolution and Pooling layers.
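
A possible way to enable CPU streams from the Python* API is sketched below; the IR file names are placeholders, and the configuration key is assumed to mirror the C++ KEY_CPU_THROUGHPUT_STREAMS constant, so check the CPU plugin documentation for the exact keys supported by this release.

from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="model.xml", weights="model.bin")  # placeholder IR files
plugin = IEPlugin(device="CPU")

# Request four throughput streams; "CPU_THROUGHPUT_AUTO" lets the plugin choose.
# The key string is assumed to mirror the C++ KEY_CPU_THROUGHPUT_STREAMS constant.
plugin.set_config({"CPU_THROUGHPUT_STREAMS": "4"})

# Create several infer requests so that all streams can be kept busy in parallel.
exec_net = plugin.load(network=net, num_requests=4)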

GPU plugin

  • Introduced support for Tile and Pad layers.
  • Stability improvements: fixed crashes on Const, Split, PReLU, and Concat layers for some topologies.
  • Fixed a significant memory leak.

FPGA Plugin

  • New DLA 5.0 bitstreams, which include the PReLU primitive, are enabled by default.
  • Enabled the MTCNN-R topology.

MYRIAD Plugin

  • Added the 'VPU_PLATFORM' configuration option to explicitly specify a device platform for Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2 to run inference.
  • Added the Import/Export API.
  • Improved batch support.
  • Added support for the Pad layer.
  • Added support for Resample layers.
  • Stability and accuracy fixes and improvements.

HDDL Plugin

  • Introduced a new plugin to enable inference of neural networks on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs

GNA plugin

  • Introduced support for Crop and Copy layers.
  • Introduced the Async mode of inference.

Samples and Demos

New and updated demos and samples are provided to support the new and updated pre-trained models delivered with the product.

New samples

  • text_detection_demo 
  • perfcheck sample

Updated samples

  • smart_classroom_demo 
  • crossroad_camera_demo 
  • security_barrier_camera_demo 
  • super_resolution_demo

OpenCV*

  • Added support of FPGA target for deep learning networks using the Inference Engine backend.
  • Compiled for Raspbian* 9 OS (ARM* CPU) including python2, python3 bindings, GStreamer* and GTK* support.

OpenVX*

  • OpenVX* samples were switched to the newest version of OpenCV.
  • Added LUT3D support for 3-channel to 1-channel color conversions.

Examples and Tutorials

Open Model Zoo

Extended the Open Model Zoo, which includes additional CNN pre-trained models and pre-generated Intermediate Representations (.xml + .bin):

  • face-reidentification-retail-0095: Lightweight model based on Mobilenet V2 backbone for face re-identification. Replaces the previous version and provides better accuracy in the pairwise re-identification test.
  • single-image-super-resolution-0063: Single image super resolution network, enhances the resolution of the input image by a factor of 4. Replaces the previous version.
  • person-attributes-recognition-crossroad-0200: Person attributes recognition model for the Crossroad scenario, extended to recognize clothes color. Replaces the previous version.
  • person-detection-action-recognition-0004: Person and action detection model for Smart Classroom scenario. Replaces the previous version and provides better accuracy. 
  • [NEW] text-detection-0001: Text detector for indoor/outdoor scenes based on the PixelLink architecture with MobileNetV2 as a backbone.
  • [NEW] single-image-super-resolution-1011: Single image super resolution network; enhances the resolution of the input image by a factor of 4. Adapted for 480x270 input so that the output is 1080p. Faster than -0063 and consumes less memory.
  • [NEW] single-image-super-resolution-1021: Single image super resolution network; enhances the resolution of the input image by a factor of 3. Adapted for 640x360 input so that the output is 1080p. Faster than -0063 and consumes less memory.

Computer Vision Algorithms (CVA)

The CVA component now includes four more pre-built algorithms. All algorithms are capable of running on either a CPU or GPU. The additional algorithms are:

  • Vehicle/License Plate Detection: Detects vehicles and (Chinese) license plates in Road Barrier scenario.
  • Person Attributes Recognition: Includes a CNN model pre-trained for person attributes classification. Fine-tuned for Crossroad scenario.
  • Vehicle Attributes Recognition: Includes a CNN model pre-trained for vehicle attributes (type, color) classification. Fine-tuned for Road Barrier scenario.
  • (Chinese) License Plate Recognition: Includes a CNN model pre-trained for (Chinese) License Plate Recognition. Fine-tuned for Road Barrier scenario.

Model Downloader

Model Downloader configuration file is extended to support the following public models in Caffe* and TensorFlow* formats:

Model  Format

license-plate-recognition-barrier-0007 (released as a part of the TensorFlow* Toolbox)  TensorFlow
se-inception  Caffe
se-resnet-101  Caffe
se-resnet-152  Caffe
se-resnet-50  Caffe
se-resnext-50  Caffe
se-resnext-101  Caffe
Sphereface  Caffe

 

New and Changed in the OpenVINO™ 2018 R4 Release

Model Optimizer

ONNX*

  • Added support of the following ONNX* operations:
    • Constant_fill (constant propagation)
    • Mul/Add (added ‘axis’ support)
    • Gather (constant propagation) 
    • ConvTranspose (added auto_pad and output_shape attrs support)
    • Pow
    • Sigmoid 
    • Tanh 
    • Crop 
    • Reciprocal
    • Affine 

TensorFlow*

  • Added support of the following TensorFlow* topologies:
    • RFCN (from the TensorFlow* Object Detection API models zoo, version 1.9.0 or lower) 
    • YOLOv3 
    • OpenPose 
    • Convolutional Pose Machines
  • Added support of the following TensorFlow* operations:
    • Concat (only ConcatV2 was supported previously) 
    • Slice (now supported in all cases) 
    • CropAndResize (for case with ‘bilinear’ method) 
    • Minimum 
    • DepthToSpace (inference on CPU only due to 6D tensors)
  • Improved workflow for the TensorFlow* Object Detection API models conversion. See documentation.
    • The Model Optimizer now accepts the --input_shape command line parameter for this type of topology, respects the image resizer block type defined in the pipeline.config file, and generates input layer dimensions based on the provided --input_shape.
    • SSD models are now generated reshape-able: the Model Optimizer generates a number of PriorBoxClustered nodes instead of a Const node with prior boxes.
    • Fixed producing a correct model with a non-square input layer size.
  • Fixed the reported issues with loading a topology from the meta-graph file.
  • Added ability to load shared libraries with custom TensorFlow* operations to re-use shape infer function.

MXNet*

  • Added support of the following MXNet* operations:
    • Pad (for case when it is not fused into the Convolution) 
    • _minus_scalar 
    • _mul_scalar 
    • _contrib_Proposal 
    • ROIPooling

Other Changes

  • The default IR version is increased from 2 to 3. The IR of version 2 can be generated using the --generate_deprecated_IR_V2 command line parameter.
  • The [ and ] symbols in the --freeze_placeholder_with_value command line parameter are now handled properly.
  • If the graph becomes empty during transformations, an error message is now reported.
  • Shape inference functions are fixed for a number of supported layers across all supported frameworks (not only Caffe*).
  • Meta information is added to the IR with information about command line parameters used and the Model Optimizer version.
  • Improved error messages and documentation.
  • IR generation now fails if the output shape for one of the nodes is non-positive or not an integer.
  • Added support of a new pattern for the PReLU operation.
  • Added broadcasting (if necessary) of constant node in the Eltwise operation.

Inference Engine

Common Changes

Enabled the extensibility mechanism of the Shape Inference feature, which allows resizing a network with custom layers after reading the model. A minimal resizing sketch follows this paragraph.
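
A minimal sketch of resizing a network after reading the IR is shown below using the Python* API; the reshape() call follows the IENetwork interface documented for later releases and the target shape is arbitrary, so treat it as an illustration of the Shape Inference flow rather than release-specific sample code.

from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="model.xml", weights="model.bin")  # placeholder IR files
input_name = next(iter(net.inputs))

# Resize the network input to an assumed 608x608 target after reading the model.
# IENetwork.reshape() is taken from later Python API releases; the C++ equivalent
# is CNNNetwork::reshape().
net.reshape({input_name: (1, 3, 608, 608)})

# Custom layers need their shape inference registered through the extensibility
# mechanism before the resized network can be loaded.
plugin = IEPlugin(device="CPU")
exec_net = plugin.load(network=net)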

CPU Plugin

  • Added preview support for low-precision 8-bit integer inference on CPU platforms with support of Advanced Vector Extensions 512 (Intel® AVX-512) instructions. 
  • Introduced the calibration tool to convert IRs of Classification and Object Detection SSD models in FP32 format to calibrated IRs that can be executed in int8 mode. 
  • Platforms that do not have Intel® AVX-512 instructions, as well as accelerator platforms, execute a calibrated IR in the FP32, FP16, or FP11 data format, depending on the target platform optimizations.
  • Updated Intel® MKL-DNN version to v0.16
  • Topology-specific performance optimizations:
    • Added support of Convolution with ScaleShift and PReLU layers fusing.
    • Improved performance of the Softmax layer for dense cases.
  • Bug fixes

GPU Plugin

  • Updated clDNN version to 9.1
  • Topology-specific performance optimizations
  • Bug fixes

FPGA Plugin

  • PReLU primitive support
  • New DLA 4.0 bitstreams with ELU and CLAMP layers support
  • Various bugfixes that improve accuracy of TensorFlow* and MXNet* ResNet topologies
  • Caffe* DenseNet topologies support
  • Error callback to inform about the reasons why a specific layer is not supported by the FPGA plugin
  • Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA board support

MYRIAD Plugin

  • Support of Intel® Neural Compute Stick 2
  • Improved batch support
  • Improved SSD-based networks support
  • Topology-specific performance optimizations
  • Stability improvements
  • Bug fixes

GNA Plugin

  • Improved Concat layer
  • Fixed issues with scaling in EltWise, ReLu and FullyConnected layers. 
  • Improved accuracy of LSTM networks with 8-bit quantization.
  • Added basic support for convolutional layers of TensorFlow* models. 

OpenCV*

  • Updated version to 4.0
  • Added ONNX* importer into dnn module; several image classification topologies, as well as YOLO object detection network are supported.
  • Added source level support for ARM* build (NEON optimized)
  • Improved AVX2 support via universal intrinsics
  • Added G-API module
    • OpenCV reference backend
    • Ported Color Copy Pipeline sample from OpenVX* to OpenCV*
  • Added QR code detector (objdetect module)

OpenVX*

Improvements for the color copy pipeline use case: OpenCL™ plugin fixes and performance optimizations for printing and imaging scenarios, new pre-processing features in the color copy pipeline sample (for example, skew correction), and a halftoning alternative to error diffusion.

Examples and Tutorials

Open Model Zoo

Extended the Open Model Zoo, which includes additional CNN pre-trained models and pre-generated Intermediate Representations (.xml + .bin): 

  • face-reidentification-retail-0071: Lightweight model based on Mobilenet V2 backbone for face re-identification. Replaces the previous version providing more accurate results.
  • landmarks-regression-retail-0009: Lightweight landmarks regression model for Smart Classroom scenario. Replaces the previous version providing more accurate results.
  • person-detection-action-recognition-0003: Person and action detection model for Smart Classroom scenario. Replaces the previous version providing more accurate results.
  • facial-landmarks-35-adas-0001: Convolutional neural network with custom architecture used for estimation of 35 facial landmarks.
  • human-pose-estimation-0001: Multi-person 2D pose estimation network. The model recognizes human pose: body skeleton, which consists of keypoints and connections between them.
  • single-image-super-resolution-0034: Single image super resolution network, enhances the resolution of the input image by a factor of 4.

Computer Vision Algorithms (CVA)

Vehicle/Pedestrian/Bicycle Detection for Crossroad scenario: fixed minor bugs.

Model Downloader

Model downloader configuration file is extended to support the following public models in Caffe* and TensorFlow* formats:

Accuracy Checker

Accuracy Checker is a console tool that allows you to infer deep learning models and collect cumulative accuracy metrics against datasets.

Key Features:

  • Full validation pipeline from model conversion to metric evaluation.
  • Popular task specific metrics: top-k values — for classification tasks, mean average precision, recall and miss rate — for object detection, mean intersection over union and pixel accuracy — for semantic segmentation, cumulative matching characteristics — for object re-identification, and others.
  • Configurable pre-processing of the input dataset. Image resizing, normalization, channels inversion, flipping, and other configuration options are available.
  • Configurable post-processing of inference results. Filtering, resizing bounding boxes, clipping boxes, non-maximum suppression and other options are available.

Samples

  • Updated samples to support OpenCV* 4.0
  • Improved architecture of samples and demos CMake projects:
    • The cpu_extension target is excluded from the samples folder and moved to the src folder of the package. The cpu_extension library is built automatically as a dependency during samples build (see the loading sketch after this list).
    • CMake* binaries location is changed for Windows* (to avoid building in the Intel Distribution of OpenVINO toolkit installation folder).
    • Added support of the QUIET option for the Inference Engine CMake* package.
  • Added the hello_shape_infer_ssd sample to demonstrate the extensibility mechanism of the Shape Inference feature.
  • Added support for multiple video inputs in security_barrier_camera_demo to demonstrate the benefits of using FPGA in NVR cases.
  • Added new demo applications and tools:
    • human_pose_estimation_demo
    • object_detection_demo_yolov3_async
    • pedestrian_tracker_demo
    • super_resolution_demo
    • benchmark_app
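
As a rough illustration of how an application picks up the automatically built cpu_extension library, the Python* sketch below loads it before reading a network; the library path and IR file names are placeholders.

from openvino.inference_engine import IENetwork, IEPlugin

plugin = IEPlugin(device="CPU")
# Placeholder path to the automatically built cpu_extension library.
plugin.add_cpu_extension("libcpu_extension.so")

# Networks that rely on topology-specific layers (for example, DetectionOutput)
# can now be read and loaded as usual.
net = IENetwork(model="model.xml", weights="model.bin")  # placeholder IR files
exec_net = plugin.load(network=net)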

New and Changed in the OpenVINO™ 2018 R3 Release

Model Optimizer

  • ONNX*
    • Further ONNX* models conversion support:
      • All models from the public list at https://github.com/onnx/models are now supported
      • Added support of opset8 (additionally to opset6 and opset7)
      • Added support of the following ONNX operators:
        • Constant
        • ConvTranspose
        • Div
        • Elu
        • Flatten
        • ImageScaler
        • InstanceNormalization
        • LeakyRelu
        • MatMul
        • Mul
        • Pad
        • Shape
        • Squeeze
        • Sub
        • Transpose
        • Unsqueeze
        • Upsample
  • TensorFlow*
    • Improved Semantic Segmentation support
      • Resize Bilinear operation is now supported and translated to the Interp layer of IR.
    • Improved workflow for object detection models conversion. See documentation.
      • Now the Model Optimizer reads model parameters from a configuration file that was used to train the model. The conversion approach used in previous OpenVINO releases is deprecated. 
      • Most models from the Object Detection model zoo are supported. Faster R-CNNs and Mask R-CNNs are supported on CPU only and with batch size 1.
      • Enabled conversion of object detection SSD models from the Object Detection model zoo with a batch size bigger than 1
      • Fixed accuracy issues with SSD models from the Object Detection model zoo
    • Enabled support of YOLO* v1 and v2 from DarkNet*. See documentation.
    • FaceNet support
      • Introduced general support of freezing of the dynamic input (via command-line). The dynamic input of the FaceNet model is the training/inference flag (true is for training). See documentation. 
    • Model Optimizer now supports loading non-frozen models. See documentation.
    • The summarize_graph script is available to dump the model input and output nodes. The script is located in the <INSTALL_DIR>/deployment_tools/model_optimizer/mo/utils directory.

Inference Engine

  • Feature preview for Shape Inference. This feature allows you to change the model input size after reading the IR in the Inference Engine (without need to go back to the Model Optimizer).
  • Image pre-processing (Crop/Resize):
    • New API to mark input as resizable
    • New API to optimally crop a region of interest (ROI) 
    • Use of the image pre-processing API is demonstrated in the updated object_detection_demo_ssd, crossroad_camera_sample, security_barrier_camera_sample samples and the newly created hello_autoresize_classification sample.
  • Support of NHWC as an input layout for all plugins. Performance optimization will be available in future releases.
  • CPU Plugin
    • Updated MKL-DNN version to 0.15.
  • FPGA Plugin
    • Updated to the DLA 2.0.1 PRQ version, which includes new bitstreams
    • Multi-card support
    • HD image support
    • Deconvolution layer support in case when slicing of input blob is not needed
    • Hardware depthwise convolution support
    • Power and ScaleShift layers are optimized: they are now implemented on top of depthwise convolutions
    • Feature preview for support of binary/unary custom activation layers. Can be used without recompiling the Inference Engine FPGA plugin. Ask your Intel FAE for details.
    • MobileNet v1 support
    • Graph swapping mechanism and OpenVINO FPGA runtime are optimized
    • Special AOCL FPGA RTE (patched) that improves FPGA performance. Installation notes:
      • The RTE is delivered within the OpenVINO package only and installed by the package installer. 
      • Requires running `aocl install` after the OpenVINO core components installation.
      • After installation is completed, use the `aocl version` command to verify the version: it should be 17.1.1.273; otherwise, update your environment scripts.
      • For documentation, refer to the FPGA setup section in the in-package documentation. 
  • MYRIAD Plugin
    • Batch support
    • Added support for Interp, PSROIPooling and Proposal layers
    • Extended Avg/Max Pooling, CTCDecoder, Convolution layers to support CHW layout
    • Extended Pooling layer to support ExcludePad parameter
    • Updated error messages
    • Replaced the Activation layer with a CNNLayer of the appropriate type
    • USB protocol stability fixes
    • Fixes for Permute, DetectionOutput, Eltwise::SUM layers
  • GNA Plugin
    • Hardware Performance counters API
    • speech_sample is updated to output performance counters and to improve the formatting of the resulting output
    • Support of Slice layer
    • Aligned the use of Activation layers in IRs with other plugins: the Activation type is now deprecated; instead, appropriate types such as ReLU and Sigmoid are used
    • Fixed problem with incorrect reading of the GNA AOT (ahead of time) model from a file
    • Fixed padding calculation problem in Split layer

OpenCV*

  • Updated version to 3.4.3
  • Added initial Intel® Advanced Vector Extensions 2 support via universal intrinsics
  • Enabled GStreamer* backend on Linux*

OpenVX*

  • The implementation of vxConvolveNode detects symmetry within the convolution matrix input parameter, and switches to the more efficient implementation utilizing SymmetricalNxNFilter extension.
  • The SymmetricalNxNFilter Intel extension implementation on Intel® GPU is improved. Updating the convolution matrix parameters no longer results in time-consuming OpenCL™ kernel recompilation.
  • New image padding Intel extension is supported on Intel® CPU and GPU hardware.
  • A new color palettes conversion Intel extension is supported on Intel CPU. It includes 8 Bits Per Pixel (BPP) to 1 BPP pack, 1 BPP to 8 BPP unpack, 8 BPP to 2 BPP pack, 2 BPP to 8 BPP unpack, 8 BPP to 4 BPP pack, and 4 BPP to 8 BPP unpack kernels. The 8 BPP <-> 2 BPP and 8 BPP <-> 4 BPP kernels are Beta (non-optimized) versions.
  • ITT instrumentation issues are fixed. Integration with Intel® VTune™ is improved. 

Examples and Tutorials

  • Extended Open Model Zoo, which includes additional CNN pre-trained models and pre-generated Intermediate Representations (.xml + .bin):
    • vehicle-license-plate-detection-barrier-0106: Multi-class (vehicle, license plates) detector. Replaces the previous version and runs faster while maintaining the same accuracy.
    • person-detection-action-recognition-0001: Person and action detection model for Smart Classroom scenario.
    • person-reidentification-retail-0031: Ultra-small/fastest person re-identification model. Calculates person descriptor.
    • face-reidentification-retail-0001: Lightweight model for face re-identification.
    • landmarks-regression-retail-0001: Lightweight landmarks regression model for Smart Classroom scenario.
  • Computer Vision Algorithms (CVA) component now includes three more pre-built algorithms. All algorithms are capable of running on Intel CPU or GPU hardware. The additional algorithms are:
    • Emotions Recognition: Includes a CNN model pre-trained for emotions classification. The component's API allows running emotions classification on a batch of images.
    • Person Re-identification: Includes a CNN model pre-trained for person descriptor calculation. The component's API allows running the person descriptor calculation model on a batch of images.
    • Vehicle/Pedestrian/Bicycle Detection: Detects objects in a Crossroad scenario.
  • Model downloader configuration file is extended to support these public models in Caffe* and TensorFlow* formats:
  • Samples
    • Added eight-channel Face Detection samples for FPGA, including fine-tuning of FP11-based use cases on related topologies.

New and Changed in the OpenVINO™ 2018 R2 Release

Inference Engine

  • CPU Plugin
    • Memory use optimizations.
    • Performance optimizations, including optimized split and concat implementations, added in-place optimization for multi-batch mode, support of convolution fusings with ReLU6, ELU, Sigmoid and Clamp layers.
    • Updated Intel® Math Kernel Library for Deep Neural Networks version to 0.14.
  • GPU Plugin
    • Added ability to change batch size dynamically for certain network topologies.
  • FPGA Plugin
    • Updated DLA to version 1.0.1 supporting Intel® Arria® 10 GX FPGA Development Kit and Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
    • Enabled FP11 bitstreams for object detection networks.
    • Updated software stack for Intel® Programmable Acceleration Card with Intel Arria® 10 GX FPGA to PAC 1.1.
    • Updated version of Intel® FPGA Runtime Environment for OpenCL™ Linux* to 17.1.2.
  • Myriad Plugin
    • Added Windows* 10 host system support.
    • Improved and fixed handling of multiple Intel® Myriad™ devices.
    • Added Intel® Myriad™ support to Hetero Plugin.
    • Improved performance of Deconvolution and Depthwise Convolution layers.
    • Added support for MVN and GRN layers and new optimized kernel for Global Pooling.
  • A preview version of GNA Plugin is introduced.

Model Optimizer

  • Support for selected Faster R-CNN models from the TensorFlow* Object Detection model zoo.
  • Feature Preview for ONNX* models conversion support.
  • Feature Preview for Kaldi* models conversion support.
  • Easier conversion of SSD models from the TensorFlow Object Detection model zoo.

OpenCV*

  • Updated version to 3.4.2.
  • Enabled the Inference Engine as a backend for the DNN module.
  • Added Python* bindings for Python 2.7, 3.5, and 3.6.
  • Updated Intel® Integrated Performance Primitives version to 2018 Update 2.

OpenVX*

  • RgbToYCbCr Intel extension support on GPU.
  • RgbToLab Intel extension support on GPU.
  • The new symmetrical NxN filter Intel extension is supported on CPU and GPU. 3x3, 5x5, 7x7, and 9x9 filter aperture sizes are supported.
  • Symmetrical 7x7 filter Intel extension support on GPU. It will be replaced by the NxN filter extension in future releases.
  • LUT3D Intel extension enhancements. The user can now set a programmable number of lattice points in the range [2,33]. The user can now optionally pass in a custom lattice point mapping table for each of the 3 input channels. These are used to map pixel values [0,255] to the lattice point range [0,nlatticepoints-1], with floating point precision.
  • Rotate 90 Intel extension support on GPU. 0, 90, 180 and 270 angles of rotation are supported. 
  • Channel separate Intel extension support on GPU for RGB and RGBX input images. 
  • Warp affine with the new bicubic interpolation Intel extension on CPU and GPU. Catmull-Rom spline interpolation is utilized.
  • OpenVX context thread safety issue is resolved. Multiple OpenVX graphs can run in parallel now. Several other potential data race issues are resolved as well.

Examples and Tutorials

  • Added new CNN pre-trained models (prototxt) + pre-generated Intermediate Representations (.xml + .bin):
    • person-detection-retail-0013: Person Detection (faster than person-detection-retail-0001, replaces person-detection-retail-0012)
    • vehicle-attributes-recognition-barrier-0039: Vehicle attributes (type/color) recognition (replaces vehicle-attributes-recognition-barrier-0010)
    • person-vehicle-bike-detection-crossroad-0078: Multi-class (person, vehicle, non-vehicle) detector (replaces person-vehicle-bike-detection-crossroad-0066)
    • face-person-detection-retail-0002: Multi-class (faces + pedestrians) detector (retail use cases)
    • pedestrian-detection-adas-0002: Pedestrian detector (ADAS scenario)
    • vehicle-detection-adas-0002: Vehicle detector (ADAS scenario)
    • pedestrian-and-vehicle-detector-adas-0001: Multi-class (pedestrians + vehicles) detector (ADAS scenario)
    • person-attributes-recognition-crossroad-0031: Person attributes classification for a traffic analysis scenario
    • emotions-recognition-retail-0003: Emotions (neutral/happy/sad/surprise/anger) classification (retail use cases)
    • person-reidentification-retail-0076: Person re-identification model for general scenario (more precise than person-reidentification-retail-0079)
    • person-reidentification-retail-0079: Person re-identification model for general scenario (faster than person-reidentification-retail-0076)
    • road-segmentation-adas-0001: Segmentation network to classify each pixel into 4 classes (BG, road, curb, mark) for ADAS use cases
    • semantic-segmentation-adas-0001: Segmentation network to classify each pixel into 20 classes (ADAS scenario)
  • New Computer Vision Algorithms (CVA) component includes three pre-built algorithms:
    • Face Detector is a component that includes a CNN model pre-trained for Face Detection. The goal of FD is to detect faces of the people who are present in the camera field of view and are looking at the camera. Face Detector can run on the CPU, Intel® Integrated Graphics, and the Intel® Movidius™ Neural Compute Stick.
    • Age/Gender Recognition (Face Analyzer) is a deep-learning based software component that provides face analysis algorithms, that is, accurate recognition of a person's age and gender. Age/Gender Recognition can run on the CPU, Intel® Integrated Graphics, and the Intel® Movidius™ Neural Compute Stick.
    • Camera Tampering Detection is a component intended to recognize malicious effects on a camera. It detects camera tampering events such as occlusion, de-focus, or displacement using a classical computer vision approach. Camera Tampering Detection can run on the CPU.
  • The model downloader configuration file has been extended to support the following public models in Caffe* format:

 

New and Changed in the OpenVINO™ 2018 R1.2 Release

Inference Engine

  • Preview Features:
    • Added Python* API support.
    • Added six samples to demonstrate Python* API usage.
    • Added two samples to demonstrate Python* API interoperability use-case between AWS Greengrass* and the Inference Engine. Details are in the AWS Greengrass* Samples Overview

 

New and Changed in the OpenVINO™ 2018 R1.1 Release

Model Optimizer

The Model Optimizer component has been replaced by a Python*-based application, with a consistent design across the supported frameworks. Key features are listed below. See the Model Optimizer Developer Guide for more information.

  • General changes:
    • Several CLI options have been deprecated since the last release. 
    • More optimization techniques were added.
    • Usability, stability, and diagnostics capabilities were improved.
    • Microsoft* Windows* 10 support was added.
    • A total of more than 100 public models are now supported for Caffe*, MXNet*, and TensorFlow* frameworks. 
    • A fallback to the original framework is available for unsupported layers; the framework must be installed for this fallback to work.
  • Caffe* changes:
    • The workflow was simplified, and you are no longer required to install Caffe.
    • Caffe is no longer required to generate the Intermediate Representation for models that consist of standard layers and/or user-provided custom layers. User-provided custom layers must be properly registered for the Model Optimizer and the Inference Engine. 
    • Caffe is now only required for unsupported layers that are not registered as extensions in the Model Optimizer.
  • TensorFlow* support is significantly improved, and now offers a preview of the Object Detection API support for SSD*-based topologies.

Inference Engine

  • Added Heterogeneity support:
    • Device affinities via API are now available for fine-grained, per-layer control.
    • You can now specify a CPU fallback for layers that the FPGA does not support. For example, you can specify HETERO:FPGA,CPU as a device option for Inference Engine samples.
    • You can use the fallback for CPU + Intel® Integrated Graphics if you have custom layers implemented only on the CPU, and you want to execute the rest of the topology on the Intel® Integrated Graphics without rewriting the custom layer for the Intel® Integrated Graphics.
  • Asynchronous execution: The Asynchronous API improves the overall application frame rate, allowing you to perform secondary tasks, like next-frame decoding, while the accelerator is busy with current-frame inference (see the sketch after this list).
  • New customization features include easy-to-create Inference Engine operations. You can:
    • Express the new operation as a composition of existing Inference Engine operations or register the operation in the Model Optimizer.
    • Connect the operation to a new Inference Engine layer in C++ or OpenCL™. The existing layers are reorganized into “core” (general primitives) and “extensions” (topology-specific, such as DetectionOutput for SSD). These extensions now come as source code that you must build and load into your application. After the Inference Engine samples are compiled, this library is built automatically, and every sample explicitly loads the library upon execution. The extensions are also required for inference with the pre-trained models.
  • Plugin support added for the Intel® Movidius™ Neural Compute Stick hardware (Myriad2).
  • Samples are provided for an increased understanding of the Inference Engine, APIs, and features:
    • All samples automatically support heterogeneous execution.
    • Async API showcase in Object Detection via the SSD sample.
    • A minimalistic "Hello, classification" sample to demonstrate Inference Engine API usage.
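
The sketch below illustrates the asynchronous flow and the heterogeneous device string described in this list. It uses the Python* API (introduced as a preview in a later release) purely for brevity; the IR file names, input shape, and the zero status-code check are assumptions, and the C++ Async API follows the same request/wait pattern.

import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="model.xml", weights="model.bin")  # placeholder IR files
input_name = next(iter(net.inputs))

# "HETERO:FPGA,CPU" runs layers on the FPGA where possible and falls back to the CPU.
plugin = IEPlugin(device="HETERO:FPGA,CPU")
exec_net = plugin.load(network=net, num_requests=2)

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder frame
exec_net.start_async(request_id=0, inputs={input_name: frame})
# ... decode and pre-process the next frame here while request 0 is in flight ...
if exec_net.requests[0].wait(-1) == 0:  # 0 is assumed to be the OK status code
    outputs = exec_net.requests[0].outputs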

OpenCV*

  • Updated to version 3.4.1 with minor patches. See the change log for details. Notable changes:
    • Implementation of on-disk caching of precompiled OpenCL kernels. This feature reduces initialization time for applications that use several kernels.
    • Improved C++ 11 compatibility on source and binary levels.
  • Added subset of OpenCV samples from the community version to showcase the toolkit capabilities:
    • bgfg_segm.cpp - background segmentation
    • colorization.cpp - performs image colorization using DNN module (download the network from a third-party site)
    • dense_optical_flow.cpp - dense optical flow using T-API (Farneback, TVL1)
    • opencl_custom_kernel.cpp - running custom OpenCL™ kernel via T-API
    • opencv_version.cpp - the simplest OpenCV* application - prints library version and build configuration
    • peopledetect.cpp - pedestrian detector using built-in HOGDescriptor

OpenVX*

  • A new memory management scheme with the Imaging and Analytics Pipeline (IAP) framework drastically reduces memory consumption.
    • Introduces intermediate image buffers that result in a significant memory footprint reduction for complex Printing and Imaging (PI) pipelines operating with extremely large images.
    • Deprecated tile pool memory consumption reduction feature. Removed from the Copy Pipeline sample.
  • The OpenVX* CNN path is not recommended for CNN-based applications and is partially deprecated:
    • CNN AlexNet* sample is removed.
    • CNN Custom Layer (FCN8) and Custom Layers library are removed.
    • The OpenVX* SSD-based Object Detection web article is removed.
    • OpenVX* FPGA plugin is deprecated. This is part of the CNN OVX deprecation.
  • The VAD tool for creating OpenVX* applications is deprecated and removed.
  • New recommendation: Use Deep Learning Inference Engine capabilities for CNN-based applications.

Examples and Tutorials

  • Model downloader for the OpenVINO™ toolkit public models in Caffe format:
  • Cross-check tool: To debug the model inference both in whole and layer-by-layer, comparing accuracy and performance between CPU, Intel® Integrated Graphics, and the Intel® Movidius™ Neural Compute Stick.
  • CNN pre-trained models (prototxt) + pre-generated Intermediate Representations (.xml + .bin):
    • age-gender-recognition-retail: Age and gender classification.
    • face-detection-retail: Face Detection.
    • person-detection-retail: Person detection.
    • license-plate-recognition-barrier: Chinese license plate recognition.
    • face-detection-adas: Face Detection.
    • person-detection-retail: Person Detection.
    • head-pose-estimation-adas: Head pose estimation (yaw + pitch + roll).
    • vehicle-attributes-recognition-barrier: Vehicle attributes (type/color) recognition.
    • person-vehicle-bike-detection-crossroad: Multiclass (person, vehicle, non-vehicle) detector.
    • vehicle-license-plate-detection-barrier: Multiclass (vehicle, license plates) detector.

 

Preview Features Terminology

A preview feature is functionality that is being introduced to gain early developer feedback. Comments, questions, and suggestions related to preview features are encouraged and should be submitted to the forum.

The key properties of a preview feature are:

  • It is intended to have a high-quality implementation.
  • There is no guarantee of future existence or compatibility.

NOTE: A preview feature is subject to change in the future. It may be removed or altered in future releases. Changes to a preview feature do NOT require a deprecation and deletion process. Using a preview feature in a production code base is discouraged.

 

Known Issues

ID Description Component Workaround
1 Releasing a non-virtual vx_array object after it has been used as a parameter in a graph and before graph execution may result in slow vxProcessGraph and data corruption. OpenVX* N/A
2 When a graph is abandoned due to a user node failure, the callbacks that are attached to skipped nodes are called. OpenVX N/A
3 The OpenVX* volatile kernels extensions API are subject to change. OpenVX N/A
4 Multiple user nodes accessing the same array cause an application crash. OpenVX N/A
5 Intel® Integrated Graphics equalize histogram node partially runs on CPU. OpenVX N/A
6 A user node hangs when calling Intel® Integrated Performance Primitives if the node is linked to IAP.so OpenVX N/A
7 Edge Tracing part of IPU Canny Edge detection runs on CPU. OpenVX N/A
8 The Harris Corners* Kernel Extension produces inaccurate results when the sensitivity parameter is set outside the range of [0.04; 0.15] OpenVX N/A
9 The API vxQueryNode() returns zero for custom Intel® Integrated Graphics nodes when queried for the attribute VX_NODE_ATTRIBUTE_PERFORMANCE. OpenVX N/A
10 Node creation methods do not allow using the NULL pointer for non-optional parameters. OpenVX N/A
11 The vx_delay object doesn’t support the vx_tensor and vx_object_array types OpenVX N/A
12 The vx_delay object is not supported as a user node input parameter OpenVX N/A
13 Scalar arguments are not changing dynamically in several nodes, including the ColorConvert node, on Intel® Integrated Graphics in the Runtime OpenVX N/A
14 The OpenCL™ out-of-order queue feature might slow down a single-node graph OpenVX N/A
15 On CPU, the rounding_policy parameter of vxConvolutionLayer is ignored; TO_ZERO rounding is used in all cases OpenVX N/A
16 On CPU, the rounding_policy parameter of vxFullyConnectedLayer is ignored; TO_ZERO rounding is used in all cases OpenVX N/A
17 On CPU, the rounding_policy parameter of vxTensorMultiplyNode is ignored; the TO_ZERO policy is used in all cases OpenVX N/A
18 Unsupported Dynamic Shapes for Caffe* layers Model Optimizer N/A
19 Not all TensorFlow operations are supported; only a limited set of operations can be successfully converted. Model Optimizer Enable unsupported ops through Model Optimizer extensions and IE custom layers
20 Only TensorFlow models with FP32 Placeholders are supported. If there is a non-FP32 Placeholder, the next immediate operation after this Placeholder should be a Cast operation that converts to FP32. Model Optimizer Rebuild your model to include an FP32 placeholder only or add cast operations
21 Only TensorFlow models with FP32 weights are supported. Model Optimizer Rebuild your model to have FP32 weights only
23 Embedded preprocessing in Caffe models is not supported and is ignored. Model Optimizer Pass preprocessing parameters through MO CLI parameters
24 Releasing the plugin's pointer before inference completion might cause a crash. Inference Engine Release the plugin pointer at the end of the application, when inference is done.
25 FP11 bitstreams can be programmed to the boards using the flash approach only. Inference Engine Use the instructions in the FPGA installation guide
26 If Intel OpenMP was initialized before OpenCL, OpenCL will hang. This means initialization or executing the FPGA will hang, too. Inference Engine Initialize FPGA or Heterogeneous with the FPGA plugin priority before the CPU plugin.
27 The performance of the first iteration of the samples for networks executing on FPGA is much lower than the performance of the next iterations. Inference Engine Use the -ni <number> and -pc options to get the real performance of inference on FPGA.
28 To select the best bitstream for a custom network, evaluate all available bitstreams and choose the bitstream with the best performance and accuracy. Use validation_app to collect accuracy and performance data for the validation dataset. Inference Engine  
29 The setBatch method works only for topology which has batch as first dimension for all tensors Inference Engine  
30 Multiple OpenMP runtime initialization is possible if you are using MKL and Inference Engine simultaneously Inference Engine Use a preloaded iomp5 dynamic library
  Resize feature works only on machines which support SSE4.2 instruction set Inference Engine N/A
31 The Completion Callback is called only in case of successful execution of the infer request Inference Engine Use Wait to get notified about errors in the infer request
32 GPU plugin does not deallocate memory used during graph compilation Inference Engine  
33 While loading extension modules, the Model Optimizer reports a "No module named 'extensions.<module_name>'" internal error and does not load any extensions from the specified directory. This happens only if you use the --extensions command line option with a directory whose base name is extensions but that is not the <INSTALL_DIR>/deployment_tools/model_optimizer/extensions directory. Model Optimizer Use a different base name for the directory with custom extensions.
35

If you have the Intel® Media Server Studio installed on your CentOS* 7.4 machine, the installation of the OpenCV dependencies may cause a libva.so version conflict.

Installation

Remove libva and reinstall it manually from the Intel Media Server Studio RPM package:

# yum remove libva
# yum install ffmpeg-libs
# rpm -ivh /path/to/mss/distributive/libva-*.rpm
36

Model Optimizer supports RFCN Models from the TensorFlow* Object Detection API version 1.9.0 or lower

Model Optimizer

Freeze the model using the TensorFlow* and TensorFlow* Object Detection API version 1.9.0.

37

Intel® Movidius™ Neural Compute Stick shows significant performance degradation for batch sizes of 3 and more

Inference Engine N/A
38

MYRIAD plugin does not support the Caffe* version of Inception-Resnet-v2. It fails with the following error message:
[VPU] Internal error : inception_resnet_v2_a1_residual_eltwise doesn't support coeff values different from 1

Inference Engine N/A
39

Low-precision 8-bit integer inference on CPU might have problems with calibration and a significant accuracy drop for a calibrated ResNet* model if the model is converted with the default options of the Model Optimizer.

Inference Engine Use --disable_resnet_optimization for ResNet models which are going to be calibrated and executed in low-precision 8-bit integer inference on CPU
 

Shape Inference for the Reshape layer might not work correctly for TensorFlow models if its shape and parameters dynamically depend on other layers (for example, for the pre-trained vehicle-license-plate-detection-barrier-0107 model).

Inference Engine Generate IR using MO with --input_shape option
  Models with fixed dimensions in the `dim` attribute of the Reshape layer can't be resized. Inference Engine Generate IR using MO with --input_shape option
 

Shape inference for Interp layer works for almost all cases, except for Caffe models with fixed width and height parameters (for example, semantic-segmentation-adas-0001).

Inference Engine Generate IR using MO with --input_shape option
 

Keyboard layout issue when running the GUI installer from a VNC client (the user can experience symbol mismatches when typing text in the installer GUI due to a Qt and VNC compatibility issue)

Installer Use the CLI installer version by running ./install.sh from the package directory instead of the GUI installer (./install_GUI.sh) 
  Media SDK samples build failure due to an issue with the environment. Media SDK

Run the following command:

export PKG_CONFIG_PATH=/opt/intel/mediasdk/lib64/pkgconfig:$PKG_CONFIG_PATH

 

  Computer Vision Algorithms (CVA) component does not work on Windows* OS because of an environment problem. Computer Vision Algorithms Download the fixed CVA component.

Included in This Release

The Intel® Distribution of OpenVINO™ toolkit is available in three versions:

  • Intel® Distribution of OpenVINO™ toolkit for Windows*
  • Intel® Distribution of OpenVINO™ toolkit for Linux*
  • Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA Support
Install Location/File Name Description
Deep Learning Model Optimizer Model optimization tool for your trained models
Deep Learning Inference Engine Unified API to integrate the inference with application logic
OpenCV* OpenCV Community version compiled for Intel hardware. Includes PVL libraries for computer vision
Intel® Media SDK libraries (open source version) Eases the integration between the OpenVINO™ toolkit and the Intel® Media SDK.
Intel OpenVX* runtime Intel's implementation of the OpenVX* runtime optimized for running on Intel® hardware (CPU, GPU, IPU)
Intel® Graphics Compute Runtime for OpenCL™  Enables OpenCL™ on the GPU/CPU for Intel® processors
Intel® FPGA Deep Learning Acceleration Suite, including pre-compiled bitstreams Implementations of the most common CNN topologies to enable image classification and ease the adoption of FPGAs for AI developers. Includes pre-compiled bitstream samples for the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA and the Arria® 10 GX FPGA Development Kit.
Intel® FPGA SDK for OpenCL™ software technology The Intel® FPGA RTE for OpenCL™ provides utilities, host runtime libraries, drivers, and RTE-specific libraries and files
Intel® Distribution of OpenVINO™ toolkit documentation Developer guides and other documentation. Available from the Intel® Distribution of OpenVINO™ toolkit product site
Open Model Zoo A set of pre-trained models (prototxt) and pre-generated Intermediate Representation files. You can use these for demonstrations, to help you learn the product, or for product development.
Computer Vision Samples Samples that illustrate the use of the Inference Engine, OpenCV*, and OpenVX*, and the creation of computer vision applications with them.
Computer Vision Algorithms (CVA) Highly Optimized Computer Vision Algorithms

 

Where to Download This Release

Get It Now

 

 

 

System Requirements

Development Platform

Hardware

  • 6th-8th Generation Intel® Core™
  • Intel® Xeon® v5 family
  • Intel® Xeon® v6 family

Operating Systems

  • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
  • CentOS* 7.4, 64-bit
  • Windows* 10, 64-bit

Target Platform (choose one processor with one corresponding operating system)

Your requirements may vary, depending on which product version you use.

Intel® CPU processors with corresponding operating systems

  • 6th-8th Generation Intel® Core™ and Intel® Xeon® processor with operating system options:
    • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit
    • Windows* 10, 64-bit
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
    • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
    • Yocto Project* Poky Jethro* v2.0.3, 64-bit

Intel® Integrated Graphics processors with corresponding operating systems

NOTE: This installation requires drivers that are not included in the Intel® Distribution of OpenVINO™ toolkit package

  • 6th - 8th Generation Intel® Core™ processor with Intel® Iris® Pro graphics and Intel® HD Graphics
    • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit
  • 6th - 8th Generation Intel® Xeon® processor with Intel® Iris® Pro graphics and Intel® HD Graphics

    NOTE: A chipset that supports processor graphics is required for Intel® Xeon® processors. Processor graphics are not included in all processors. See https://ark.intel.com/ for information about your processor.

    • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
    • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
    • Yocto Project* Poky Jethro* v2.0.3, 64-bit

Intel® FPGA processors with corresponding operating systems

NOTES:
  • Only for the Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA Support.
  • OpenCV* and OpenVX* functions must be run against the CPU or Intel® Integrated Graphics to get all required drivers and tools.

  • Intel® Arria® 10 GX FPGA Development Kit and Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
    • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
    • CentOS* 7.4, 64-bit

Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, and  Intel® Vision Accelerator Design with Intel® Movidius™ VPUs with corresponding operating systems

  • Ubuntu* 16.04.3 long-term support (LTS), 64-bit
  • CentOS* 7.4, 64-bit
  • Windows* 10, 64-bit

Helpful Links

Note: Links open in a new window.

OpenVINO™ toolkit Home Page

 

Legal Information

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at http://www.intel.com/ or from the OEM or retailer.

No computer system can be absolutely secure.

Intel, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

*Other names and brands may be claimed as the property of others.

Copyright © 2018, Intel Corporation. All rights reserved.