How to port your application from Intel® Computer Vision SDK 2017 R3 Beta to OpenVINO™ Toolkit.

The Open Visual Inference & Neural network Optimization (OpenVINO™) toolkit (formerly the Intel® Computer Vision SDK) - a set of tools and libraries that allows developers to accelerate their computer vision algorithms on different Intel® hardware units (CPU, Intel® Processor Graphics, Intel® FPGA, Intel® Movidius™ Myriad™ 2) - has been released! The OpenVINO™ toolkit introduces many changes compared with the Intel® Computer Vision SDK. If you are already familiar with the Intel® Computer Vision SDK 2017 R3 Beta and use it in your projects, this post will help you port your application to the OpenVINO™ toolkit 2018 R1.1.

The OpenVINO™ toolkit has four main components: two tools for Deep Learning inference (the Model Optimizer and the Inference Engine) and two for traditional computer vision approaches (OpenCV* binaries and a runtime for OpenVX*). The changes mostly apply to the Model Optimizer and the Inference Engine. Let's go through each of them to look at the major differences between the latest and previous releases. The full list of what's new in the OpenVINO™ toolkit 2018 R1.1 release is described in the Release Notes.

Model Optimizer

The Model Optimizer tool converts a trained model from a framework-specific format to the Intermediate Representation (IR) - a special format used by the Inference Engine. The Model Optimizer has been completely redesigned to become a user-friendly Python* application that supports three frameworks: Caffe*, TensorFlow*, and MXNet*. Moreover, you can now convert models on Windows* OS.

The installation and configuration have become much easier: previously, you had to download and build Caffe sources with special "adapters" from the Intel® Computer Vision SDK/Deep Learning Deployment Toolkit package, but these steps are no longer required. All you need to do to use the Model Optimizer with all frameworks is install 64-bit Python* 3.4 or higher and run the following script:

cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites/
sudo ./install_prerequisites.sh
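
If you plan to work with only one framework, the same folder also contains per-framework scripts that install a smaller set of dependencies. The names below are the ones shipped with the Linux* package of 2018 R1.1, but double-check the contents of the install_prerequisites folder in your release:

sudo ./install_prerequisites_caffe.sh
sudo ./install_prerequisites_tf.sh
sudo ./install_prerequisites_mxnet.sh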

On Linux* OS, the script installs the requirements into an isolated Python* virtual environment. Do not forget to source it when using the Model Optimizer:

source ./venv/bin/activate 
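
For example, a minimal session (assuming the ./venv directory is relative to your current directory, i.e. wherever the virtual environment was created) activates the environment, runs the conversion, and deactivates it afterwards:

source ./venv/bin/activate
python3 mo.py --framework caffe --input_model <path_to_input_model>/alexnet.caffemodel
deactivate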

The command line parameters have also changed between the Intel® Computer Vision SDK 2017 R3 and the OpenVINO™ toolkit 2018 R1.1. Let's compare the old and new command lines for converting an AlexNet* network model trained with Caffe* to the Intermediate Representation (IR).

Old Model Optimizer:

./ModelOptimizer -i -w <path_to_caffemodel>/alexnet.caffemodel -d <path_to_prototxt>/alexnet.prototxt -f 1 --target XEON

The new Model Optimizer package contains four scripts: mo.py, mo_caffe.py, mo_tf.py, and mo_mxnet.py.

mo.py works with all frameworks, but if you work with only one of them, you can use the mo_caffe.py, mo_tf.py, or mo_mxnet.py script instead.

So, to convert the AlexNet* model, you can use one of these two variants:

sudo python3 mo.py --framework caffe --input_model <path_to_input_model>/alexnet.caffemodel

or

sudo python3 mo_caffe.py --input_model <path_to_input_model>/alexnet.caffemodel
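
The conversion produces an .xml file with the network topology and a .bin file with the weights. A few optional flags control the output; the example below is a sketch using --output_dir, --model_name, and --data_type, which are available in this release, but run python3 mo.py --help to confirm the exact option set in your package:

sudo python3 mo_caffe.py --input_model <path_to_input_model>/alexnet.caffemodel --output_dir <path_to_output_dir> --model_name alexnet_fp16 --data_type FP16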

The new OpenVINO™ toolkit package also contains a special translator that converts an old Model Optimizer command line to the new format and executes it.

Examples of how to use the new Model Optimizer:

  • For TensorFlow:
    sudo python3 mo.py --framework tf --input_model <path_to_input_model>/vgg_16.pb -b 1
    

    or

    sudo python3 mo_tf.py --input_model <path_to_input_model>/vgg_16.pb -b 1
  • For MXNet:
    sudo python3 mo.py --framework mxnet --input_model <path_to_input_model>/nst_vgg19-0000.params --input_shape [1,3,224,224]

    or

    sudo python3 mo_mxnet.py --input_model <path_to_input_model>/nst_vgg19-0000.params --input_shape [1,3,224,224]

A more detailed explanation of Model Optimizer usage can be found in the Model Optimizer Guide.

Inference Engine

The Inference Engine is an API for integrating models in the Intermediate Representation format and accelerating inference on Intel® hardware units (CPU, Intel® Processor Graphics, Intel® FPGA, Intel® Movidius™ Myriad™ 2).

The Inference Engine library from the OpenVINO™ toolkit 2018 R1.1 is not fully compatible with the Intel® Computer Vision SDK 2017 R3. Backward compatibility is declared starting with the OpenVINO™ toolkit 2018 R1.1.
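
For orientation, here is a minimal sketch of how an IR produced by the Model Optimizer is read with the new API; the file names are placeholders, and the include path already reflects the header move described in the first item below:

#include <cpp/ie_cnn_net_reader.h>

// Read the topology (.xml) and the weights (.bin) produced by the Model Optimizer
InferenceEngine::CNNNetReader netReader;
netReader.ReadNetwork("alexnet.xml");
netReader.ReadWeights("alexnet.bin");
InferenceEngine::CNNNetwork network = netReader.getNetwork();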

The biggest API changes are:

  1. Some headers, such as ie_cnn_net_reader.h and ie_plugin_cpp.hpp, were moved to the cpp/ folder.
    • Old Inference Engine:
    #include <ie_cnn_net_reader.h>
    #include <ie_plugin_cpp.hpp>
    • New Inference Engine:
    #include <cpp/ie_cnn_net_reader.h>
    #include <cpp/ie_plugin_cpp.hpp>
  2. Precisions (FP32, FP16, etc.) were moved from the global scope into the Precision class.
    • Old Inference Engine:
    item.second->setPrecision(FP32);
    • New Inference Engine:
    item.second->setPrecision(Precision::FP32);

    Constructions like the PrecisionName(precision) function for converting a precision to a string should be replaced with precision.name(). A sketch of the surrounding loop is shown after this list.

  3. The extension mechanism has also changed. In the samples folder, you can find a library with "standard extensions" for CPU - layers that are not included in the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) plugin, such as ArgMax, Resample, and PriorBox (check the full list of standard extensions here). To use these layers, include ext_list.hpp and add the extensions from the list to the plugin:

     

    #include "ext_list.hpp"
    
    .......................
    .......................
    
    if ((deviceName.find("CPU") != std::string::npos)) {
                    plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
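
As referenced in item 2 above, here is a minimal sketch of where the item.second->setPrecision(...) call typically lives: a loop over the input and output info of the CNNNetwork obtained from the CNNNetReader. The FP32 values are only an example; pick the precisions your application needs.

// Set the desired precision on every input and output of the network
InferenceEngine::InputsDataMap inputsInfo = network.getInputsInfo();
for (auto &item : inputsInfo) {
    item.second->setPrecision(InferenceEngine::Precision::FP32);
}

InferenceEngine::OutputsDataMap outputsInfo = network.getOutputsInfo();
for (auto &item : outputsInfo) {
    item.second->setPrecision(InferenceEngine::Precision::FP32);
}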

If you want to add your own custom layers, please take a look at the custom layer guides in the documentation.

The OpenCV* and OpenVX* libraries are backward compatible.