The Open Neural Network Exchange (ONNX) is an open format for deep learning models. This tutorial explores the use of ONNX with version R4 of the Intel® Distribution of OpenVINO™ toolkit. It converts the SqueezeNet ONNX model into the two Intermediate Representation (IR) files, .bin and .xml, and then demonstrates how to use those IR files in an image classification application to classify an input image. SqueezeNet is a lightweight Convolutional Neural Network (CNN) for image classification that takes an image as input and classifies the major objects in the image into pre-defined categories.
The tutorial includes instructions for building and running the classification sample application on the UP Squared* Grove* IoT Development Kit and the IEI Tank* AIoT Developer Kit. The UP Squared* board and the IEI Tank platforms come preinstalled with an Ubuntu* 16.04.4 Desktop image and the Intel® Distribution of OpenVINO™ toolkit.
Figure 1 illustrates how the ONNX model flows through the Model Optimizer and the Inference Engine to produce the output image prediction.
Figure 1. SqueezeNet image classification flow
Hardware Requirements
The hardware components used in this project are listed below:
- UP Squared* Grove* IoT Development Kit or IEI Tank* AIoT Developer Kit
- A monitor with an HDMI interface and cable
- USB keyboard and mouse
- A network connection with Internet access, or the Wi-Fi kit for the UP Squared* board
Software Requirements
The software used in this project is listed below:
- Ubuntu* 16.04.4 Desktop image (pre-installed on the UP Squared* board and the IEI Tank).
- Intel® Distribution of OpenVINO™ toolkit. The boards come pre-installed with the R1 version 1.265; this tutorial uses the R4 version 4.420. Download the Intel® Distribution of OpenVINO™ toolkit R4 and follow the installation instructions for Linux*. The installation creates the directory structure shown in Table 1.
Table 1. Directories and Key Files in the Intel® Distribution of OpenVINO™ toolkit R4

Component | Location
--- | ---
Root directory | /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools
Model Optimizer | /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer
install_prerequisites_onnx.sh | /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/install_prerequisites
Build directory | /home/upsquared/inference_engine_samples
Binary directory | /home/upsquared/inference_engine_samples/intel64/Release
Demo scripts | /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/demo
Classification sample | /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/samples/classification_sample
Download the SqueezeNet ONNX Model
Download the SqueezeNet ONNX model version 1.3 and place it in ~/openvino_models/squeezenet1.3 on your UP Squared* board or IEI Tank.
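The commands below are a minimal sketch; the download URL is a placeholder, and the model is saved as squeezenet.onnx so the generated IR file names match the ones used later in this tutorial.

```
# Replace <squeezenet-1.3-onnx-url> with the actual download link for the
# SqueezeNet ONNX model version 1.3
mkdir -p ~/openvino_models/squeezenet1.3
cd ~/openvino_models/squeezenet1.3
wget <squeezenet-1.3-onnx-url> -O squeezenet.onnx
```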
Convert ONNX Model to IR
- Set the environment variables.
- Configure the Model Optimizer for ONNX.
- Convert the ONNX SqueezeNet model to optimized IR files using the Model Optimizer; squeezenet.xml and squeezenet.bin are generated in ~/openvino_models/squeezenet1.3. These steps are sketched below.
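The commands below are a minimal sketch of these steps, assuming the R4 directory layout from Table 1 and the model saved as squeezenet.onnx:

```
# Set the environment variables (ROOT_DIR matches the root directory in Table 1)
export ROOT_DIR=/opt/intel/computer_vision_sdk_2018.4.420/deployment_tools
source $ROOT_DIR/../bin/setupvars.sh

# Configure the Model Optimizer for ONNX (one-time step)
cd $ROOT_DIR/model_optimizer/install_prerequisites
sudo ./install_prerequisites_onnx.sh

# Convert the ONNX model; squeezenet.xml and squeezenet.bin are written
# to the output directory
cd $ROOT_DIR/model_optimizer
python3 mo_onnx.py --input_model ~/openvino_models/squeezenet1.3/squeezenet.onnx \
    --output_dir ~/openvino_models/squeezenet1.3
```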
Build the Image Classification Application
- Update the repository list and install prerequisite packages.
- Set environment variables.
- If the build directory exists, change into it; otherwise, create it first.
- Generate the makefile for a release build, without debug information, or generate the makefile with debug information.
- Build the image classification sample only, or build all samples.
- The build generates the classification_sample executable in the Release or Debug directory. If you make changes to the classification sample and want to rebuild it, be sure to delete the existing classification_sample binary first; the demo_squeezenet_download_convert_run.sh script does not rebuild if the binary exists. These steps are sketched below.
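The commands below sketch the build steps, assuming the sample sources and build directory from Table 1:

```
# Update the repository list and install prerequisite packages
sudo apt-get update
sudo apt-get install -y build-essential cmake

# Set the environment variables
source /opt/intel/computer_vision_sdk_2018.4.420/bin/setupvars.sh

# Create the build directory if it does not exist, then enter it
mkdir -p ~/inference_engine_samples
cd ~/inference_engine_samples

# Generate the makefile for a release build
# (use -DCMAKE_BUILD_TYPE=Debug instead for debug information)
cmake -DCMAKE_BUILD_TYPE=Release \
    /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/inference_engine/samples

# Build the image classification sample only ...
make classification_sample
# ... or build all samples:
# make
```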
Run the Classification Application
Run the application with -h to display all available options.
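For example, from the binary directory listed in Table 1:

```
cd ~/inference_engine_samples/intel64/Release
./classification_sample -h
```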
The classification application uses the following options:
- Path to the .xml file with the trained SqueezeNet classification model generated above
- Path to an image file
The classification application assumes the labels file has the same name as the IR files, but with the .labels extension. Copy the existing SqueezeNet labels file from $ROOT_DIR/demo/squeezenet1.1.labels to ~/openvino_models/squeezenet1.3/squeezenet.labels.
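Because the IR files generated above are named squeezenet.xml and squeezenet.bin, the copy looks like this (assuming ROOT_DIR is set as in the conversion step):

```
cp $ROOT_DIR/demo/squeezenet1.1.labels \
    ~/openvino_models/squeezenet1.3/squeezenet.labels
```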
Run the classification application:
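The command below is a sketch; car.png is the sample image that ships in the demo directory, so substitute your own input image as needed:

```
cd ~/inference_engine_samples/intel64/Release
./classification_sample -m ~/openvino_models/squeezenet1.3/squeezenet.xml \
    -i $ROOT_DIR/demo/car.png
```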
Interpret Detection Results
The application outputs the top 10 inference results. To get the top 15 results, add the option -nt 15 to the command line.
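For example, reusing the run command above:

```
./classification_sample -m ~/openvino_models/squeezenet1.3/squeezenet.xml \
    -i $ROOT_DIR/demo/car.png -nt 15
```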
The number shown in the first column is the class ID, which is one less than the corresponding line number in the labels file. For example, class ID 817 corresponds to line 818 in the labels file; "sports car, sport car" is the text at line 818, which names the objects recognized by the deep learning model.
Run Demo Script
The classification and security barrier camera demo scripts are located in /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/demo. Refer to README.txt for detailed instructions on how to use the demo scripts. You can use the command lines above and update the existing demo scripts to create your own classification demo script.
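For example, the bundled SqueezeNet demo script can be run as follows:

```
cd /opt/intel/computer_vision_sdk_2018.4.420/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh
```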
Troubleshooting
If you encounter an error when running the cmake command, make sure the environment variables are set correctly by entering source $ROOT_DIR/../bin/setupvars.sh in the terminal window that executes the cmake command.
Summary
This tutorial describes how to convert a deep learning ONNX model to optimized IR files and then how to use those IR files in the classification application to classify an input image. Feel free to try other deep learning ONNX models on the UP Squared* board and the IEI Tank.
Key References
- Intel® Developer Zone
- UP Squared* board
- IEI Tank* AIoT Developer Kit
- Intel® Distribution of OpenVINO™ toolkit
- Intel® Distribution of OpenVINO™ toolkit Release Notes
- Intel® Distribution of OpenVINO™ toolkit Forum
- Intel® Distribution of OpenVINO™ toolkit using ONNX
- Model Optimizer
About the Author
Nancy Le is a software engineer at Intel Corporation in the Core & Visual Computing Group, working on Intel Atom® processor enabling for Intel® Internet of Things (IoT) projects.