Article ID: 000087323 Content Type: Maintenance & Performance Last Reviewed: 02/01/2023

Inference Results Are Different between CPU Plugin and MYRIAD Plugin with the Same Intermediate Representation (IR) Model

Environment

Intel Neural Compute Stick 2

Summary

Methods to improve the inference results generated by the MYRIAD plugin

Description
  1. Modified the Object Detection SSD Python* Sample by extracting the output of the conv2/WithoutBiases layer.
  2. Executed the demo with the same input image and mobilenet-ssd model on the CPU plugin and on the MYRIAD plugin.
  3. Compared the two output images generated by the CPU plugin and the MYRIAD plugin using Beyond Compare (a programmatic comparison sketch follows this list).
  4. There were many differences (denoted by red dots) between the two output images.
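
For reference, the comparison can also be done programmatically. The sketch below is a minimal example, assuming the legacy OpenVINO Inference Engine Python API and placeholder file names (mobilenet-ssd.xml, mobilenet-ssd.bin, input.jpg); it runs the same IR on both plugins and reports the largest absolute difference between the raw output tensors instead of comparing rendered images.

    # Minimal sketch: run the same IR on the CPU and MYRIAD plugins and compare outputs.
    # Placeholder paths; adapt the preprocessing to what the model expects.
    import cv2
    import numpy as np
    from openvino.inference_engine import IECore

    model_xml = "mobilenet-ssd.xml"
    model_bin = "mobilenet-ssd.bin"

    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    # Optionally expose an intermediate layer, as in step 1 above:
    # net.add_outputs("conv2/WithoutBiases")

    input_name = next(iter(net.input_info))
    n, c, h, w = net.input_info[input_name].input_data.shape
    image = cv2.resize(cv2.imread("input.jpg"), (w, h))
    blob = image.transpose(2, 0, 1)[np.newaxis, :].astype(np.float32)

    results = {}
    for device in ("CPU", "MYRIAD"):
        exec_net = ie.load_network(network=net, device_name=device)
        output = exec_net.infer({input_name: blob})
        results[device] = next(iter(output.values())).astype(np.float32)

    diff = np.abs(results["CPU"] - results["MYRIAD"])
    print("Maximum absolute difference:", diff.max())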
Resolution

An accuracy difference between target platforms is expected, but the deviation from the reference metrics should be within 1%.

Choose one of the following two methods to improve the inference results generated by the MYRIAD plugin:

Method 1:

  • Disable MYRIAD hardware acceleration in the source code.

    from openvino.inference_engine import IECore

    ie = IECore()
    # Disable hardware acceleration on the MYRIAD device before loading the network.
    ie.set_config({'MYRIAD_ENABLE_HW_ACCELERATION': 'NO'}, "MYRIAD")
    net = ie.read_network(model=model_xml, weights=model_bin)
    exec_net = ie.load_network(network=net, device_name="MYRIAD")
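
    Note that disabling hardware acceleration typically lowers inference throughput on the MYRIAD device, so verify that the accuracy improvement justifies the performance cost for your application.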

Method 2:

  • Re-generate the IR model with the Model Optimizer, specifying a scale value (up to 255), as shown below.

    python mo.py --input_model <model> --scale <value>
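
    As an illustration only, if the model was trained on raw 8-bit pixel values, a scale of 255 normalizes inputs to the [0, 1] range; the model file name below is a placeholder, and the correct scale depends on the preprocessing the model expects.

    python mo.py --input_model mobilenet-ssd.caffemodel --scale 255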

Additional information

Refer to How to Compare in the Picture Compare View using Beyond Compare.

Refer to Differences Between Two Images to detect and visualize differences between two images.
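
As an alternative to an image-comparison tool, a short script along the following lines can highlight the differing pixels between the two saved output images (file names are placeholders, and both images are assumed to have the same dimensions):

    # Sketch: mark pixels that differ between the CPU and MYRIAD output images.
    import cv2
    import numpy as np

    cpu_img = cv2.imread("output_cpu.png")
    myriad_img = cv2.imread("output_myriad.png")

    diff = cv2.absdiff(cpu_img, myriad_img)
    mask = np.any(diff > 0, axis=2)          # True where any channel differs

    highlighted = cpu_img.copy()
    highlighted[mask] = (0, 0, 255)          # mark differing pixels in red
    cv2.imwrite("differences.png", highlighted)
    print("Differing pixels:", int(mask.sum()))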

Related Products

This article applies to the Intel Neural Compute Stick 2.