Steps to convert custom SSD MobileNet V2 model to IR using Model Optimizer
A pre-trained SSD MobileNetV2 model converts to IR successfully, but a custom-trained model fails to convert.
- Exported frozen model graph:
python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path={PIPELINE_CONFIG_PATH} \
    --output_directory="exported_model" \
    --trained_checkpoint_prefix="/content/models/research/helmet_detector/model.ckpt-10000"
- Attempted to convert the frozen model graph to IR using Model Optimizer:
python mo_tf.py \
    --input_model ./exported_model/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config ./helment_detector_tf1.config \
    --input_shape [1,300,300,3] \
    --reverse_input_channels \
    --output_dir output_ncs \
    --data_type FP16
- Encountered error:
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (): Unexpected exception happened during extracting attributes for node StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/map/while. Original exception message: '^Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/SortByField/Assert/Assert'
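Before adjusting the conversion command, it can help to confirm which nodes the exported frozen graph actually contains. The snippet below is a minimal sketch, assuming TensorFlow 1.x is installed and using the export path from the command above; it lists the graph inputs and any NonMaxSuppression-related nodes, which are the parts the SSD support transformation file expects to find.

import tensorflow as tf  # TF 1.x API

GRAPH_PATH = "exported_model/frozen_inference_graph.pb"  # path from the export step above

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(GRAPH_PATH, "rb") as f:
    graph_def.ParseFromString(f.read())

# Print the graph inputs and postprocessing nodes mentioned in the error message.
for node in graph_def.node:
    if node.op == "Placeholder" or "BatchMultiClassNonMaxSuppression" in node.name:
        print(node.op, node.name)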
In the Model Optimizer conversion command, use the custom operations configuration file that matches SSD MobileNetV2 models trained with the TensorFlow* 1 Object Detection API, ssd_support_api_v1.15.json, instead of ssd_v2_support.json. Refer to the following page for conversion instructions: How to Convert a Model
python3 ./openvino/model-optimizer/mo_tf.py \
    --input_model ./detector/exported_model/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config ./openvino/model-optimizer/extensions/front/tf/ssd_support_api_v1.15.json \
    --tensorflow_object_detection_api_pipeline_config ./detector/helment_detector_tf1.config \
    --input_shape [1,300,300,3] \
    --reverse_input_channels \
    --data_type FP16
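After conversion completes, the generated IR can be loaded as a quick sanity check. The snippet below is a minimal sketch, assuming an OpenVINO 2021.x release that ships the Inference Engine Python API; the IR file names follow the default naming from the command above, and the device name ("CPU" here, "MYRIAD" for an Intel Neural Compute Stick 2) is illustrative.

from openvino.inference_engine import IECore

ie = IECore()
# Read the IR produced by Model Optimizer (names/paths are illustrative).
net = ie.read_network(model="frozen_inference_graph.xml",
                      weights="frozen_inference_graph.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Print the network's input and output names to confirm the IR loaded correctly.
print("Inputs:", list(net.input_info.keys()))
print("Outputs:", list(net.outputs.keys()))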