Optimizing neural network models for execution on the Inference Engine.
An ONNX* model can be converted into the IR format using the Model Optimizer tool. It is unclear whether there is a way to convert from the IR format back to the ONNX* file format.
The OpenVINO™ workflow does not support converting from the IR format back to ONNX* or any other file format. Model Optimizer loads a model into memory, reads it, builds an internal representation of the model, optimizes it, and produces the IR format, which is the only format the Inference Engine accepts.
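
As a sketch of this forward-only workflow, the command below converts an ONNX* model to IR with the Model Optimizer command-line tool. The file names and output directory (model.onnx, ir_output) are placeholders, not values from this article; mo.py is found under the deployment_tools/model_optimizer directory of the OpenVINO™ installation.

    # Convert an ONNX* model to IR; this produces model.xml (topology)
    # and model.bin (weights) in the output directory.
    python3 mo.py --input_model model.onnx --output_dir ir_output

The resulting IR pair can then be read by the Inference Engine. A minimal Python sketch, assuming the openvino.inference_engine API and the placeholder file names above:

    from openvino.inference_engine import IECore

    # Read the IR produced by Model Optimizer (file names are placeholders).
    ie = IECore()
    net = ie.read_network(model="ir_output/model.xml",
                          weights="ir_output/model.bin")

    # Load the network onto a target device. The IR is the only format
    # the Inference Engine accepts, and there is no reverse (IR to ONNX*)
    # conversion step in the workflow.
    exec_net = ie.load_network(network=net, device_name="CPU")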