Unable to use the Post-Training Optimization Tool (POT) to optimize a TensorFlow (TF) or MXNet model for inference with the OpenVINO™ toolkit on an Intel Atom® platform.
Choose one of two options:
$ python3 -m pip install virtualenv
$ python3 -m virtualenv -p `which python3` <directory_for_environment>
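If the `virtualenv` package cannot be installed, the standard-library `venv` module creates an equivalent isolated environment. The sketch below uses an assumed directory name `pot_env` (substitute your own path) and shows the activation step that follows environment creation:

```shell
# Assumed directory name: pot_env (substitute your own path).
# venv ships with Python 3, so no separate pip install is needed.
python3 -m venv pot_env

# Activate the environment; subsequent python/pip calls stay inside it.
. pot_env/bin/activate

# Confirm the interpreter now resolves inside the environment.
python -c "import sys; print(sys.prefix)"

# Leave the environment when done.
deactivate
```

Installing toolkit dependencies inside such an environment keeps any custom-built, non-AVX TF or MXNet wheels from conflicting with system-wide packages.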
Refer to the following links to build from source:
Like MXNet, the TensorFlow (TF) wheels on PyPI are built with AVX instructions starting from version 1.6. The Intel Atom® E3950 processor supports SSE instructions but does not support AVX. As a result, importing TF or MXNet causes an illegal instruction error when POT is run on devices without AVX support.
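Before installing the prebuilt wheels, you can check whether the CPU advertises AVX at all. A minimal sketch for Linux, which parses the `flags` line of `/proc/cpuinfo` (the helper name `cpu_supports_avx` is ours):

```python
def cpu_supports_avx(cpuinfo_text: str) -> bool:
    """Return True if the 'flags' line of /proc/cpuinfo lists AVX.

    The text is passed in as a string so the check can also be run
    against canned samples; on a live Linux system, read
    /proc/cpuinfo and pass its contents here.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "avx" in flags
    return False

# On a live system:
#   with open("/proc/cpuinfo") as f:
#       supported = cpu_supports_avx(f.read())
```

On an Intel Atom® E3950 the `flags` line lists SSE variants but not `avx`, so the check returns False and the prebuilt TF/MXNet wheels should not be installed.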
POT itself does not depend directly on either TF or MXNet. It depends on Model Optimizer and Accuracy Checker, which may in turn depend on TF or MXNet. To reduce the impact of this issue, OpenVINO™ toolkit 2021.1 imports TensorFlow only where the library is actually used, such as when evaluating a model with TF as the backend. MXNet causes the same error in the same scenario, so the same deferred-import approach could be applied to it.
In practice, however, SSE-only systems such as Intel Atom® platforms are not used for calibration. Using Intel Atom® platforms for POT quantization is not recommended.