Intel® FPGA AI Suite: Getting Started Guide

ID 768970
Date 2/02/2024
Public



6.12.1. Preparing a YOLOv3 Model

As stated in Preparing a Model, a model must be converted from its original framework format (such as TensorFlow, Caffe, or PyTorch) into a pair of .bin and .xml files before the Intel® FPGA AI Suite compiler (the dla_compiler command) can ingest it.

The following commands download the YOLOv3 TensorFlow model and run Model Optimizer:
source ~/build-openvino-dev/openvino_env/bin/activate
omz_downloader --name yolo-v3-tf \
    --output_dir $COREDLA_WORK/demo/models/
omz_converter --name yolo-v3-tf \
    --download_dir $COREDLA_WORK/demo/models/ \
    --output_dir $COREDLA_WORK/demo/models/

These commands create the .bin and .xml files in the $COREDLA_WORK/demo/models/public/yolo-v3-tf/FP32/ directory.
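
To confirm that the conversion succeeded, you can list the generated Intermediate Representation (IR) files. The following check is a minimal sketch; the file names shown in the comment (yolo-v3-tf.xml and yolo-v3-tf.bin) assume the default Open Model Zoo naming and should be verified on your system:

# List the converted IR files.
# Expected files (assumed default names): yolo-v3-tf.xml and yolo-v3-tf.bin
ls -l $COREDLA_WORK/demo/models/public/yolo-v3-tf/FP32/

The .xml file describes the network topology and the .bin file holds the weights; this pair is what the dla_compiler command consumes in the subsequent compilation steps.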