Visible to Intel only — GUID: bim1661605671876
Ixiasoft
1. Intel® FPGA AI Suite PCIe-based Design Example User Guide
2. About the PCIe*-based Design Example
3. Getting Started with the Intel® FPGA AI Suite PCIe*-based Design Example
4. Building the Intel® FPGA AI Suite Runtime
5. Running the Design Example Demonstration Applications
5.1. Exporting Trained Graphs from Source Frameworks
5.2. Compiling Exported Graphs Through the Intel FPGA AI Suite
5.3. Compiling the PCIe*-based Example Design
5.4. Programming the FPGA Device (Intel® Arria® 10)
5.5. Programming the FPGA Device (Intel Agilex® 7)
5.6. Performing Accelerated Inference with the dla_benchmark Application
5.7. Running the Ported OpenVINO™ Demonstration Applications
6. Design Example Components
7. Design Example System Architecture for the Intel PAC with Intel® Arria® 10 GX FPGA
A. Intel® FPGA AI Suite PCIe-based Design Example User Guide Archives
B. Intel® FPGA AI Suite PCIe-based Design Example User Guide Document Revision History
5.6.2.3. Example of Inference on Object Detection Graphs
The following example makes these assumptions:
- The Model Optimizer IR graph.xml for either YOLOv3 or TinyYOLOv3 is in the current working directory.
- The validation images downloaded from the COCO website are placed in the ./mscoco-images directory.
- The JSON annotation file is downloaded and unzipped in the current directory.
To compute accuracy scores over many images, increase the number of iterations with the -niter flag rather than using a large batch size (-b). The product of the batch size and the number of iterations must be less than or equal to the number of images that you provide.
cd $COREDLA_ROOT/runtime/build_Release
python ./convert_annotations.py ./instances_val2017.json \
    ./groundtruth
./dla_benchmark/dla_benchmark \
    -b=1 \
    -niter=5000 \
    -m=./graph.xml \
    -d=HETERO:FPGA,CPU \
    -i=./mscoco-images \
    -plugins_xml_file=./plugins.xml \
    -arch_file=../../example_architectures/A10_Performance.arch \
    -yolo_version=yolo-v3-tf \
    -api=async \
    -groundtruth_loc=./groundtruth \
    -nireq=4 \
    -enable_object_detection_ap \
    -perf_est \
    -bgr