FPGA AI Suite: Getting Started Guide

ID 768970
Date 11/25/2024

6.12.3. Inference on YOLOv3 and Calculating Accuracy Scores

To run inference on YOLOv3 and calculate the mAP and COCO AP scores, run the following commands:
cd $COREDLA_WORK/runtime
./build_Release/dla_benchmark/dla_benchmark \
   -b=1 \
   -niter=5000 \
   -m=$COREDLA_WORK/demo/models/public/yolo-v3-tf/FP32/yolo-v3-tf.xml \
   -d=HETERO:FPGA,CPU \
   -i=./coco-images/val2017 \
   -plugins=./plugins.xml \
   -arch_file=$COREDLA_ROOT/example_architectures/AGX7_Performance.arch \
   -yolo_version=yolo-v3-tf \
   -api=async \
   -groundtruth_loc=./coco-images/groundtruth \
   -nireq=4 \
   -enable_object_detection_ap \
   -perf_est \
   -bgr
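
A full pass with -niter=5000 covers the 5000 images in the COCO val2017 validation set and can take some time to complete. As an optional sanity check that is not part of the documented flow, you can first run the same command with a small iteration count to confirm that the pipeline executes end to end (AP scores from such a truncated run are not meaningful and should be discarded):

./build_Release/dla_benchmark/dla_benchmark \
   -b=1 \
   -niter=8 \
   -m=$COREDLA_WORK/demo/models/public/yolo-v3-tf/FP32/yolo-v3-tf.xml \
   -d=HETERO:FPGA,CPU \
   -i=./coco-images/val2017 \
   -plugins=./plugins.xml \
   -arch_file=$COREDLA_ROOT/example_architectures/AGX7_Performance.arch \
   -yolo_version=yolo-v3-tf \
   -api=async \
   -groundtruth_loc=./coco-images/groundtruth \
   -nireq=4 \
   -enable_object_detection_ap \
   -perf_est \
   -bgr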

The dla_benchmark command prints the mAP and COCO AP scores and also saves them to a text file named ap_report.txt in the current working directory.
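
As a quick check that the run completed, you can print the saved report from the shell. This assumes that you launched dla_benchmark from $COREDLA_WORK/runtime as shown above, so the report is written to that directory:

cat $COREDLA_WORK/runtime/ap_report.txt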

To enable the accuracy checking routine for object detection graphs such as YOLOv3, use the -enable_object_detection_ap option of the dla_benchmark command (passing -enable_object_detection_ap=1 is equivalent). This flag directs the dla_benchmark command to calculate the mAP and COCO AP scores for object detection graphs.

Also, specify the version of the YOLO graph that you provide to the dla_benchmark command with the -yolo_version=yolo-v3-tf option.

The input images folder is specified with the -i=./coco-images/val2017 option, and the ground truth annotations are specified with the -groundtruth_loc=./coco-images/groundtruth option. If you saved the images or the ground truth annotations to locations other than the ones used in this tutorial, update these parameters to point to the correct locations.
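
Before starting a long benchmark run, you can confirm from $COREDLA_WORK/runtime that both paths resolve. The listing below uses the tutorial's default locations; substitute your own paths if they differ:

ls ./coco-images/val2017 | head -n 5
ls ./coco-images/groundtruth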