FPGA AI Suite: Getting Started Guide

ID 768970
Date 9/06/2024
Public

6.12.2. Preparing a COCO Validation Dataset and Annotations

Use the publicly available COCO 2017 validation images as input to the model and the COCO 2017 annotations as the ground truth.

You can download the images from the following URL: http://images.cocodataset.org/zips/val2017.zip.

You can download the annotations from the following URL: http://images.cocodataset.org/annotations/annotations_trainval2017.zip.
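Both archives are large, so an interrupted download can leave a truncated .zip that fails during extraction. A minimal sketch of an integrity check before unzipping (the `archive_ok` helper is illustrative, not part of the FPGA AI Suite):

```python
# Sketch: verify that a downloaded .zip archive is intact before extracting it.
# archive_ok is a hypothetical helper, not an FPGA AI Suite utility.
import zipfile


def archive_ok(path):
    """Return True if the archive opens and every member passes a CRC check."""
    try:
        with zipfile.ZipFile(path) as zf:
            # testzip() returns None when all members are intact,
            # otherwise the name of the first corrupt member.
            return zf.testzip() is None
    except (OSError, zipfile.BadZipFile):
        # Missing file or a file that is not a valid zip archive.
        return False
```

For example, `archive_ok("val2017.zip")` returns False for a partially downloaded archive, in which case re-run the wget command before extracting.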

  1. Build the runtime with the following commands:
    cd $COREDLA_WORK/runtime
    rm -rf build_Release
    ./build_runtime.sh -target_de10_agilex
  2. Download and extract both .zip files into the coco-images directory:
    cd $COREDLA_WORK/runtime
    mkdir coco-images
    cd coco-images
    wget http://images.cocodataset.org/zips/val2017.zip
    wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
    unzip annotations_trainval2017.zip
    unzip val2017.zip
  3. The dla_benchmark application accepts ground-truth files only in plain-text format, so use the convert_annotations.py script to set up the groundtruth directory as follows:
    cd $COREDLA_WORK/runtime/coco-images
    mkdir groundtruth
    python3 \
      ../dla_benchmark/convert_annotations.py \
      annotations/instances_val2017.json \
      groundtruth
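The steps above convert the COCO instances JSON into per-image plain-text files. As a rough illustration of that kind of conversion, the sketch below reads a COCO-style annotations file and writes one text file per image, with one object per line; the actual output format produced by convert_annotations.py ships with the runtime and may differ:

```python
# Hedged sketch of a COCO JSON -> plain-text ground-truth conversion.
# This illustrates the general idea only; the real convert_annotations.py
# in $COREDLA_WORK/runtime/dla_benchmark defines the authoritative format.
import json
import os


def convert(coco_json_path, out_dir):
    """Write one <image_stem>.txt per image, one 'category x y w h' line per object."""
    with open(coco_json_path) as f:
        coco = json.load(f)
    os.makedirs(out_dir, exist_ok=True)

    # Map image id -> file name so each ground-truth file matches its image.
    names = {img["id"]: img["file_name"] for img in coco["images"]}

    # Group annotation lines by image id.
    lines = {}
    for ann in coco["annotations"]:
        x, y, w, h = ann["bbox"]
        lines.setdefault(ann["image_id"], []).append(
            f'{ann["category_id"]} {x} {y} {w} {h}'
        )

    for image_id, rows in lines.items():
        stem = os.path.splitext(names[image_id])[0]
        with open(os.path.join(out_dir, stem + ".txt"), "w") as f:
            f.write("\n".join(rows) + "\n")
```

After conversion, the groundtruth directory holds one text file per annotated validation image, which is the layout dla_benchmark expects for its accuracy measurement.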