FPGA AI Suite: Getting Started Guide

ID 768970
Date 12/16/2024
Public

6.5. Preparing an Image Set

This section describes how to prepare an image set for classification graphs that require a 224x224 input size and have been trained on the ImageNet classes. For the yolo-v3-tf and yolo-v3-tiny-tf graphs, the FPGA AI Suite PCIe Example Design User Guide describes how to prepare an image set and ground truth data, and how to call the dla_benchmark application.

The FPGA AI Suite includes sample validation images in the $COREDLA_WORK/demo/sample_images/ folder.

The contents are as follows:

$COREDLA_WORK/demo/sample_images/
    Sample image directory. A file naming convention is used to provide a sort order for the images.

$COREDLA_WORK/demo/sample_images/ground_truth.txt
    Ground truth file.

$COREDLA_WORK/demo/sample_images/TF_ground_truth.txt
    Ground truth file suitable for graphs from the TensorFlow framework.

Image classification graphs trained in the TensorFlow framework require ground truth files that account for a difference in how TensorFlow numbers the output categories (an off-by-one difference). For this reason, the sample image set includes two ground truth files.
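As an illustration of the off-by-one relationship only, and assuming a ground truth file that contains one numeric class index per line with the TensorFlow-style indices shifted up by one, the two files could be related as shown below. Compare the bundled ground_truth.txt and TF_ground_truth.txt to confirm the format and the direction of the shift before relying on this sketch; the output file name is hypothetical.
# Hypothetical conversion: shift every class index up by one to match the
# TensorFlow category numbering. Verify against the bundled files first.
awk '{print $1 + 1}' ground_truth.txt > my_TF_ground_truth.txt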

Due to the small number of images in the sample image set, the dla_benchmark commands in this section specify -niter=8. This setting limits the number of inferences executed by the FPGA AI Suite IP.

To support performance benchmarking, the dla_benchmark demo generates random image data if it requires more input images than the image set provides.
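For reference, the flags discussed in this section can be combined as in the following sketch. The command is not complete: dla_benchmark also requires the compiled graph and architecture arguments shown in Performing Inference on the PCIe-Based Example Design, and the exact flag syntax should be taken from that section. The TensorFlow ground truth file is used here only as an example.
# Sketch only: append the remaining required dla_benchmark arguments
# (compiled graph, architecture file, and so on) from the example design guide.
dla_benchmark -niter=8 \
    -i=$COREDLA_WORK/demo/sample_images \
    -groundtruth_loc=$COREDLA_WORK/demo/sample_images/TF_ground_truth.txt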

Optionally, you can run the demo with the ILSVRC2012 image set. To use this image set, download and prepare it manually as follows:

  1. Go to https://image-net.org/download-images.php.
  2. Download the 2012 “Validation images” and the 2017 development kit.
  3. Convert the 2012 images to .bmp format by resizing each image so that its smallest dimension becomes 256 (preserving the aspect ratio) and then center-cropping it to 224x224.

The image preprocessing can be done with many different tools. One common Linux tool is ImageMagick. For details, refer to https://imagemagick.org/.

The following commands use ImageMagick to prepare the images:
# Extract the validation images into a working directory.
mkdir imagenet_jpg
cd imagenet_jpg
tar xf /path/to/ILSVRC2012_img_val.tar
mkdir ../imagenet
# Resize each image so that its smallest dimension is 256, center-crop it to
# 224x224, and write the result as a .bmp file into ../imagenet.
echo *.JPEG | xargs mogrify \
	-path ../imagenet \
	-resize '256x256^' \
	-gravity Center \
	-crop 224x224+0+0 \
	-format bmp
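To spot-check the conversion, you can run ImageMagick's identify tool on one of the converted images. The file name below follows the ILSVRC2012 validation image naming convention and is shown only as an example:
# The reported geometry should be 224x224 for every converted image.
identify ../imagenet/ILSVRC2012_val_00000001.bmp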
Create the TF_ground_truth.txt file from the 2017 development kit with these commands:
# Extract the development kit and work in its data directory.
tar zxf ILSVRC2017_devkit.tar.gz
cd ILSVRC/devkit/data
# Start from the devkit ground truth, then replace each devkit class ID with
# the class's position in the alphabetically sorted map_clsloc.txt listing,
# which matches the TensorFlow category numbering. The temporary leading
# underscore prevents an already-remapped value from being substituted again.
export cnt=1
cp ILSVRC2015_clsloc_validation_ground_truth.txt TF_ground_truth.txt
cat map_clsloc.txt | sort | (
    while read a; do
        orig=$(echo $a | awk '{print $2}')
        sed -i -e "s/^$orig\$/_$cnt/" TF_ground_truth.txt
        cnt=$(($cnt+1))
    done
)
# Remove the temporary underscore prefix.
sed -i -e 's/^_//' TF_ground_truth.txt
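To confirm that the remapping covered every image, you can check that the resulting file still has one label per validation image (the ILSVRC2012 validation set contains 50,000 images):
# Expect 50000 lines, one label per validation image.
wc -l TF_ground_truth.txt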

If you want the demo to use the ImageNet images instead of the sample images bundled with the FPGA AI Suite, make the following changes to the dla_benchmark example commands in Performing Inference on the PCIe-Based Example Design (see the sketch after this list):

  1. Set -niter=5000 instead of -niter=8.
  2. Set -groundtruth_loc to specify the TF_ground_truth.txt created earlier.
  3. Set -i to specify the directory containing the converted ILSVRC2012 .bmp images.

    The .bmp format is lossless and preserves as much detail as possible.
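Applied together, those changes give a command along the lines of the following sketch. The paths are placeholders for the locations you chose when converting the images and creating TF_ground_truth.txt, and all other options from the original example command stay unchanged:
# Sketch only: substitute your own paths and keep the remaining arguments
# from the original example command.
dla_benchmark -niter=5000 \
    -i=/path/to/imagenet \
    -groundtruth_loc=/path/to/ILSVRC/devkit/data/TF_ground_truth.txt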