
2.6.4.5. Input Layout Transform Hardware

The input tensor layout transform and folding operations described in this section can be performed by the FPGA AI Suite IP when the layout transform is enabled in the IP architecture file.

The hardware implementation assumes that the input tensors are in HWC format and that the data elements are in either FP16 or U8 format. The input transform hardware supports input folding for any feature, stride, and padding values.

When active, the layout transform hardware folds the input tensor and converts it to the CHWCvec format as described in Input Feature Tensor In-Memory Format. If configured for U8 inputs, the data elements are also converted to FP16 format before the tensors are sent downstream for inference.
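
The following NumPy sketch illustrates, purely in software, the effect of the folding and layout conversion; it is not the hardware algorithm. It assumes that folding is a space-to-depth rearrangement by the first layer's stride and that CHWCvec groups the folded channels into vectors of C_VEC elements. The STRIDE and C_VEC values shown are hypothetical; the authoritative layout definition is in Input Feature Tensor In-Memory Format.

import numpy as np

# Illustrative parameters (hypothetical values, not taken from this manual):
H, W, C = 224, 224, 3     # HWC input tensor
STRIDE = 2                # folding factor, assumed equal to the first layer's stride
C_VEC = 16                # channel vector width of the IP (architecture-dependent)

# U8 input in HWC order, as the hardware expects.
x_u8 = np.random.randint(0, 256, size=(H, W, C), dtype=np.uint8)

# Step 1: convert U8 elements to FP16 (the hardware does this when configured for U8).
x = x_u8.astype(np.float16)

# Step 2: fold the spatial dimensions into channels (space-to-depth by the stride).
# (H, W, C) -> (H//S, S, W//S, S, C) -> (H//S, W//S, S*S*C)
x = x.reshape(H // STRIDE, STRIDE, W // STRIDE, STRIDE, C)
x = x.transpose(0, 2, 1, 3, 4).reshape(H // STRIDE, W // STRIDE, STRIDE * STRIDE * C)

# Step 3: pad the folded channel dimension up to a multiple of C_VEC and regroup it
# into CHWCvec order: [C/C_VEC][H][W][C_VEC].
c_folded = x.shape[-1]
c_padded = -(-c_folded // C_VEC) * C_VEC          # round up to a multiple of C_VEC
x = np.pad(x, ((0, 0), (0, 0), (0, c_padded - c_folded)))
x_chwcvec = x.reshape(H // STRIDE, W // STRIDE, c_padded // C_VEC, C_VEC)
x_chwcvec = x_chwcvec.transpose(2, 0, 1, 3)       # -> (C/C_VEC, H, W, C_VEC)

print(x_chwcvec.shape, x_chwcvec.dtype)           # (1, 112, 112, 16) float16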

To use the hardware layout transform, specify the --ffolding_option 1 compiler option described in "Compilation Options (dla_compiler Command Options)" in the FPGA AI Suite Compiler Reference Manual. The layout transform hardware does not currently support multi-batch inputs (N>1) or 5-dimensional input tensors. The hardware layout transform also does not apply scale and shift values; you must apply them to the inputs before running inference.
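
As a minimal illustration of this required software preprocessing, the sketch below applies per-channel scale and shift values to an HWC input before inference. The scale and shift values shown are hypothetical placeholders; substitute the values that your model's preprocessing requires.

import numpy as np

# Hypothetical per-channel scale and shift values (for example, from mean/std
# normalization); these are placeholders, not values defined by the IP.
scale = np.array([0.017, 0.017, 0.017], dtype=np.float16)
shift = np.array([-2.1, -2.0, -1.8], dtype=np.float16)

def preprocess(x_hwc_u8):
    # The hardware layout transform does not apply scale and shift, so apply
    # them in software before sending the tensor for inference.
    x = x_hwc_u8.astype(np.float16)
    return x * scale + shift    # broadcasts over the channel (last) dimension

x_u8 = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
x_ready = preprocess(x_u8)      # FP16, HWC, scaled and shifted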