FPGA AI Suite: IP Reference Manual

ID 768974
Date 3/29/2024

2.4.2.11. (Early Access) Parameter Group: layout_transform_params

These parameters configure the input tensor layout transformation module of the FPGA AI Suite IP.

Early access only: This feature is available as early access only in FPGA AI Suite 2024.1. Full support for this feature is planned for a future release.

Parameter: layout_transform_params/data_element_width

This parameter sets the width, in bits, of the input feature values. The layout transform hardware supports only U8 or FP16 inputs.

Legal values:
[8, 16]
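
For illustration only, the supported input types map onto this parameter as follows; this is a minimal sketch, and the dictionary name is hypothetical rather than part of the IP.

# Illustrative mapping from supported input type to the required
# data_element_width value (dictionary name is hypothetical).
DATA_ELEMENT_WIDTH = {
    "U8": 8,     # 8-bit unsigned integer inputs
    "FP16": 16,  # 16-bit half-precision floating-point inputs
}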

Parameters:
layout_transform_params/channels
layout_transform_params/feature_height
layout_transform_params/feature_width
layout_transform_params/feature_depth
layout_transform_params/stride_height
layout_transform_params/stride_width
layout_transform_params/stride_depth
layout_transform_params/pad_top
layout_transform_params/pad_left
layout_transform_params/pad_depth
layout_transform_params/output_channels
layout_transform_params/output_height
layout_transform_params/output_width
layout_transform_params/output_depth

This group configures the range of feature shapes, padding, and convolution strides that the layout transform hardware module supports.
Early access restriction: In the early access release of this feature, these values must be set to exactly match the parameters of the ML model's first convolution instruction. In a future release of FPGA AI Suite, the values in this configuration will represent maximum (rather than exact) values, allowing greater flexibility.
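
As an illustration of this restriction only, the sketch below shows the correspondence for a hypothetical model whose first convolution consumes a 3 x 224 x 224 x 1 (channels x height x width x depth) feature with a stride of 2 in height and width and single-pixel top and left padding. All of the model values and the Python names here are assumptions; only the parameter names come from the list above. The output_* entries hold the folded output dimensions, which are discussed after the relations below.

# Illustrative only: in the early access release, these values must
# exactly match the first convolution of the deployed model.
# The model values below are assumptions, not requirements of the IP.
first_conv = {
    "in_channels": 3,     # for example, an RGB image input
    "in_height": 224,
    "in_width": 224,
    "in_depth": 1,        # 2D model, so the depth dimension is 1
    "stride_height": 2,
    "stride_width": 2,
    "stride_depth": 1,
    "pad_top": 1,
    "pad_left": 1,
    "pad_depth": 0,
}

layout_transform_params = {
    "channels": first_conv["in_channels"],
    "feature_height": first_conv["in_height"],
    "feature_width": first_conv["in_width"],
    "feature_depth": first_conv["in_depth"],
    "stride_height": first_conv["stride_height"],
    "stride_width": first_conv["stride_width"],
    "stride_depth": first_conv["stride_depth"],
    "pad_top": first_conv["pad_top"],
    "pad_left": first_conv["pad_left"],
    "pad_depth": first_conv["pad_depth"],
    # output_channels, output_height, output_width, and output_depth
    # hold the folded output dimensions; see the folding sketch below.
}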

To perform the input tensor format transformation and folding operations in hardware, the module requires the feature dimensions of the first convolution operation in the model, along with its stride and padding values. The folded output dimensions are also required. Folding refers to moving values from the input tensor that are part of the same convolution filter stride into the channel dimension, which increases efficiency in the PE array, as described in Input Folding.

The folded output dimensions (output_channels, output_height, output_width, and output_depth) are derived from the input feature dimensions together with the stride and padding values of the first convolution.
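
The exact relations used by the IP are documented with the in-memory format and folding descriptions referenced below. The following is only a rough sketch, assuming a plain space-to-depth folding and ignoring the padding adjustment that the hardware also applies; the function name and example values are hypothetical.

import math

def folded_dims(channels, height, width, depth,
                stride_height, stride_width, stride_depth):
    # Rough space-to-depth folding sketch (assumption: padding ignored).
    # Values separated by the convolution stride move into the channel
    # dimension, so the channel count grows by the product of the
    # strides while each spatial dimension shrinks by its stride.
    output_channels = channels * stride_height * stride_width * stride_depth
    output_height = math.ceil(height / stride_height)
    output_width = math.ceil(width / stride_width)
    output_depth = math.ceil(depth / stride_depth)
    return output_channels, output_height, output_width, output_depth

# Example: a 3-channel 224 x 224 input (depth 1) ahead of a first
# convolution with a 2 x 2 stride folds to 12 channels of 112 x 112.
print(folded_dims(3, 224, 224, 1, 2, 2, 1))   # (12, 112, 112, 1)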

For more information about the input tensor layout transform, refer to Input Feature Tensor In-Memory Format.