FPGA AI Suite: IP Reference Manual

ID 768974
Date 9/06/2024
Public

2.5.2.13. (Early Access) Parameter Group: layout_transform_params

These parameters configure the input tensor layout transformation module of the FPGA AI Suite IP.

Early access only: This feature has early access support only. Full support for this feature is planned for a future release.

Parameter: layout_transform_params/do_u8_fp16_conversion

When true, this parameter enables the hardware to convert 8-bit unsigned integer input values to FP16 format; in this mode, inputs must be written as 8-bit unsigned integers. When false, no conversion is performed and you must write FP16 values at the input.

Legal values:
[true, false]
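Numerically, the conversion is a widening cast: every unsigned 8-bit integer is exactly representable in FP16. The following sketch (illustrative only, not the hardware implementation) shows the equivalent cast and the resulting half-precision bit patterns using Python's standard library:

```python
import struct

def u8_to_fp16_bits(value: int) -> int:
    """Cast an unsigned 8-bit integer to its IEEE 754 half-precision bit pattern."""
    if not 0 <= value <= 255:
        raise ValueError("input must be an unsigned 8-bit integer")
    # Every integer in [0, 255] is exactly representable in FP16,
    # which has an 11-bit significand (including the implicit bit).
    packed = struct.pack('<e', float(value))  # 'e' = binary16
    return int.from_bytes(packed, 'little')

print(hex(u8_to_fp16_bits(1)))    # 0x3c00
print(hex(u8_to_fp16_bits(255)))  # 0x5bf8
```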

Parameters:
layout_transform_params/channels
layout_transform_params/feature_height
layout_transform_params/feature_width
layout_transform_params/feature_depth
layout_transform_params/stride_height
layout_transform_params/stride_width
layout_transform_params/stride_depth
layout_transform_params/pad_top
layout_transform_params/pad_left
layout_transform_params/pad_depth
layout_transform_params/output_channels
layout_transform_params/output_height
layout_transform_params/output_width
layout_transform_params/output_depth

This group configures the range of feature shapes, padding, and convolution strides that the layout transform hardware module supports.
Early access restriction: In the early access release of this feature, these values must exactly match the parameters of the ML model's first convolution instruction. In a future release of the FPGA AI Suite, these values will represent maximum (rather than exact) supported values, for greater flexibility.
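As an illustration, the group below sketches a configuration whose values match a hypothetical first convolution (3-channel 224x224 input, 2x2 stride, 3-pixel top/left padding). The field names follow this section, but the file syntax and the output_* values are assumptions, not taken from the manual; derive the output dimensions for your own model from its first convolution.

```
layout_transform_params {
  do_u8_fp16_conversion: true
  channels: 3
  feature_height: 224
  feature_width: 224
  feature_depth: 1
  stride_height: 2
  stride_width: 2
  stride_depth: 1
  pad_top: 3
  pad_left: 3
  pad_depth: 0
  output_channels: 12    # illustrative folded values only
  output_height: 114
  output_width: 114
  output_depth: 1
}
```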

To perform the input tensor format transformation and folding operations in hardware, the module requires the feature dimensions, along with the stride and padding values of the first convolution operation in the model. The folded output dimensions are also required. Folding refers to moving values from the input tensor that belong to the same convolution filter stride into the channel dimension, which increases the efficiency of the PE array, as described in Input Folding.

The folded output dimensions are derived from the input tensor according to the following relations:
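The relations themselves do not appear in this excerpt. As a hedged sketch only (a common input-folding formulation assumed here, not necessarily the IP's exact relations), folding multiplies the channel count by the stride volume while each padded spatial dimension shrinks by its stride:

```python
import math

def folded_dims(channels, height, width, depth,
                stride_h, stride_w, stride_d,
                pad_top, pad_left, pad_depth):
    """Illustrative input-folding relations (assumed, not the IP's exact ones).

    Folding moves the values covered by one filter stride into the channel
    dimension, so channels multiply by the stride volume while each spatial
    dimension (after padding) shrinks by its stride.
    """
    out_channels = channels * stride_h * stride_w * stride_d
    out_height = math.ceil((height + pad_top) / stride_h)
    out_width = math.ceil((width + pad_left) / stride_w)
    out_depth = math.ceil((depth + pad_depth) / stride_d)
    return out_channels, out_height, out_width, out_depth

# Example: 3-channel 224x224 input, 2x2 stride, 3-pixel top/left padding
print(folded_dims(3, 224, 224, 1, 2, 2, 1, 3, 3, 0))  # (12, 114, 114, 1)
```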

For more information about the input tensor layout transform, refer to Input Feature Tensor In-Memory Format.