Intel® FPGA AI Suite: IP Reference Manual

ID 768974
Date 7/03/2023
Public



2. About the Intel® FPGA AI Suite IP

The Intel® FPGA AI Suite IP is an RTL-instantiable configurable IP with AXI interfaces that you can instantiate into a generic embedded FPGA system.

The IP is configured through parameters defined in an Architecture Description File. The Architecture Description File, along with the OpenVINO™ intermediate representation of your trained model, is compiled by the Intel® FPGA AI Suite compiler into configuration instructions for the IP.
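As a sketch of this flow, a compiler invocation might look like the following. The file names are placeholders, and the option spellings are assumptions; the Intel® FPGA AI Suite Compiler Reference Manual gives the authoritative `dla_compiler` usage.

```shell
# Compile an OpenVINO IR (model.xml/model.bin) against a chosen
# Architecture Description File. Paths and option spellings are
# illustrative only -- consult the Compiler Reference Manual.
dla_compiler \
    --march my_architecture.arch \
    --network-file model.xml \
    --o compiled_model.bin
```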

The following diagram shows a high-level architecture of the Intel® FPGA AI Suite IP.

Figure 1. High-Level Architecture of the Intel® FPGA AI Suite IP

The primary parameters defined in an Architecture Description File cover the following properties:

  • PE array vectorization
  • Scratch pad sizing
  • External memory bus bandwidth
  • Types/vectorization of auxiliary layer blocks
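To make these property categories concrete, the fragment below sketches how such parameters might be expressed. Every field name here is hypothetical and invented for illustration; the actual Architecture Description File schema is documented with the Intel® FPGA AI Suite and differs from this sketch.

```
# Hypothetical Architecture Description File sketch.
# All field names are illustrative, not the real schema.
pe_array {
  num_lanes: 16              # PE array vectorization
  num_kernels: 32
}
scratchpad_size_kb: 512      # scratch pad sizing
ddr_bus_width_bits: 512      # external memory bus bandwidth
aux_blocks: [POOL, ACTIVATION]  # auxiliary layer blocks and their types
```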

The following diagram is an architecture diagram for a specific instantiation of the Intel® FPGA AI Suite IP. The blocks connected to the crossbar in this diagram are examples; the actual selection of blocks connected to the crossbar is determined by compile-time parameters.

Figure 2. Architecture of an Example Instantiation of the Intel® FPGA AI Suite IP

Two teams are typically involved in the implementation of an AI feature:
  • A machine learning (ML) team responsible for developing and delivering an AI model.
  • An FPGA team responsible for integrating the Intel® FPGA AI Suite IP and runtime together into a system.

Defining the IP architecture straddles the boundary between these two teams. The ML team must develop an AI model that meets the target performance in some parameterization of the configurable IP. The FPGA team must ensure it fits onto the FPGA and closes timing.

Responsibility for defining the parameterization of the configurable architecture is shared between the two teams and can lie with either of them, but it is often easiest for the ML team to define the architecture.

The team responsible for defining the IP parameterization can use the Intel® FPGA AI Suite compiler (dla_compiler command) area and performance estimator tools to guide their decisions. The Intel® FPGA AI Suite Compiler Reference Manual describes how to use the dla_compiler tool.
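For example, area and performance estimates can be requested from a `dla_compiler` run while exploring candidate architectures. The analysis flags shown below are assumptions based on the Compiler Reference Manual; verify the exact spellings and file names there.

```shell
# Estimate resource usage and throughput for a candidate architecture.
# The --fanalyze-* flags are assumptions -- check the Compiler
# Reference Manual for the exact option names.
dla_compiler \
    --march candidate.arch \
    --network-file model.xml \
    --fanalyze-area \
    --fanalyze-performance
```

Iterating over several candidate Architecture Description Files this way lets the team trade off PE array vectorization against FPGA resource usage before committing to a build.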

In addition to the FPGA team and the ML team, a third team is typically responsible for the software integration on the host processor. Depending on the system details, this software interfaces with OpenVINO™ and communicates (via the BSP) with the Intel® FPGA AI Suite IP. It is typically based on the runtime system included with the PCIe Example Design, or possibly on an SoC Example Design.
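As a sketch of what the host-side integration can look like, the snippet below uses the standard OpenVINO™ 2.0 C++ API. The "HETERO:FPGA,CPU" device string and the file paths are assumptions that depend on the example design and on how the runtime/BSP registers the device plugin; treat this as an illustration, not the example design's actual code.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Load the OpenVINO IR of the trained model (paths are placeholders).
    auto model = core.read_model("model.xml");

    // Target the FPGA AI Suite IP, falling back to the CPU for any layers
    // the IP cannot execute. The device string is an assumption that
    // depends on how the runtime/BSP registers the plugin.
    auto compiled = core.compile_model(model, "HETERO:FPGA,CPU");

    // Run a single inference; in real code the input tensors would be
    // populated before calling infer().
    ov::InferRequest request = compiled.create_infer_request();
    request.infer();
    return 0;
}
```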