FPGA AI Suite: Getting Started Guide

ID 768970
Date 3/29/2024

2. FPGA AI Suite Components

The FPGA AI Suite provides several components that help you enable AI inference on your Intel FPGAs.

The suite consists of the following components:

  • Compiler (dla_compiler command)
    The FPGA AI Suite compiler is a multipurpose tool that you can use for the following tasks (typical invocations are sketched after this component list):
    • Generate architectures

      Use the compiler to generate an IP parameterization that is optimized for a given machine learning (ML) model or set of models, while attempting to fit the FPGA AI Suite IP block into a given resource footprint.

    • Estimate IP performance

      Use the compiler to estimate the performance of the FPGA AI Suite IP for a given parameterization.

    • Estimate IP FPGA resource consumption

      Use the compiler to estimate the FPGA resources (ALMs, M20K blocks, and DSPs) required for a given IP parameterization.

    • Create an ahead-of-time compiled graph

      Use the compiler to create a compiled version of an ML model ahead of time. The compiled binary includes both the instructions needed to control the IP on the FPGA during inference and the model weights, and it is suitable for use by the example designs or in a production deployment.

    The compiler is distributed with an OpenVINO™ plugin that allows the PCIe example design to access the compiler in a just-in-time fashion.

  • IP generation tool

    The FPGA AI Suite IP generation tool customizes the FPGA AI Suite IP based on an input architecture file. The generated IP is placed into an IP library that you can import into an FPGA design with Platform Designer, use directly in a pure RTL design, or both.

  • Example designs
    The following design examples show how you can use the FPGA AI Suite IP:
    • PCIe-based design example for Arria® 10 devices

      This design example is based on the Intel® Programmable Acceleration Card (PAC) with Arria® 10 GX FPGA and uses the OPAE software stack. This example is distributed with the FPGA AI Suite.

      This example highlights how to instantiate the FPGA AI Suite IP. It also provides an example runtime that shows you how to control the IP, pass the model configuration to the IP to program it, and run inference on a neural network.

    • PCIe-based design example for Agilex™ 7 devices

      This design example is similar to the PCIe-based Arria® 10 design example, except that the design targets the Terasic* DE10-Agilex Development Board (DE10-Agilex-B2E2). This example is distributed with the FPGA AI Suite.

    • SoC design example for Arria® 10 devices

      This design example is based on the Arria® 10 SX SoC FPGA Development Kit. This example is distributed with the FPGA AI Suite.

      This example demonstrates how to use the ARM* SoC as a host for the FPGA AI Suite, as well as how to perform inference in a streaming data flow.

    • SoC design example for Agilex™ 7 devices

      This design example is based on the Agilex™ 7 FPGA I-Series Transceiver-SoC Development Kit. This example is distributed with the FPGA AI Suite.

      This example demonstrates how to use the ARM* SoC as a host for the FPGA AI Suite, as well as how to perform inference in a streaming data flow.

    • ARM-based design example

      This design example uses an Arria® 10 SoC FPGA.

      This example highlights how to use the FPGA AI Suite IP in a fully embedded FPGA design, with direct camera input and a video IP pipeline connected to the FPGA AI Suite IP.

      This design example is available separately and is not provided with the FPGA AI Suite.
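
As a quick illustration of the compiler tasks listed above, the following commands sketch a typical performance and area estimation run and a typical ahead-of-time compilation run. The option names, architecture file, and model paths are illustrative assumptions and may differ between releases (the sketch assumes the COREDLA_ROOT environment variable points to your FPGA AI Suite installation); refer to the FPGA AI Suite Compiler Reference Manual for the exact syntax supported by your version.

    # Estimate IP performance and FPGA resource consumption for a model
    # compiled against an example architecture file. Paths, architecture
    # names, and option spellings are illustrative; check your release.
    dla_compiler \
        --march $COREDLA_ROOT/example_architectures/A10_Performance.arch \
        --network-file ./resnet-50-tf/FP32/resnet-50-tf.xml \
        --fanalyze-performance \
        --fanalyze-area

    # Compile the model ahead of time into a binary that bundles the IP
    # control instructions and the model weights for use by the example
    # designs or a production deployment.
    dla_compiler \
        --march $COREDLA_ROOT/example_architectures/A10_Performance.arch \
        --network-file ./resnet-50-tf/FP32/resnet-50-tf.xml \
        --foutput-format=open_vino_hetero \
        --batch-size=1 \
        --o ./resnet-50-tf.bin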

The following diagram illustrates the connection between the two primary components of the FPGA AI Suite: the compiler and the IP.
Figure 1. Connection Between FPGA AI Suite Compiler and IP