Intel® FPGA AI Suite: Getting Started Guide

ID 768970
Date 2/02/2024
Public



2. Intel® FPGA AI Suite Components

The Intel® FPGA AI Suite consists of the following components, which together help you enable AI inference on Intel FPGAs:

  • Compiler (dla_compiler command)
    The Intel® FPGA AI Suite compiler is a multipurpose tool that you can use for the following tasks:
    • Generate architectures

      Use the compiler to generate an IP parameterization that is optimized for a given machine learning (ML) model or set of models, while attempting to fit the Intel FPGA AI Suite IP block into a given resource footprint.

    • Estimate IP performance

      Use the compiler to get an estimate of the performance of the AI IP block for a given parameterization.

    • Estimate IP FPGA resource consumption

      Use the compiler to get an estimate of the FPGA resources (ALMs, M20K blocks, and DSPs) required for a given IP parameterization.

    • Create an ahead-of-time compiled graph

      Create a compiled version of an ML model. The compiled binary includes both the instructions that control the IP during inference and the model weights. This compiled model is suitable for use by the example designs or in a production deployment. (An illustrative compiler invocation appears after the component list below.)

    An OpenVINO™ plugin distributed with the compiler allows the PCIe example design to invoke the compiler in a just-in-time fashion. (A minimal Python sketch of this just-in-time flow appears at the end of this section.)

  • IP generation tool

    The Intel® FPGA AI Suite IP generation tool customizes the Intel® FPGA AI Suite IP based on an input architecture file. The generated IP is placed into an IP library that you can import into an FPGA design with Platform Designer, use directly in a pure RTL design, or both.

  • Example designs

    The following design examples show how you can use the Intel® FPGA AI Suite IP:
    • PCIe-based design example for Intel® Arria® 10 devices

      This design example is based on the Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 GX FPGA and uses the OPAE software stack. This example is distributed with the Intel® FPGA AI Suite.

      This example shows how to instantiate the Intel® FPGA AI Suite IP and includes an example runtime that demonstrates how to control the IP, program it with the compiled model configuration, and run inference on a neural network.

    • PCIe-based design example for Intel Agilex® 7 devices

      This design example is similar to the PCIe-based Intel® Arria® 10 design example, except that the design targets the Terasic* DE10-Agilex Development Board (DE10-Agilex-B2E2). This example is distributed with the Intel® FPGA AI Suite.

    • SoC design example

      This design example is based on the Intel® Arria® 10 SX SoC FPGA Development Kit. This example is distributed with the Intel® FPGA AI Suite.

      This example demonstrates how to use the ARM* SoC as a host for the Intel® FPGA AI Suite, as well as how to perform inference in a streaming data flow.

    • ARM-based design example

      This design example is based on an Intel® Arria® 10 SoC FPGA.

      This example highlights how to use the Intel® FPGA AI Suite IP in a fully embedded FPGA design, with direct camera input and a video IP pipeline connected to the Intel® FPGA AI Suite IP.

      This example design is available separately and is not provided with the Intel® FPGA AI Suite.
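
For reference, the ahead-of-time compilation flow described in the compiler section is driven from the dla_compiler command line. The following Python sketch wraps one such invocation with subprocess; the architecture and model file names are placeholders, and the flag spellings follow common dla_compiler usage but may differ between releases, so confirm them with dla_compiler --help before relying on them.

    # Illustrative sketch only: wraps a hypothetical dla_compiler invocation.
    # The .arch, .xml, and .bin file names are placeholders, and the flag
    # spellings (--march, --network-file, --foutput-format, --o,
    # --fanalyze-performance, --fanalyze-area) should be confirmed against
    # `dla_compiler --help` for the installed release.
    import subprocess

    cmd = [
        "dla_compiler",
        "--march", "A10_Performance.arch",    # IP parameterization (architecture) file
        "--network-file", "resnet50.xml",     # OpenVINO IR of the ML model
        "--foutput-format=open_vino_hetero",  # emit a binary usable by the example design runtimes
        "--o", "resnet50_aot.bin",            # compiled graph: IP control instructions plus model weights
        "--fanalyze-performance",             # also report an estimate of IP performance
        "--fanalyze-area",                    # also report an ALM/M20K/DSP resource estimate
    ]
    subprocess.run(cmd, check=True)           # raises CalledProcessError if the compiler returns nonzero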

The following diagram illustrates the connection between the two primary components of the Intel® FPGA AI Suite, the compiler and the IP:
Figure 1. Connection Between Intel® FPGA AI Suite Compiler and IP
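
The just-in-time flow mentioned in the compiler section is driven through the standard OpenVINO™ Python API rather than the dla_compiler command line. The sketch below is illustrative only: the model file name and input shape are placeholders, and the device string "HETERO:FPGA,CPU" is an assumption about how the plugin exposes the IP to OpenVINO, so check the plugin documentation for the device name used by your release.

    # Illustrative sketch only; not part of the Intel FPGA AI Suite distribution.
    # Assumptions: "resnet50.xml" is a placeholder OpenVINO IR file, the input
    # shape is a placeholder, and "HETERO:FPGA,CPU" is the assumed device string
    # (layers the IP cannot execute fall back to the CPU).
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("resnet50.xml")                  # OpenVINO IR of the ML model
    compiled = core.compile_model(model, "HETERO:FPGA,CPU")  # plugin compiles for the IP just in time

    request = compiled.create_infer_request()
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input tensor
    results = request.infer({0: dummy_input})                # run inference on the FPGA IP
    print(results[compiled.output(0)].shape)                 # shape of the first output tensor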