FPGA AI Suite

The FPGA AI Suite enables FPGA designers, machine learning engineers, and software developers to create optimized FPGA AI platforms efficiently. Utilities in the suite speed up FPGA development for AI inference using familiar industry frameworks such as TensorFlow and PyTorch together with the OpenVINO toolkit, while also leveraging robust, proven FPGA development flows with the Quartus Prime software.

Read the FPGA AI Suite: Getting Started Guide ›


Benefits

High Performance

Agilex™ 7 FPGA M-Series can achieve a maximum theoretical performance of 88.5 INT8 TOPS, or 3,679 ResNet-50 frames per second at 90% FPGA utilization.1

Low Total Cost of Ownership with Easy System Integration

Integrate AI IP with other system-level components to achieve a smaller footprint, lower power, and lower latency.

AI Front End Support

Use your favorite AI front end such as TensorFlow, Caffe, PyTorch, MXNet, Keras, and ONNX.

Simple and Standard Flows

Create and add AI inference IP to current or new FPGA designs with Quartus Prime Software or Platform Designer.

Access to Pre-Trained Models

FPGA AI Suite supports most of the models in Open Model Zoo.
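
Models from Open Model Zoo can be pulled with the standard OpenVINO tooling before being fed into the conversion and compilation steps described below. A minimal sketch, assuming the omz_downloader utility from the openvino-dev Python package is installed and using resnet-50-tf purely as an example model name:

    # Sketch: fetch a pre-trained model from Open Model Zoo with the
    # omz_downloader utility (installed with `pip install openvino-dev`).
    # The model name and output directory are example choices for illustration.
    import subprocess

    subprocess.run(
        ["omz_downloader", "--name", "resnet-50-tf", "--output_dir", "models"],
        check=True,  # raise CalledProcessError if the download fails
    )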

Seamless Pre-Trained Model Conversion

The OpenVINO toolkit converts models from most standard frameworks into its intermediate representation.
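
To illustrate the conversion step, recent OpenVINO releases expose it directly in Python through convert_model and save_model, which write the .xml/.bin intermediate representation pair. This is a minimal sketch assuming an ONNX model file named model.onnx; the FPGA AI Suite documentation may pin a specific OpenVINO version whose tooling differs (for example, the older mo Model Optimizer command line):

    import openvino as ov

    # Convert a framework model (ONNX here; TensorFlow and PyTorch inputs are
    # also supported) into an OpenVINO model object. "model.onnx" is an example path.
    ov_model = ov.convert_model("model.onnx")

    # Serialize to the intermediate representation:
    # model.xml (network topology) and model.bin (weights and biases).
    ov.save_model(ov_model, "model.xml")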

Push-Button Optimized AI IP Generation

FPGA AI Suite generates optimized AI inference IP from a pre-trained AI model, sweeping the design space to balance resource usage against performance targets.

Hardware-less Early Model Validation

Bit-accurate2 software emulation of the AI inference IP is available through the OpenVINO plugin interface, enabling quicker evaluation of model accuracy without hardware.
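
Because the emulation is exposed through the OpenVINO plugin interface, switching between software emulation and the hardware IP should amount to changing the device string handed to the runtime. The sketch below uses a placeholder device name, since the exact plugin device string is defined by the FPGA AI Suite documentation rather than this page:

    from openvino import Core

    core = Core()
    model = core.read_model("model.xml")  # IR produced by the Model Optimizer

    # Placeholder: substitute the device name registered by the FPGA AI Suite
    # plugin for emulation or hardware (see the Getting Started Guide).
    # "CPU" is used here only so the sketch runs without the plugin installed.
    compiled = core.compile_model(model, "CPU")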

[Diagrams: FPGA AI Inference IP Development Flow, Pre-trained Model Implementation Flow, FPGA Implementation Flow]

FPGA AI Inference Development Flow

The development flow combines the hardware and software workflows into a single end-to-end AI workflow. The steps are as follows:

1. The OpenVINO Model Optimizer converts your pre-trained model into intermediate representation files: a network topology file (.xml) and a weights and biases file (.bin).

2. The FPGA AI Suite compiler is used to:

  • Provide estimated area or performance metrics for a given architecture file, or produce an optimized architecture file. (Architecture refers to inference IP parameters such as the size of the PE array, precisions, activation functions, interface widths, window sizes, etc.)
  • Compile the network files into a .bin file containing the network partitions for the FPGA, the CPU, or both, along with the weights and biases.

3. The compiled .bin file is imported by the user inference application at runtime (see the sketch after this list).

  • Runtime application programming interfaces (APIs) include the Inference Engine API (CPU/FPGA runtime partitioning and inference scheduling) and the FPGA AI API (DDR memory and FPGA hardware block management).

4. Reference designs are available that demonstrate the basic operations of importing the .bin file and running inference on the FPGA with supporting host CPUs (x86 and Arm processors), as well as hostless inference operation.

5. Software emulation of the FPGA AI Suite IP is accessible through the OpenVINO plugin interface, enabling quicker evaluation of the accuracy of the FPGA AI IP without access to hardware (available for the Agilex™ 5 FPGA only).
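
To make step 3 concrete, the sketch below shows a generic OpenVINO runtime application: it loads the intermediate representation, compiles it for a target device, and runs one inference on dummy data. The device string and the 1x3x224x224 input shape are assumptions for illustration; the FPGA AI Suite plugin's device name, and how it picks up the compiler's .bin output, are described in the FPGA AI Suite: Getting Started Guide rather than shown here.

    import numpy as np
    from openvino import Core

    core = Core()

    # Load the intermediate representation from step 1
    # (model.bin with the same base name is found automatically).
    model = core.read_model("model.xml")

    # Placeholder device string: "CPU" gives a quick functional check anywhere;
    # substitute the FPGA AI Suite plugin's device name to target the FPGA IP.
    compiled = core.compile_model(model, "CPU")

    # Dummy input; the shape must match the model (1x3x224x224 suits a typical
    # 224x224 RGB image classifier such as ResNet-50).
    input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Run a single synchronous inference and read the first output tensor.
    results = compiled([input_tensor])
    output = results[compiled.output(0)]
    print(output.shape)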

Notes:

Devices supported: Agilex™ 5 FPGA, Agilex™ 7 FPGA, Cyclone® 10 GX FPGA, Arria® 10 FPGA

Tested networks, layers, and activation functions3:

  • ResNet-50, MobileNet v1/v2/v3, YOLO v3, TinyYOLO v3, UNET, i3d
  • 2D Conv, 3D Conv, Fully Connected, Softmax, BatchNorm, EltWise Mult, Clamp
  • ReLU, PReLU, Tanh, Swish, Sigmoid, Reciprocal

System Level Architectures

FPGA AI Suite is flexible and configurable for a variety of system-level use cases. Figure 1 lists the typical ways to incorporate the FPGA AI Suite IP into a system. The use cases span different verticals, from optimized embedded platforms to applications with host CPUs (Intel® Core™ processors, Arm processors) to data center environments with Intel® Xeon® processors. The suite also supports hostless designs and soft processors such as the Nios® V processor.

Figure 1: Typical Intel FPGA AI Suite System Topologies

  • CPU offload: AI Accelerator
  • Multi-function CPU offload: AI Accelerator + Additional Hardware Function
  • Ingest / Inline Processing + AI: AI Accelerator + Direct Ingest and Data Streaming
  • Embedded SoC FPGA + AI: AI Accelerator + Direct Ingest and Data Streaming + Hardware Function + Embedded Arm or Nios® V Processors

FPGA AI Design Guided Journey

Explore the interactive FPGA AI Design Guided Journey, which provides step-by-step guidance for developing AI Intellectual Property (IP) designs.

Start designing

Learn More About FPGAi

Browse the FPGAi resources, white papers, and success stories

Learn more

Why Are FPGAs Especially Good for Implementing AI?

Read about the emerging use cases of FPGA-based AI inference in edge and custom AI applications, and software and hardware solutions for edge FPGA AI.

Read the white paper

Integrating and Deploying AI Models in the Operating Room

VITEC uses the FPGA AI Suite to collaborate on AI models for medical use.

Read the case study


Product and Performance Information

1. Performance varies by use, configuration, and other factors. Learn more at www.intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary.

2. Minor rounding differences between software emulation and hardware will typically result in differences of less than two units of least precision (ULPs).

3. Contact Intel Sales for more information on networks and functions not listed.
