High Bandwidth Memory (HBM2E) Interface Intel Agilex® 7 M-Series FPGA IP User Guide

ID 773264
Date 4/21/2023
Public



3.2. Intel Agilex® 7 M-Series Hard Memory NoC Subsystem

The Intel Agilex® 7 M-series FPGAs integrate a hard memory network-on-chip (NoC), a hardened block primarily intended to facilitate high-bandwidth data movement between the FPGA core fabric and the device periphery.
Note: As a prerequisite to this content, please review the first three chapters of the Intel Agilex® 7 M-Series FPGA Network-on-Chip (NoC) User Guide.

The hard memory NoC is implemented as two high-speed interconnect NoC subsystems which run horizontally along the top and bottom edges of the die.

Each high-speed interconnect NoC subsystem consists of a PLL, a subsystem manager (SSM), and multiple NoC segments that contain the following:

  • Initiator NoC Interface Unit. An AXI bridge onto the NoC. An initiator is an AXI slave that converts AXI commands from a design in the FPGA fabric into NoC requests, and converts NoC responses back into AXI responses to the user's design.
  • Target NoC Interface Unit. A bridge from the NoC to an AXI target in the periphery of the FPGA device. A target acts as an AXI master to the peripheral, such as a memory controller.
  • Switch. Acts as a router for the requests (see the conceptual sketch following this list).
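The following Python sketch simply restates this composition as a data model, to make the hierarchy easier to picture: each high-speed interconnect NoC subsystem contains one PLL, one subsystem manager (SSM), and multiple segments, each with an initiator NIU, a target NIU, and a switch. All class names, field names, and the segment count are illustrative assumptions and do not correspond to any Intel IP, parameter, or netlist naming.

    # Conceptual sketch only: names and segment count are illustrative, not from the IP.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class NocSegment:
        initiator_niu: str = "AXI bridge from the FPGA fabric onto the NoC"
        target_niu: str = "bridge from the NoC to an AXI target in the periphery"
        switch: str = "routes request packets between NIUs"

    @dataclass
    class HighSpeedNocSubsystem:
        location: str                      # "top" or "bottom" edge of the die
        pll: str = "clock generation for the subsystem"
        ssm: str = "subsystem manager (SSM)"
        segments: List[NocSegment] = field(default_factory=list)

    # Two subsystems, one along each horizontal edge of the die.
    top = HighSpeedNocSubsystem("top", segments=[NocSegment() for _ in range(4)])
    bottom = HighSpeedNocSubsystem("bottom", segments=[NocSegment() for _ in range(4)])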

M-series FPGAs include two separate high-speed interconnect NoC subsystems, identified as top and bottom. Both the top and bottom NoC subsystems interface with the UIB, IO96B, HPS, and core fabric. With the fabric NoC option, the AXI read response data is routed into the fabric through M20K columns instead of being delivered to the initiator's read data port. This reduces fabric routing utilization and improves read response data throughput.

NoC Subsystem for HBM2E Read-Write Operations

In Intel Agilex® 7 M-series FPGAs, fabric access to the HBM2E DRAM memory is exclusively through the hard memory NoC. Direct fabric access to and from the HBM2E DRAM is not supported, as illustrated in Figure 3.

The HBM2E DRAM memory connects to the UIB through the EMIB. The UIB interfaces with user logic via the integrated hardened memory NoC using the AXI4 protocol. The hard memory NoC enables a single AXI master in the user logic to access data in multiple pseudo-channels.
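As a way to picture how a single AXI master can reach several pseudo-channels, the minimal Python sketch below decomposes a linear byte address into a pseudo-channel index and an in-channel offset. The pseudo-channel count, interleave granularity, and function name are assumptions chosen purely for illustration; the actual address map is determined by the HBM2E IP and hard memory NoC configuration, not by this code.

    # Illustrative sketch only: constants and mapping are assumptions, not the IP's address map.
    PSEUDO_CHANNELS = 8          # assumed number of pseudo-channels visible to one AXI master
    INTERLEAVE_BYTES = 4096      # assumed interleave granularity in bytes

    def decode_address(linear_addr: int) -> tuple[int, int]:
        """Map a linear byte address to (pseudo_channel, offset_within_channel)."""
        block = linear_addr // INTERLEAVE_BYTES
        pseudo_channel = block % PSEUDO_CHANNELS
        offset = (block // PSEUDO_CHANNELS) * INTERLEAVE_BYTES + (linear_addr % INTERLEAVE_BYTES)
        return pseudo_channel, offset

    # Consecutive 4 KiB blocks rotate across the assumed pseudo-channels.
    for addr in (0x0000, 0x1000, 0x2000, 0x8000):
        print(hex(addr), "->", decode_address(addr))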

Figure 3. Abstract Block Diagram of HBM2E IP Components
Note: The above figure is an abstract diagram and is not intended to accurately represent the physical layout of the related blocks.

As illustrated in Figure 3, read and write transactions are routed from the FPGA's user logic to the initiators through their AXI interfaces. The initiators face the fabric and are part of the hard memory NoC block; they convert the AXI-based transactions to the packet-based protocol used by the hard memory NoC, which routes the transactions to their appropriate targets. The targets convert the packet-based traffic back to AXI; they face the UIB and connect to each of the controller's pseudo-channels. From the HBM controller channel, the transactions are transferred to the HBM2E DRAM through the PHY and I/Os.
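To visualize the conversion steps just described, the following Python sketch models the path of one read transaction as plain data passed through each stage: initiator NIU, NoC switches, target NIU, and an HBM controller pseudo-channel. Every structure, field, and function name here is an illustrative assumption; the real NoC packet format, routing, and controller interfaces are internal to the device, and user logic only ever sees the AXI4 interfaces of the initiators.

    # Conceptual data-flow sketch: all names and formats are illustrative assumptions.
    def initiator_niu(axi_read_cmd):
        """Fabric-facing initiator: convert an AXI read command into a NoC request packet."""
        return {"kind": "noc_request", "route_to": axi_read_cmd["addr"] >> 12,
                "payload": axi_read_cmd}

    def noc_switches(packet):
        """Switches route the packet toward the target selected by its route field."""
        return packet  # routing detail abstracted away

    def target_niu(packet):
        """UIB-facing target: convert the NoC packet back into an AXI command for a pseudo-channel."""
        return packet["payload"]

    def hbm_controller_pseudo_channel(axi_cmd):
        """Controller performs the access to HBM2E DRAM via the PHY and returns read data."""
        return {"kind": "read_data", "addr": axi_cmd["addr"], "data": b"\x00" * axi_cmd["bytes"]}

    # A read request traverses each stage in order; the read response returns
    # along the same path (omitted here for brevity).
    cmd = {"addr": 0x0000_2000, "bytes": 64}
    response = hbm_controller_pseudo_channel(target_niu(noc_switches(initiator_niu(cmd))))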