2.1. High-Level Architecture
Intel Agilex® 7 M-Series FPGAs provide two hard memory NoC subsystems that run horizontally along the top and bottom edges of the FPGA die. These subsystems are completely independent, and each subsystem interfaces with a separate set of peripherals. These horizontal networks spread memory bandwidth across the edge of the device, making it easier to saturate the memory bandwidth while avoiding routing congestion. Because the NoC is hard logic, it also reduces the need for soft interconnect logic, leaving more room for other IP functions.
The clock control segment contains a PLL for clock generation and a sub-system manager (SSM) for configuration. Other NoC segments interface with general purpose I/O (GPIO-B) banks where you can implement external memory interfaces. There are also segments to interface with the Universal Interface Bus (UIB) that connects to in-package high-bandwidth memory. The NoC segments contain switches, NoC initiators, and NoC targets. For details on each segment, refer to NoC Segments.
High-speed 512-bit links interconnect the switches within the NoC segments. Separate sets of links carry traffic left-to-right and right-to-left within the hard memory NoC, and each set has separate links for transaction requests and transaction responses.
NoC initiators connect AXI4 managers in the FPGA fabric to the hard memory NoC. NoC targets connect subordinate hardened memory controllers to the hard memory NoC.
You can choose to have the initiator return read data to M20K memory blocks in a column adjacent to the NoC initiator, using a configuration known as a fabric NoC. Because the data transfers directly into the FPGA fabric, the fabric NoC configuration reduces congestion at the edge of the device. Additionally, this configuration doubles the AXI4 read data width, enabling your design to fully utilize the high-bandwidth memory and external memory interfaces while running at a lower operating frequency.
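The following minimal sketch illustrates the width-versus-frequency trade-off described above. The 256-bit baseline read data width, the 512-bit doubled width, and the 64 GB/s bandwidth target are assumptions chosen only for illustration; refer to the device documentation for the actual interface widths of your configuration.

    # Illustrative sketch only: the data widths and bandwidth target are
    # assumptions for this example, not Intel Agilex 7 device specifications.
    def required_fclk_mhz(read_bw_gbps: float, data_width_bits: int) -> float:
        """Fabric clock frequency (MHz) needed to sustain read_bw_gbps (GB/s)."""
        bytes_per_beat = data_width_bits // 8      # bytes transferred per clock cycle
        return read_bw_gbps * 1e9 / bytes_per_beat / 1e6

    target_bw = 64.0                               # GB/s read bandwidth, illustrative
    print(required_fclk_mhz(target_bw, 256))       # assumed baseline width -> 2000.0 MHz
    print(required_fclk_mhz(target_bw, 512))       # doubled width          -> 1000.0 MHz

Doubling the read data width halves the fabric clock frequency needed to sustain the same read bandwidth, which is the benefit the fabric NoC configuration provides.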
HBM2e memory connects to targets through the Universal Interface Bus (UIB). All access between the FPGA fabric and HBM2e memory is through the hard memory NoC. Refer to the High Bandwidth Memory (HBM2E) Interface Intel Agilex® 7 FPGA IP User Guide for details on the HBM2e memory.
You can implement external memory protocols, such as DDR5, in GPIO-B I/O blocks. You can also use GPIO-B blocks for implementing other I/O functions.
Depending on memory speeds, protocols, and your design needs, you can access external memory interfaces through the hard memory NoC or directly from the FPGA fabric, bypassing the NoC. Refer to the External Memory Interfaces Intel Agilex® 7 M-Series FPGA IP User Guide for details on the external memory protocols supported in GPIO-B blocks and on when to use the hard memory NoC or bypass mode. Other I/O functions that you implement in GPIO-B blocks do not connect to the hard memory NoC; they always connect directly to the FPGA fabric. Note that functions that bypass the hard memory NoC can prevent the use of certain NoC initiator locations. For more information, refer to GPIO-B Bypass Mode and Initiators.
The hard memory NoC along the top edge of the die also connects to a multi-port front end (MPFE) for the Hard Processor System (HPS). The MPFE is located in the segment immediately next to the HPS and allows the HPS to initiate transactions on the hard memory NoC. The NoC initiators in the MPFE are similar to the NoC initiators that interface to the FPGA fabric, but they do not have the option to use the fabric NoC configuration, which transfers read data directly into M20K memory blocks. Refer to the Intel Agilex® 7 Hard Processor System Technical Reference Manual for details on the HPS.
Each hard memory NoC subsystem consists of several NoC segments connected horizontally by high-speed networks. Figure 2 (Intel Agilex® 7 M-Series Device Layout) shows the high-level layout of hard memory NoC elements in Intel Agilex® 7 M-Series devices. Along the top and bottom edges of the die are GPIO-B blocks for implementing external memory interfaces and UIB blocks for interfacing to HBM2e memory. Adjacent to these are the NoC GPIO-B and UIB segments that make up the hard memory NoC. NoC PLL and SSM segments are in the upper-left and lower-left corners. The vertical arrows extending from these segments into the die represent the optional fabric NoCs using M20K memory blocks.
Additionally, there is a service network within the hard memory NoC segments that runs in parallel to the main switch network. This service network connects the NoC SSM and the HPS AXI4 Lite initiator to AXI4 Lite targets. You can use this service network for sideband configuration and monitoring.
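As a purely illustrative sketch of sideband monitoring over this service network, the following Python fragment shows the general pattern of reading a status register through a memory-mapped AXI4 Lite target from software running under Linux on the HPS. The base address, register offset, and register contents are hypothetical placeholders, not values from the device register map; they demonstrate only the access pattern, and you must substitute addresses from the actual register map.

    # Purely illustrative: NOC_CSR_BASE and STATUS_OFFSET are hypothetical
    # placeholders, not addresses from the Intel Agilex 7 register map.
    import mmap
    import os
    import struct

    NOC_CSR_BASE = 0xF7000000   # hypothetical AXI4 Lite target base address
    STATUS_OFFSET = 0x0         # hypothetical status register offset
    MAP_SIZE = 0x1000           # map a single 4 KB page

    fd = os.open("/dev/mem", os.O_RDONLY | os.O_SYNC)
    try:
        window = mmap.mmap(fd, MAP_SIZE, mmap.MAP_SHARED, mmap.PROT_READ,
                           offset=NOC_CSR_BASE)
        (status,) = struct.unpack_from("<I", window, STATUS_OFFSET)
        print(f"status register: 0x{status:08x}")
        window.close()
    finally:
        os.close(fd)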
The following document sections describe the hard memory NoC segments and the fabric NoCs.