1. Introduction
2. IP Architecture and Functional Description
3. Parameters
4. Interfaces
5. Advanced Features
6. Testbench
7. Troubleshooting/Debugging
8. Intel® P-tile Avalon® Streaming IP for PCI Express* User Guide Archives
9. Document Revision History for the P-Tile Avalon® Streaming Intel® FPGA IP for PCI Express* User Guide
A. Configuration Space Registers
B. Root Port Enumeration
C. Implementation of Address Translation Services (ATS) in Endpoint Mode
D. Packets Forwarded to the User Application in TLP Bypass Mode
E. Using the Avery BFM for Intel P-Tile PCI Express Gen4 x16 Simulations
F. Bifurcated Endpoint Support for Independent Warm Resets
3.2.3.1. Device Capabilities
3.2.3.2. VirtIO Parameters
3.2.3.3. Link Capabilities
3.2.3.4. Legacy Interrupt Pin Register
3.2.3.5. MSI Capabilities
3.2.3.6. MSI-X Capabilities
3.2.3.7. Slot Capabilities
3.2.3.8. Latency Tolerance Reporting (LTR)
3.2.3.9. Process Address Space ID (PASID)
3.2.3.10. Device Serial Number Capability
3.2.3.11. Page Request Service (PRS)
3.2.3.12. Access Control Service (ACS) Capabilities
3.2.3.13. Power Management
3.2.3.14. Vendor Specific Extended Capability (VSEC) Registers
3.2.3.15. TLP Processing Hints (TPH)
3.2.3.16. Address Translation Services (ATS) Capabilities
4.1. Overview
4.2. Clocks and Resets
4.3. Serial Data Interface
4.4. Avalon-ST Interface
4.5. Hard IP Status Interface
4.6. Interrupt Interface
4.7. Error Interface
4.8. Hot Plug Interface (RP Only)
4.9. Power Management Interface
4.10. Configuration Output Interface
4.11. Configuration Intercept Interface (EP Only)
4.12. Hard IP Reconfiguration Interface
4.13. PHY Reconfiguration Interface
4.14. Page Request Service (PRS) Interface (EP Only)
4.4.1. TLP Header and Data Alignment for the Avalon-ST RX and TX Interfaces
4.4.2. Avalon®-ST RX Interface
4.4.3. Avalon®-ST RX Interface rx_st_ready Behavior
4.4.4. RX Flow Control Interface
4.4.5. Avalon®-ST TX Interface
4.4.6. Avalon®-ST TX Interface tx_st_ready Behavior
4.4.7. TX Flow Control Interface
4.4.8. Tag Allocation
5.2.2.5.1. VirtIO Common Configuration Capability Register (Address: 0x012)
5.2.2.5.2. VirtIO Common Configuration BAR Indicator Register (Address: 0x013)
5.2.2.5.3. VirtIO Common Configuration BAR Offset Register (Address: 0x014)
5.2.2.5.4. VirtIO Common Configuration Structure Length Register (Address: 0x015)
5.2.2.5.5. VirtIO Notifications Capability Register (Address: 0x016)
5.2.2.5.6. VirtIO Notifications BAR Indicator Register (Address: 0x017)
5.2.2.5.7. VirtIO Notifications BAR Offset Register (Address: 0x018)
5.2.2.5.8. VirtIO Notifications Structure Length Register (Address: 0x019)
5.2.2.5.9. VirtIO Notifications Notify Off Multiplier Register (Address: 0x01A)
5.2.2.5.10. VirtIO ISR Status Capability Register (Address: 0x02F)
5.2.2.5.11. VirtIO ISR Status BAR Indicator Register (Address: 0x030)
5.2.2.5.12. VirtIO ISR Status BAR Offset Register (Address: 0x031)
5.2.2.5.13. VirtIO ISR Status Structure Length Register (Address: 0x032)
5.2.2.5.14. VirtIO Device Specific Capability Register (Address: 0x033)
5.2.2.5.15. VirtIO Device Specific BAR Indicator Register (Address: 0x034)
5.2.2.5.16. VirtIO Device Specific BAR Offset Register (Address: 0x035)
5.2.2.5.17. VirtIO Device Specific Structure Length Register (Address: 0x036)
5.2.2.5.18. VirtIO PCI Configuration Access Capability Register (Address: 0x037)
5.2.2.5.19. VirtIO PCI Configuration Access BAR Indicator Register (Address: 0x038)
5.2.2.5.20. VirtIO PCI Configuration Access BAR Offset Register (Address: 0x039)
5.2.2.5.21. VirtIO PCI Configuration Access Structure Length Register (Address: 0x03A)
5.2.2.5.22. VirtIO PCI Configuration Access Data Register (Address: 0x03B)
6.3.5.1. ebfm_barwr Procedure
6.3.5.2. ebfm_barwr_imm Procedure
6.3.5.3. ebfm_barrd_wait Procedure
6.3.5.4. ebfm_barrd_nowt Procedure
6.3.5.5. ebfm_cfgwr_imm_wait Procedure
6.3.5.6. ebfm_cfgwr_imm_nowt Procedure
6.3.5.7. ebfm_cfgrd_wait Procedure
6.3.5.8. ebfm_cfgrd_nowt Procedure
6.3.5.9. BFM Configuration Procedures
6.3.5.10. BFM Shared Memory Access Procedures
6.3.5.11. BFM Log and Message Procedures
6.3.5.12. Verilog HDL Formatting Functions
6.3.5.11.1. ebfm_display Verilog HDL Function
6.3.5.11.2. ebfm_log_stop_sim Verilog HDL Function
6.3.5.11.3. ebfm_log_set_suppressed_msg_mask Task
6.3.5.11.4. ebfm_log_set_stop_on_msg_mask Verilog HDL Task
6.3.5.11.5. ebfm_log_open Verilog HDL Function
6.3.5.11.6. ebfm_log_close Verilog HDL Function
A.3.1. Intel-Defined VSEC Capability Header (Offset 00h)
A.3.2. Intel-Defined Vendor Specific Header (Offset 04h)
A.3.3. Intel Marker (Offset 08h)
A.3.4. JTAG Silicon ID (Offset 0x0C - 0x18)
A.3.5. User Configurable Device and Board ID (Offset 0x1C - 0x1D)
A.3.6. General Purpose Control and Status Register (Offset 0x30)
A.3.7. Uncorrectable Internal Error Status Register (Offset 0x34)
A.3.8. Uncorrectable Internal Error Mask Register (Offset 0x38)
A.3.9. Correctable Internal Error Status Register (Offset 0x3C)
A.3.10. Correctable Internal Error Mask Register (Offset 0x40)
2.1.1. Clock Domains
The P-Tile IP for PCI Express* has three primary clock domains:
- PHY clock domain (core_clk domain): this clock is synchronous to the SerDes parallel clock.
- EMIB/FPGA fabric interface clock domain (pld_clk domain): this clock is derived from the same reference clock (refclk0) as the SerDes, but it is generated by a stand-alone core PLL.
- Application clock domain (coreclkout_hip domain): this clock is an output of the P-Tile IP and has the same frequency as pld_clk (see the sketch after the figure below).
Figure 2. Clock Domains
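All user-side logic that connects to the IP's streaming interfaces should run in the coreclkout_hip domain. The following minimal Verilog sketch illustrates this; only coreclkout_hip is taken from this guide, while the reset name (app_rst_n), the data width, and the rx_st_data/rx_st_valid port names are assumptions for illustration.

```verilog
// Minimal sketch of user logic in the application clock domain.
// Only coreclkout_hip is a documented signal here; app_rst_n and the
// Avalon-ST port names/width below are assumptions for illustration.
module app_rx_capture #(
    parameter DATA_W = 512  // assumed Avalon-ST data width
) (
    input  wire              coreclkout_hip, // application clock from the P-Tile IP
    input  wire              app_rst_n,      // hypothetical user reset, synchronous to coreclkout_hip
    input  wire [DATA_W-1:0] rx_st_data,     // assumed Avalon-ST RX data port
    input  wire              rx_st_valid,    // assumed Avalon-ST RX valid port
    output reg  [DATA_W-1:0] captured
);
    // No clock-domain crossing is required here: the user-facing interface
    // is presented synchronous to coreclkout_hip, so a plain register works.
    always @(posedge coreclkout_hip or negedge app_rst_n) begin
        if (!app_rst_n)
            captured <= {DATA_W{1'b0}};
        else if (rx_st_valid)
            captured <= rx_st_data;
    end
endmodule
```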
The PHY clock domain (core_clk domain) is a dynamic frequency domain: the PHY clock frequency depends on the current link speed, as shown in the following table.
| Link Speed | PHY Clock Frequency | Application Clock Frequency |
|---|---|---|
| Gen1 | 125 MHz | Gen1 is supported only via link down-training and not natively. Hence, the application clock frequency depends on the configuration you choose in the IP Parameter Editor. For example, if you choose a Gen3 configuration, the application clock frequency is 250 MHz. |
| Gen2 | 250 MHz | Gen2 is supported only via link down-training and not natively. Hence, the application clock frequency depends on the configuration you choose in the IP Parameter Editor. For example, if you choose a Gen3 configuration, the application clock frequency is 250 MHz. |
| Gen3 | 500 MHz | 250 MHz |
| Gen4 | 1000 MHz | 175 MHz / 200 MHz / 225 MHz / 350 MHz / 400 MHz / 450 MHz (Intel® Stratix® 10 DX); 175 MHz / 200 MHz / 225 MHz / 250 MHz / 350 MHz / 400 MHz / 450 MHz / 500 MHz (Intel® Agilex™) |
Note: In a link down-training scenario, when the P-Tile IP is configured for Gen3 or Gen4 and the link is down-trained to a lower speed, the application clock continues to run at the frequency set in the PLD Clock Frequency parameter. For example, if the PCIe Hard IP Mode parameter is set to Gen4 1x16 and the PLD Clock Frequency parameter to 450 MHz, the PLD clock continues to run at 450 MHz even if the link is down-trained to Gen3 or lower.
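A practical consequence of this note is that time-based user logic can be parameterized with the configured PLD Clock Frequency and remain correct regardless of the negotiated link speed. The following is a minimal sketch, assuming a 450 MHz configuration; the module and port names are illustrative, not part of the IP.

```verilog
// Sketch: microsecond tick generator whose timing stays correct even if
// the link down-trains, because coreclkout_hip keeps running at the
// configured PLD Clock Frequency. All names here are illustrative.
module us_tick #(
    parameter CLK_FREQ_HZ = 450_000_000  // the configured PLD Clock Frequency
) (
    input  wire clk,      // connect to coreclkout_hip
    input  wire rst_n,    // hypothetical user reset, synchronous to clk
    output reg  tick_1us  // one-cycle pulse every microsecond
);
    localparam TICKS_PER_US = CLK_FREQ_HZ / 1_000_000;
    reg [$clog2(TICKS_PER_US)-1:0] count;

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            count    <= 0;
            tick_1us <= 1'b0;
        end else if (count == TICKS_PER_US - 1) begin
            count    <= 0;
            tick_1us <= 1'b1;
        end else begin
            count    <= count + 1'b1;
            tick_1us <= 1'b0;
        end
    end
endmodule
```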