5.5. Enabling Custom Cadence Generation Ports and Logic
This F-tile PMA/FEC Direct PHY design uses System PLL clocking mode to clock the digital datapath of the FGT PMA lane. Because the system PLL frequency (830.078125 MHz) is greater than the PMA clock frequency (805.6640625 MHz), you must enable the custom cadence generation ports and logic option in the IP parameter editor.
- You must use the tx_cadence output port to assert and deassert the TX PMA interface data valid bit (one of the bits in the TX parallel data), as illustrated in the sketch after the figure. Refer to Parallel Data Mapping Information.
- You must connect tx_cadence_fast_clk to tx_clkout/tx_clkout2 with the clock source System PLL Clock / 2 (415.0390625 MHz).
- You must connect tx_cadence_slow_clk to tx_clkout/tx_clkout2 with the clock source Word clock or Bond clock / 2 (402.83203125 MHz).
Figure 105. Enabling Custom Cadence Generation Ports and Logic
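The following SystemVerilog sketch illustrates how user logic can drive the data valid bit from tx_cadence. It is a minimal sketch only: the module name, the 80-bit parallel data width, and the data valid bit position (bit [72]) are assumptions for illustration and are not taken from this guide; use the bit positions defined in Parallel Data Mapping Information for your configuration.

// Minimal sketch: widths and the data valid bit position are assumed for
// illustration only. The gating is combinational and sits in the
// tx_cadence_fast_clk (System PLL clock / 2) domain.
module tx_cadence_gating (
    input  logic        tx_cadence,        // cadence output port of the PHY IP
    input  logic [63:0] user_tx_data,      // payload from user core logic
    output logic [79:0] tx_parallel_data   // TX parallel data input of the PHY IP
);
    always_comb begin
        tx_parallel_data        = '0;
        tx_parallel_data[63:0]  = user_tx_data;   // assumed payload bit positions
        tx_parallel_data[72]    = tx_cadence;     // assert/deassert the data valid bit per cadence
    end
endmodule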
Rate Match FIFO Requirement
The following guidelines apply to the elastic FIFO requirement between the user FPGA core logic and the F-Tile PMA/FEC Direct PHY Intel® FPGA IP:
- If the user FPGA core logic runs at the same frequency as the system PLL frequency/2 (that is, 415.0390625 MHz), no elastic FIFO is required between the user FPGA core logic and the F-Tile PMA/FEC Direct PHY Intel® FPGA IP.
- If the user FPGA core logic runs at the PMA clock frequency/2 (that is, 402.83203125 MHz), you must implement an elastic FIFO between the user FPGA core logic and the F-tile core interface FIFO to transfer data from the PMA clock frequency domain to the system PLL clock frequency domain, as shown in the sketch below.
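A dual-clock (elastic) FIFO for this crossing might look like the following sketch. It assumes the DCFIFO Intel® FPGA IP (dcfifo) as the elastic FIFO; the module name, payload width, FIFO depth, and the way phy_valid feeds the TX data valid bit are illustrative assumptions, and the dcfifo parameters and ports should be verified against the FIFO IP you generate.

// Minimal sketch, assuming a dcfifo instance as the elastic FIFO between the
// user core logic (PMA clock / 2 domain) and the F-tile core interface FIFO
// (system PLL clock / 2 domain). Widths, depth, and reset handling are
// illustrative only.
module tx_elastic_fifo (
    input  logic        user_clk,     // PMA clock / 2 domain (402.83203125 MHz)
    input  logic        phy_clk,      // system PLL clock / 2 domain (415.0390625 MHz)
    input  logic        user_valid,   // write strobe from user core logic
    input  logic [63:0] user_data,    // payload from user core logic
    input  logic        tx_cadence,   // read only on cycles the PHY accepts data
    output logic [63:0] phy_data,     // payload toward tx_parallel_data
    output logic        phy_valid     // drives the TX PMA interface data valid bit
);
    logic rdempty;

    dcfifo #(
        .lpm_width     (64),
        .lpm_numwords  (32),
        .lpm_widthu    (5),
        .lpm_showahead ("ON")
    ) u_elastic_fifo (
        .wrclk   (user_clk),
        .wrreq   (user_valid),
        .data    (user_data),
        .rdclk   (phy_clk),
        .rdreq   (tx_cadence & ~rdempty),
        .q       (phy_data),
        .rdempty (rdempty),
        .wrfull  ()                   // monitor in user logic for backpressure
    );

    // Present data as valid only when the FIFO has data and the cadence allows it.
    assign phy_valid = tx_cadence & ~rdempty;
endmodule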