3.3.1.2. FGT PMA Configuration Rules for GPON Mode
You can implement the upstream GPON, XG(S)PON, 25G PON, and 50G asymmetric PON protocols with the F-Tile PMA/FEC Direct PHY Intel® FPGA IP by using the settings shown below:
- Set the FGT PMA configuration rules parameter to GPON.
- Set the Adaptation mode parameter to manual.
- Enable the fgt_rx_cdr_fast_freeze_sel port.
- Enable the fgt_rx_cdr_freeze port.
To achieve the best FGT RX performance when receiving burst-mode traffic, you must adhere to the following guidelines:
- You must make sure that register bits 0x62000[16] and 0x62004[12] are set to 1'b1, as shown in the sketch below.
Note: 0x62000 and 0x62004 are the offset addresses for lane 0.
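The following is a minimal C sketch of these writes, assuming a generic read-modify-write layer over the PMA Avalon memory-mapped interface; pma_avmm_read and pma_avmm_write are hypothetical placeholders for your design's actual access mechanism, and the offsets shown apply to lane 0:

```c
#include <stdint.h>

/* Hypothetical helpers: replace with your design's actual PMA
 * Avalon memory-mapped access routines. */
extern uint32_t pma_avmm_read(uint32_t offset);
extern void pma_avmm_write(uint32_t offset, uint32_t value);

/* Set one bit in a PMA register with a read-modify-write so that
 * the surrounding fields are preserved. */
static void pma_set_bit(uint32_t offset, unsigned bit)
{
    uint32_t v = pma_avmm_read(offset);
    pma_avmm_write(offset, v | (1u << bit));
}

/* Burst-mode RX bits listed in this section. The offsets apply to
 * lane 0; adjust them for other lanes accordingly. */
void gpon_enable_burst_mode_bits(void)
{
    pma_set_bit(0x62000, 16);
    pma_set_bit(0x62004, 12);
}
```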
- You must tie the fgt_rx_cdr_fast_freeze_sel signal to 1'b0.
- You must assert the fgt_rx_cdr_freeze signal when the burst disappears and deassert it when the burst appears. For the timing relationship between the fgt_rx_cdr_freeze signal and the bursts, observe the following:
- The transceiver input signal fgt_rx_cdr_freeze propagates to *ingress*231* with a latency of about 10 ns.
- *ingress*231* is the internal signal that controls the CDR freeze or unfreeze logic.
- It is recommended that you align the assertion and deassertion of the *ingress*231* signal with the disappearance and reappearance of rx_serial_data.
- You can capture the *ingress*231* signal using Signal Tap via the path *__tiles|z*_x*_y*_n*__reset_controller|x_f_tile_soft_reset_ctlr_sip_v1|x_ftile_reset|rst_ctrl|iflux_ingress_direct_231
Signal Condition | Early | Late
---|---|---
Assertion of *ingress*231* | The CDR may not track the tail of the prior data, resulting in a higher BER. | The CDR can drift in frequency, resulting in a longer lock time on the next burst.
Deassertion of *ingress*231* | The CDR can drift in frequency, resulting in a longer lock time on the next burst. | The start of the preamble can be missed, resulting in a longer lock time on the current burst.
Note: It is acceptable if you cannot perfectly align the *ingress*231* signal with rx_serial_data. The FGT RX CDR can lock to the incoming burst within the preamble duration, fast enough to meet the PON-related specifications.
- During the idle time (no active burst), the differential voltage at the FGT RX should be 0 rather than a negative value. This ensures that the AC-coupling capacitor can quickly charge to a stable value when the burst arrives.
- If an optical line terminal (OLT) optical module is connected to the FGT RX, enable squelch to meet the 0 differential voltage requirement.
- If the FGT TX is connected to the FGT RX, enable TX electrical idle to meet the 0 differential voltage requirement. For PON applications with a 32-bit PMA width (see the sketch below):
- To enable TX electrical idle: set tx_parallel_data bits [35] and [75] to 1'b1
- To disable TX electrical idle: set tx_parallel_data bits [35] and [75] to 1'b0
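As an illustration only, the following C sketch models the 80-bit per-lane tx_parallel_data bus (32-bit PMA width) as three 32-bit words and toggles the two electrical idle control bits; in hardware, these bits are driven directly by your TX datapath logic:

```c
#include <stdbool.h>
#include <stdint.h>

/* Electrical idle control bit positions from this section. */
#define TX_EI_BIT_LO 35u
#define TX_EI_BIT_HI 75u

/* Set or clear one bit of the 80-bit bus, where word 0 holds
 * bits [31:0], word 1 holds bits [63:32], and so on. */
static void set_tx_bit(uint32_t data[3], unsigned bit, bool on)
{
    uint32_t mask = 1u << (bit % 32u);
    if (on)
        data[bit / 32u] |= mask;
    else
        data[bit / 32u] &= ~mask;
}

/* Drive TX electrical idle during idle gaps, release it for bursts. */
void set_tx_electrical_idle(uint32_t tx_parallel_data[3], bool idle)
{
    set_tx_bit(tx_parallel_data, TX_EI_BIT_LO, idle);
    set_tx_bit(tx_parallel_data, TX_EI_BIT_HI, idle);
}
```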
- You must manually tune the RX EQ parameters: VGA gain, high-frequency boost, and DFE data tap 1.
Note: When the other parameters are fixed, a smaller RX input voltage swing requires a smaller VGA gain value, and a larger swing requires a larger value.
- If the tuned RX EQ parameters cannot achieve the performance you require, you may also need to manually tune the CDR gain parameters: proportional gain and integral gain.
- When the fgt_rx_cdr_freeze signal asserts, the integral path is frozen while the proportional path remains active.
- Higher gain values for the proportional and integral paths help the RX CDR realign to the incoming data phase more quickly, but can create higher jitter in the process.
- While the fgt_rx_cdr_freeze signal is asserted, a higher proportional path gain may speed up the drift and move the CDR further from the target phase alignment.
- The proportional gain register field is 0x4157C[24:20].
Note: This is the offset address for lane 0.
- The integral gain register fields are 0x4158C[17:13], 0x41484[14:10], 0x41484[24:20], 0x41488[4:0], 0x41488[14:10], 0x41488[24:20], 0x4148C[4:0], and 0x4148C[14:10].
Note: These are the offset addresses for lane 0.
- Example optimal settings for the CDR gain registers, programmed in the sketch below, are:
- Proportional gain value: 0xA
- Integral gain value: 0xC
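The following C sketch programs these example values with read-modify-writes that preserve the neighboring fields; pma_avmm_read and pma_avmm_write are hypothetical placeholders for your design's actual Avalon memory-mapped access mechanism, and the offsets apply to lane 0:

```c
#include <stdint.h>

/* Hypothetical access helpers; see the earlier sketch. */
extern uint32_t pma_avmm_read(uint32_t offset);
extern void pma_avmm_write(uint32_t offset, uint32_t value);

/* Write a 5-bit field at the given LSB position, preserving the
 * rest of the register. */
static void pma_write_field5(uint32_t offset, unsigned lsb, uint32_t value)
{
    uint32_t v = pma_avmm_read(offset);
    v &= ~(0x1Fu << lsb);
    v |= (value & 0x1Fu) << lsb;
    pma_avmm_write(offset, v);
}

/* Example CDR gain settings from this section (lane 0 offsets). */
void gpon_set_cdr_gains(void)
{
    /* Proportional gain: 0x4157C[24:20] = 0xA */
    pma_write_field5(0x4157C, 20, 0xA);

    /* Integral gain fields, all set to 0xC */
    pma_write_field5(0x4158C, 13, 0xC);
    pma_write_field5(0x41484, 10, 0xC);
    pma_write_field5(0x41484, 20, 0xC);
    pma_write_field5(0x41488,  0, 0xC);
    pma_write_field5(0x41488, 10, 0xC);
    pma_write_field5(0x41488, 20, 0xC);
    pma_write_field5(0x4148C,  0, 0xC);
    pma_write_field5(0x4148C, 10, 0xC);
}
```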
- Set the Enable fgt_rx_cdr_set_locktoref port parameter to On.
- Set the CDR lock mode parameter to auto.
- Set register fields 0x41678[27:26] and 0x41678[29:28] to 2'b11; otherwise, the LTR/LTD switching may fail.
- Set register bits 0x41580[30] and 0x41580[31] to 1'b1; otherwise, rx_parallel_data may be invalid during LTR mode.
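A minimal C sketch of both writes, again assuming the hypothetical pma_avmm_read and pma_avmm_write access helpers and lane 0 offsets:

```c
#include <stdint.h>

/* Hypothetical access helpers; see the earlier sketches. */
extern uint32_t pma_avmm_read(uint32_t offset);
extern void pma_avmm_write(uint32_t offset, uint32_t value);

/* Lock-to-reference settings from this section (lane 0 offsets). */
void gpon_set_ltr_bits(void)
{
    uint32_t v;

    /* 0x41678[27:26] = 2'b11 and 0x41678[29:28] = 2'b11 so that
     * LTR/LTD switching does not fail. */
    v = pma_avmm_read(0x41678);
    v |= (0x3u << 26) | (0x3u << 28);
    pma_avmm_write(0x41678, v);

    /* 0x41580[31:30] = 2'b11 so that rx_parallel_data remains
     * valid during LTR mode. */
    v = pma_avmm_read(0x41580);
    v |= (0x3u << 30);
    pma_avmm_write(0x41580, v);
}
```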