3.11.4. Mix and Match Example
In the Intel® Stratix® 10 transceiver architecture, keeping the Native PHY IP core separate from the PLL IP cores allows great flexibility: you can share PLLs among channels and reconfigure data rates easily. The following design example illustrates PLL sharing along with both bonded and non-bonded clocking configurations.
PLL Instances
In this example, two ATX PLL instances and three fPLL instances are used. Choose an appropriate reference clock for each PLL instance. The IP Catalog lists the available PLLs.
Use the following data rates and configuration settings for PLL IP cores:
- Transceiver PLL instance 0: ATX PLL with output clock frequency of 6.25 GHz
- Enable the Master CGB and bonding output clocks.
- Transceiver PLL instance 1: fPLL with output clock frequency of 5.1625 GHz
- Select the Use as Transceiver PLL option.
- Transceiver PLL instance 2: fPLL with output clock frequency of 0.625 GHz
- Transceiver PLL instance 3: fPLL with output clock frequency of 2.5 GHz
- Select the Enable PCIe clock output port option.
- Select the Use as Transceiver PLL option.
- Set Protocol Mode to PCIe Gen2.
- Transceiver PLL instance 4: ATX PLL with output clock frequency of 4 GHz
- Enable the Master CGB and bonding output clocks.
- Select the Enable PCIe clock switch interface option.
- Set Number of Auxiliary MCGB Clock Input ports to 1.
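In RTL, each PLL IP core becomes a separate instance whose output clocks are later routed to the PHY instances. The following Verilog sketch shows instances 0 and 1; the module names, wire names, and bus widths are illustrative assumptions, so use the wrappers that Quartus generates for your parameterization:

```verilog
// Illustrative sketch only: module and signal names are assumptions
// based on typical Quartus-generated wrappers; widths depend on the
// actual IP parameterization.

// Transceiver PLL instance 0: ATX PLL, 6.25 GHz, master CGB and
// bonding output clocks enabled.
wire [5:0] atx0_bonding_clocks;             // one 6-bit bonded clock group
atx_pll_0 atx_pll_inst0 (
  .pll_refclk0       (refclk_interlaken),   // choose a suitable reference clock
  .tx_bonding_clocks (atx0_bonding_clocks),
  .pll_locked        (atx0_locked),
  .pll_cal_busy      (atx0_cal_busy)
);

// Transceiver PLL instance 1: fPLL, 5.1625 GHz, "Use as Transceiver PLL".
wire fpll1_serial_clk;                      // drives the x1 clock line
fpll_1 fpll_inst1 (
  .pll_refclk0   (refclk_10gbaser),
  .tx_serial_clk (fpll1_serial_clk),
  .pll_locked    (fpll1_locked),
  .pll_cal_busy  (fpll1_cal_busy)
);
```

The remaining PLL instances follow the same pattern, with the additional PCIe ports on instances 3 and 4 shown in the connection sketch later in this section.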
Native PHY IP Core Instances
In this example, three Transceiver Native PHY IP core instances and two 10GBASE-KR PHY IP instances are used. Use the following data rates and configuration settings for the PHY IPs:
- 12.5 Gbps Interlaken with a bonded group of 10 channels
- Select the Interlaken 10x12.5 Gbps preset from the Intel® Stratix® 10 Transceiver Native PHY IP core GUI.
- 1.25 Gbps Gigabit Ethernet with a non-bonded group of two channels
- Select the GIGE-1.25Gbps preset from the Intel® Stratix® 10 Transceiver Native PHY IP core GUI.
- Change the Number of data channels to 2.
- PCIe Gen3 with a bonded group of 8 channels
- Select the PCIe PIPE Gen3x8 preset from the Intel® Stratix® 10 Transceiver Native PHY IP core GUI.
- Under TX Bonding options, set the PCS TX channel bonding master to channel 5.
Note: The PCS TX channel bonding master must be physically placed in channel 1 or channel 4 within a transceiver bank. In this example, the 5th channel of the bonded group is physically placed at channel 1 in the transceiver bank.
- Refer to PCI Express (PIPE) for more details.
- 10.3125 Gbps 10GBASE-KR non-bonded group of 2 channels
- Instantiate the Intel® Stratix® 10 1G/10GbE and 10GBASE-KR PHY IP two times, with one instance for each channel.
- Refer to 10GBASE-KR PHY IP Core for more details.
Connection Guidelines for PLL and Clock Networks
- For 12.5 Gbps Interlaken with a bonded group of 10 channels, connect the Native PHY IP core's tx_bonding_clocks input port to the tx_bonding_clocks output port of transceiver PLL instance 0 (the ATX PLL running at 6.25 GHz). Make this connection for all 10 bonded channels. This connection uses a master CGB and the x6/x24 clock lines to reach all the channels in the bonded group.
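In RTL, the bonded connection amounts to fanning the PLL's 6-bit bonding clock bus out to every channel. A hedged Verilog sketch follows, assuming the Native PHY concatenates one 6-bit group per channel (module and signal names are illustrative):

```verilog
// Illustrative sketch: the Interlaken Native PHY expects one 6-bit
// tx_bonding_clocks group per channel, so a 10-channel bonded group
// takes a 60-bit bus. Replicate the ATX PLL's bus for all channels.
wire [5:0] atx0_bonding_clocks;   // from transceiver PLL instance 0

native_phy_ilkn phy_ilkn_inst (
  .tx_bonding_clocks ({10{atx0_bonding_clocks}}),
  // data, reset, and reconfiguration ports omitted for brevity
  .tx_serial_data    (ilkn_tx_serial),
  .rx_serial_data    (ilkn_rx_serial)
);
```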
- Connect the tx_serial_clk port of the two instances of the 10GBASE-KR PHY IP to the tx_serial_clk port of PLL instance 1 (fPLL at 5.1625 GHz). This connection uses the x1 clock line within the transceiver bank.
- Connect the tx_serial_clk port of the 1.25 Gbps Gigabit Ethernet non-bonded PHY IP instance to the tx_serial_clk port of PLL instance 2. Make this connection once for each channel. This connection uses the x1 clock line within the transceiver bank.
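Both non-bonded groups follow the same pattern: each channel's tx_serial_clk input ties to the serial clock output of its fPLL. A hedged Verilog sketch, with illustrative module and signal names and assuming one serial clock input per channel:

```verilog
// Illustrative sketch: x1 (non-bonded) clocking. The same fPLL serial
// clock fans out to every channel that runs at that data rate.
wire fpll1_serial_clk;   // fPLL instance 1, 5.1625 GHz (10GBASE-KR)
wire fpll2_serial_clk;   // fPLL instance 2, 0.625 GHz (Gigabit Ethernet)

// Two single-channel 10GBASE-KR PHY instances share PLL instance 1.
kr_phy kr_inst0 (.tx_serial_clk (fpll1_serial_clk) /* other ports omitted */);
kr_phy kr_inst1 (.tx_serial_clk (fpll1_serial_clk) /* other ports omitted */);

// The two-channel GigE Native PHY takes one serial clock per channel.
native_phy_gige gige_inst (
  .tx_serial_clk ({2{fpll2_serial_clk}})
  // other ports omitted
);
```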
- Connect the PCIe Gen3 bonded group of 8 channels as follows:
- Connect the tx_bonding_clocks port of the PHY IP to the tx_bonding_clocks port of transceiver PLL instance 4. Make this connection for each of the 8 bonded channels.
- Connect the pipe_sw_done of the PHY IP to the pipe_sw port of the transceiver PLL instance 4.
- Connect the pll_pcie_clk port of PLL instance 3 to the PHY IP's pipe_hclk_in port.
- Connect the tx_serial_clk port of PLL instance 3 to the mcgb_aux_clk0 port of PLL instance 4. This connection is required as part of the PCIe speed negotiation protocol.
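The PCIe connections above can be summarized in one hedged Verilog sketch. Port directions follow the steps listed here; module names, wire names, and widths are assumptions:

```verilog
// Illustrative sketch of the PCIe Gen3 x8 clock connections.
wire [5:0] atx4_bonding_clocks;  // ATX PLL instance 4 master CGB outputs
wire       pcie_pipe_sw;         // PIPE rate-switch handshake
wire       fpll3_pcie_clk;       // pll_pcie_clk from fPLL instance 3
wire       fpll3_serial_clk;     // tx_serial_clk from fPLL instance 3

native_phy_pcie_g3x8 phy_pcie_inst (
  .tx_bonding_clocks ({8{atx4_bonding_clocks}}), // one 6-bit group per channel
  .pipe_sw_done      (pcie_pipe_sw),             // to pipe_sw of PLL instance 4
  .pipe_hclk_in      (fpll3_pcie_clk)            // PCIe hclk from PLL instance 3
  // data, reset, and status ports omitted for brevity
);

atx_pll_pcie atx_pll_inst4 (
  .tx_bonding_clocks (atx4_bonding_clocks),
  .pipe_sw           (pcie_pipe_sw),
  .mcgb_aux_clk0     (fpll3_serial_clk)          // required for speed negotiation
);

fpll_pcie fpll_inst3 (
  .pll_pcie_clk  (fpll3_pcie_clk),
  .tx_serial_clk (fpll3_serial_clk)
);
```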