
Teaming with Intel® Advanced Network Services

Content Type: Install & Setup   |   Article ID: 000005667   |   Last Reviewed: 05/28/2024

Adapter teaming with Intel® Advanced Network Services uses an intermediate driver to group multiple physical ports. You can use teaming to add fault tolerance, load balancing, and link aggregation features to a group of ports.

Note
  • Intel® Advanced Network Services isn't supported on Windows 11*. Microsoft Windows® 10 is the last Windows* Client operating system version that supports Intel® Advanced Network Services.
  • Intel® Advanced Network Services isn't supported on Windows Server 2016*/2019*. Microsoft Windows Server 2012 R2* is the last Windows* Server operating system version that supports Intel® Advanced Network Services.
  • If you're running Windows® 10 Creators Update (RS2) and are using Intel LAN software release v22.1 or v22.2, then Advanced Network Services won't function until you update to the Intel® LAN software v22.3 or newer.
  • If you're running Windows® 10 Version 1809 (RS5), you need to install Intel LAN software v23.5 or newer.
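
To confirm which Windows 10 feature update a system is running before choosing a driver release, you can query the version from PowerShell. This is a quick sketch, not an Intel tool; Get-ComputerInfo is available on Windows 10 with PowerShell 5.1 or later.

# Shows the feature-update version (for example 1703 for RS2 or 1809 for RS5).
Get-ComputerInfo -Property WindowsVersion, OsBuildNumber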


Download Intel® Advanced Network Services teaming software

The Drivers and Software for Intel® Ethernet Adapters page lists driver downloads for Intel® Ethernet Adapters that include Intel® Advanced Network Services teaming software. A separate download for teaming software isn't required or available.

The Drivers and Software for Intel® Ethernet Adapters download includes drivers for the following operating systems:

  • Windows® 10 and Windows Server 2016* / 2019* drivers
  • Windows 8.1* and Windows Server 2012 R2* drivers
  • Windows 8* and Windows Server 2012* drivers
  • Windows 7* and Windows Server 2008 R2* drivers
  • Older Windows and Linux* drivers, utilities, and other downloads
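
Before downloading, you can list the driver currently installed on each physical port from PowerShell (Windows 8 / Server 2012 and later). This is a minimal sketch; the property names follow the MSFT_NetAdapter class, and the output is compared manually against the download page.

# Lists installed NIC drivers so you can compare versions with the download page.
Get-NetAdapter -Physical |
    Format-List Name, InterfaceDescription, DriverProvider, DriverVersionString, DriverDate
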
Install Intel® Advanced Network Services teaming software

Intel® Advanced Network Services is installed by default along with Intel® PROSet for Windows* Device Manager. See the image below.

[Image: Install Intel® ANS teaming software]

When you run the installation from the CD included with your software or from downloaded software, the Setup Options screen gives you the choice to install Intel® Advanced Network Services. Intel® Advanced Network Services is selected by default, so no special action is required during installation.

If you unchecked Intel® Advanced Network Services during installation, modify the installation afterward and select Intel® Advanced Network Services as an installation option to add it.

If your adapter supports teaming, then a Teaming tab displays in Windows Device Manager after installing the software. Use the New Team option and follow the wizard to create a team.

[Image: Gigabit CT Desktop Adapter properties, Teaming tab]
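
If the PowerShell module that ships with Intel® PROSet is installed, a team can also be created without the Device Manager wizard. The module and cmdlet names below (IntelNetCmdlets, New-IntelNetTeam) and their parameters are taken from Intel's PROSet documentation as best understood here and should be treated as assumptions to verify on your system; the adapter and team names are hypothetical.

# Assumes the Intel PROSet PowerShell module is present; verify the cmdlet and
# parameter names on your installation before relying on them.
Import-Module IntelNetCmdlets
Get-NetAdapter -Physical | Format-Table Name, InterfaceDescription    # choose the member ports
New-IntelNetTeam -TeamName 'Team1' -TeamMemberNames 'Ethernet 1','Ethernet 2' -TeamMode AdapterFaultTolerance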

Supported adapters

Teaming options are supported for Windows* versions where the Intel adapters listed under Related Products below receive full software support.

When creating a team on a supported adapter, ports on non-supported adapters can display in the Intel® PROSet teaming wizard. Any port that displays in the Intel PROSet teaming wizard, including ports on older Intel adapters and non-Intel adapters, can be included in a team.

Intel® PRO/100 Adapters and PRO/1000 adapters that plug into PCI or PCI-X* slots don't support Intel® Advanced Network Services teaming in any Windows* versions later than Windows Vista* and Windows Server 2008.

Some advanced features, including hardware offloading, are automatically disabled when non-Intel adapters are team members to ensure a common feature set.

TCP Offload Engine (TOE) enabled devices can't be added to a team and don't display in the list of available adapters.

Microsoft Windows Server 2012* NIC teaming

Windows Server 2012 adds support for NIC teaming, also known as Load Balancing and Failover (LBFO). Intel® Advanced Network Services teaming and VLANs aren't compatible with Microsoft LBFO teams. Intel® PROSet blocks a member of an LBFO team from being added to an Intel® Advanced Network Services team or VLAN. Don't add a port that is part of an Intel® Advanced Network Services team or VLAN to an LBFO team, because adding the port can cause system instability.

Instructions for setting up Microsoft Teaming in Server 2012 and later
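
The link above covers the Microsoft-native (LBFO) setup in detail. As a minimal sketch, an LBFO team is created with the built-in NetLbfo cmdlets on Windows Server 2012 and later; the team and adapter names are hypothetical, and ports that belong to an Intel® Advanced Network Services team or VLAN must not be used.

# Windows Server 2012 and later. Team and adapter names are hypothetical.
# Never use ports that are already members of an Intel ANS team or VLAN.
New-NetLbfoTeam -Name 'Team1' -TeamMembers 'NIC1','NIC2' -TeamingMode SwitchIndependent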

If you use an Intel® Advanced Network Services team member or VLAN in an LBFO team, use the following procedure to restore your configuration:

  1. Reboot the server.
  2. Remove the LBFO team. Even though LBFO team creation failed, after a reboot Server Manager reports that LBFO is enabled, and the LBFO interface is present in the NIC Teaming GUI.
  3. Remove the Intel® Advanced Network Services teams and VLANs involved in the LBFO team and recreate them. This step is optional but strongly recommended. All bindings are restored when the LBFO team is removed.
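
On Windows Server 2012 and later, steps 2 and 3 can also be checked and performed from PowerShell with the built-in NetLbfo cmdlets; the team name below is hypothetical, and the Intel® Advanced Network Services teams and VLANs themselves are recreated in Intel® PROSet.

# After the reboot in step 1:
Get-NetLbfoTeam                      # confirm the LBFO team still appears
Remove-NetLbfoTeam -Name 'Team1'     # step 2: remove the LBFO team (hypothetical name)
# Step 3: remove and recreate the affected Intel ANS teams and VLANs in Intel PROSet.
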
Note If you add an Intel® Active Management Technology (Intel® AMT) enabled port to an LBFO team, don't set the port to Standby in the LBFO team. If you set the port to standby, you can lose Intel AMT functionality.
Teaming features

Teaming features include failover protection, increased bandwidth throughput aggregation, and balancing of traffic among team members. Teaming modes are Adapter Fault Tolerance (AFT), Switch Fault Tolerance (SFT), Adaptive Load Balancing (ALB), Receive Load Balancing (RLB), Virtual Machine Load Balancing (VMLB), Static Link Aggregation (SLA), and IEEE 802.3ad Dynamic Link Aggregation.

Features available using Intel® Advanced Network Services include:

  • Fault Tolerance
    If the primary adapter, its cabling, or the link partner fails, Intel® Advanced Network Services uses one or more secondary adapters to take over. Fault tolerance is designed to guarantee server availability to the network.
  • Link Aggregation
    Combines multiple adapters into a single channel to provide greater bandwidth. Bandwidth increase is only available when connecting to multiple destination addresses. ALB mode provides aggregation for transmission only while RLB, SLA, and IEEE 802.3ad dynamic link aggregation modes provide aggregation in both directions. Link aggregation modes require switch support, while ALB and RLB modes can be used with any switch.
  • Load Balancing
    The distribution of the transmission and reception load among aggregated network adapters. An intelligent adaptive agent in the Intel® Advanced Network Services driver repeatedly analyzes the traffic flow from the server and distributes the packets based on destination addresses. In IEEE 802.3ad modes, the switch provides load balancing on incoming packets.
    Note Load Balancing in ALB mode can only occur on Layer 3 routed protocols (IP and NCP IPX). Load Balancing in RLB mode can only occur for TCP/IP. Non-routed protocols are transmitted only over the primary adapter.
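
The destination-address distribution described under Load Balancing can be pictured with a small sketch. This is purely illustrative, with a made-up function and hash; the actual Intel® Advanced Network Services agent is adaptive and more sophisticated.

# Illustrative only: map each destination address onto one team member port,
# so all traffic to a given destination stays on a single port.
function Get-TeamMemberIndex {
    param([string]$DestinationIp, [int]$MemberCount)
    $hash = 0
    foreach ($octet in $DestinationIp.Split('.')) { $hash = $hash * 31 + [int]$octet }
    return $hash % $MemberCount
}
Get-TeamMemberIndex -DestinationIp '192.168.1.10' -MemberCount 2    # traffic to .10 stays on one port
Get-TeamMemberIndex -DestinationIp '192.168.1.11' -MemberCount 2    # traffic to .11 may use the other
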
Teaming modes
  • Adapter Fault Tolerance (AFT)
    Allows mixed models and mixed connection speeds as long as there is at least one Intel® PRO server adapter in the team. A 'failed' primary adapter passes its MAC and Layer 3 address to the failover (secondary) adapter. All adapters in the team should be connected to the same hub or switch with Spanning Tree Protocol (STP) set to Off.
  • Switch Fault Tolerance (SFT)
    Uses two adapters connected to two switches. It provides a fault-tolerant network connection if the first adapter, its cabling, or the switch fails. Only two adapters can be assigned to an SFT team.
    Note
    • Don't put clients on the SFT team link partner switches. They don't pass traffic to the partner switch on failover.
    • Spanning Tree Protocol (STP) must be running on the network to make sure that loops are eliminated.
    • Turn off STP on the incoming ports of the switches directly connected to the adapters in the team, or configure ports for PortFast.
    • Only 802.3ad DYNAMIC mode allows failover between teams.

    [Diagram: Switch Fault Tolerance (SFT) team with Spanning Tree Protocol (STP)]

  • Adaptive Load Balancing (ALB)
    Offers increased network bandwidth by allowing transmission over two to eight ports to multiple destination addresses. It incorporates AFT. Only the primary adapter receives incoming traffic. Broadcasts/multicasts and non-routed protocols are only transmitted via the primary adapter in the team. The Intel® Advanced Network Services software load balances transmissions based on destination address and can be used with any switch. Simultaneous transmission only occurs to multiple destination addresses.

  • Receive Load Balancing (RLB)
    • Offers increased network bandwidth by allowing reception over two to eight ports from multiple addresses.
    • Can only be used in conjunction with ALB.
    • RLB is enabled by default when an ALB team is configured unless you're using Microsoft Hyper-V*.
    • RLB mode isn't compatible with Microsoft Hyper-V*. Use Virtual Machine Load Balancing mode if you want to balance both transmit and receive traffic.
    • Only the adapters connected at the fastest speed are used to load balance incoming TCP/IP traffic. Regardless of speed, the primary adapter receives all other RX traffic.
    • Can be used with any switch. Any failover increases network latency until ARPs are resent. Simultaneous reception only occurs from multiple clients.
  • Virtual Machine Load Balancing (VMLB)

    VMLB teaming mode was created specifically for use with Microsoft Hyper-V*. VMLB provides transmit and receive traffic load balancing across Virtual Machines bound to the team interface. The VMLB team also provides fault tolerance in the event of a switch port, cable, or adapter failure. This teaming type works with any switch.

    The driver analyzes the transmit and receive load on each member adapter and balances the traffic across member ports. In a VMLB team, each Virtual Machine is associated with one team member port for its TX and RX traffic.

    For example: If you have three virtual machines and two member ports, and if VM1 has twice as much traffic as the combination of VM2 and VM3, then VM1 is assigned to team member port 1, and VM2 and VM3 share team member port 2 (a conceptual sketch of this assignment appears after the list of modes below).

    If only one virtual NIC is bound to the team, or if Hyper-V is removed, then the VMLB team acts like an AFT team.

    Note
    • VMLB doesn't load balance non-routed protocols such as NetBEUI and some IPX* traffic.
    • VMLB supports from two to eight ports per team.
    • You can create a VMLB team with mixed-speed adapters. The load is balanced according to the lowest common denominator of adapter capabilities, and the bandwidth of the channel.
  • Link Aggregation (LA), Cisco* Fast EtherChannel (FEC), and Gigabit EtherChannel (GEC)

    • These modes have been replaced by Static Link Aggregation (SLA) mode.
    • See IEEE 802.3ad Static Link Aggregation mode below.
  • IEEE 802.3ad
    This standard has been implemented in two ways:

    • Static Link Aggregation (SLA):
      • Equivalent to EtherChannel or Link Aggregation.
      • Must be used with an 802.3ad, FEC/GEC, or Link Aggregation capable switch.
    • DYNAMIC mode
      • Requires 802.3ad DYNAMIC capable switches.
      • Active aggregators in software determine team membership between the switch and the Intel® Advanced Network Services software (or between switches).
      • There is a maximum of two aggregators per server and you must choose either maximum bandwidth or maximum adapters.
    • Both 802.3ad modes include adapter fault tolerance and load balancing capabilities. However, in DYNAMIC mode load balancing is within only one team at a time.

    • For more information, see IEEE 802.3ad Link Aggregation (LAG): what it is, and what it is not.
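
The Virtual Machine Load Balancing example above (three virtual machines, two member ports) can be sketched as a simple least-loaded-port assignment. This is illustrative only, with hypothetical VM names and traffic weights; it is not the driver's actual algorithm.

# Illustrative only: assign each VM to the member port with the least load so far.
$ports = @([pscustomobject]@{ Name = 'Port1'; Load = 0 },
           [pscustomobject]@{ Name = 'Port2'; Load = 0 })
$vms   = @([pscustomobject]@{ Name = 'VM1'; Traffic = 200 },   # VM1 carries twice the traffic
           [pscustomobject]@{ Name = 'VM2'; Traffic = 60  },   # of VM2 and VM3 combined
           [pscustomobject]@{ Name = 'VM3'; Traffic = 40  })
foreach ($vm in ($vms | Sort-Object Traffic -Descending)) {
    $target = $ports | Sort-Object Load | Select-Object -First 1
    $target.Load += $vm.Traffic
    '{0} -> {1}' -f $vm.Name, $target.Name
}
# Result: VM1 -> Port1, while VM2 and VM3 share Port2.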

Available features and modes:

Features                    | AFT | ALB | RLB     | SLA   | Dynamic 802.3ad
Fault Tolerance             | X   | X   | X       | X     | X
Link Aggregation            |     | X   | X       | X     | X
Load Balancing              |     | Tx  | Tx/Rx   | Tx/Rx | Tx/Rx
Layer 3 Address Aggregation |     | X   | IP only | X     | X
Layer 2 Address Aggregation |     |     |         | X     | X
Mixed Speed Adapters*       | X   | X   | X       |       | X
  • You can mix different adapter types for any mode, but you must run all the adapters in the team at the same speed when using Link Aggregation mode. Mixed-speed connections are possible in AFT, ALB, RLB, SFT, and 802.3ad modes.
  • Multi-vendor teaming (MVT) is applicable to all modes in Microsoft Windows.
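
If the Intel® PROSet PowerShell module is installed, the teaming cmdlets and the team-mode values they accept can be listed directly instead of read from the table; the module name IntelNetCmdlets is an assumption to verify on your system.

# Module name is an assumption; adjust it if your PROSet release differs.
Get-Command -Module IntelNetCmdlets              # list the Intel teaming cmdlets
Get-Help New-IntelNetTeam -Parameter TeamMode    # show the accepted team modes
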
Settings (roles)

For AFT, SFT, ALB, and RLB modes you can choose a primary and secondary role for selected adapters.

  • The primary adapter is the adapter that carries the most traffic.
    • With AFT and SFT, it's the only adapter used until that link fails.
    • With ALB and non-routable protocols (anything other than IP or Novell IPX), it's the only adapter used. It's also the only adapter used for broadcast and multicast traffic.
    • With RLB, all traffic other than IP traffic is passed on the primary adapter, regardless of its speed.
    • If you set the primary adapter yourself instead of letting the software choose it, the team fails back to the chosen primary adapter once it is active again.
      Note When a primary is removed from a team, its MAC address stays with the team until the server is rebooted. Don't add the primary back on that network until the server it was removed from is rebooted.
  • The secondary adapter becomes the primary (if possible) at failure of the primary, or its cable or link partner.
  • Multi-vendor teaming (MVT) requires the Intel adapter to be set as the primary adapter of the team.
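
To see which teamed ports currently have link, and therefore which one can act as primary, the link state can be checked with the built-in NetAdapter cmdlets (Windows 8 / Server 2012 and later); the adapter names below are hypothetical.

# Adapter names are hypothetical; substitute the ports in your team.
Get-NetAdapter -Name 'Ethernet 1','Ethernet 2' |
    Select-Object Name, Status, LinkSpeed, MacAddress | Format-Table -AutoSize
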
Test switch configuration

A utility in Intel® PROSet for Windows* Device Manager on the Advanced Settings Team page allows the Intel® Advanced Network Services software to query the switch partner for configuration settings. If the switch is configured differently than necessary for the team mode, a troubleshooting page lists possible corrective actions. When you run this test, the team temporarily loses network connectivity.

See Intel PROSet for Windows Device Manager Help for limitations.

Implementation considerations, including throughput and resource issues
  • You can use static IP addresses for most servers, including a server with a team of NICs. You can also use DHCP for Server 2012 configuration. If you have no DHCP, you need to manually assign IP addresses to your teams when they're created (see the sketch after this list).
  • For all team types except SFT, disable Spanning Tree Protocol (STP) on switch ports connected to teamed adapters. Disabling STP prevents data loss when the primary adapter is returned to service (failback). You can also configure an activation delay on the adapters to prevent data loss when STP is used. Set the activation delay on the advanced tab of team properties.
  • Not all team types are available on all operating systems and all adapters.
  • Hot Plug operations with non-Intel adapters that are part of a team cause system instability. Restart the system or reload the team after doing Hot Plug operations with a team that includes a non-Intel adapter.
  • You can add Intel® Active Management Technology (Intel® AMT) enabled devices to Adapter Fault Tolerance (AFT), Switch Fault Tolerance (SFT), and Adaptive Load Balancing (ALB) teams. All other team types aren't supported. The Intel® AMT enabled device must be designated as the primary adapter for the team.
  • Teams display as Virtual Adapters in Windows. Use the remove team option in Intel PROSet for Windows* Device Manager to disable or remove virtual adapters. Deleting virtual teams directly in Windows Device Manager can cause severe consequences.
  • To avoid repeated unnecessary failovers, disable Spanning Tree Protocol (STP) for all modes except SFT.
  • Some operating systems require a reboot after any system configuration change.
  • Configure team member features consistently; otherwise failover and team functionality can be affected, with possible severe consequences.
  • Virtual Adapters require memory resources beyond the physical adapters. The physical adapter buffers or descriptors might need to be increased when set into a team. If the system is used more heavily, consider tuning both the base adapter and the virtual adapter settings for either RX or TX traffic.
  • You can have up to eight ports in a team. You can mix onboard and PCIe ports, but environmental factors, such as OS, CPUs, RAM, bus, or switch capabilities, can limit the benefits of more adapters and determine your total throughput capability. SFT can only use two adapters.
  • For Link Aggregation/FEC/GEC/802.3ad, you must match the switch capabilities for aggregation. Only use 802.3ad DYNAMIC mode with switches capable of DYNAMIC 3ad active aggregation.
  • For AFT and SFT modes, only one adapter is active at a time. With ALB, only one adapter receives while all adapters transmit IP or NetWare* IPX packets, but never to the same destination address simultaneously.
  • Throughput is always higher to multiple addresses than to a single address, regardless of the number of adapters.
  • In Windows NT* 4.0, there's a timer for each adapter to prevent a non-working teamed adapter from holding up the boot to the desktop. If you are using mixed-speed adapters (Intel® PRO/100 with PRO/1000), and using teaming or a large number of VLANs, you can encounter a load time longer than the timer's limit. If this happens, disable the timer for each adapter in the team in the registry under the DWORD BindTimerTimeout:
<e100XbN>\parameters\iansprotocol\BindTimerTimeout
Set the value to 0, where N is the card instance number.

You must repeat the steps whenever Intel PROSet for Windows Device Manager changes.

  • To ensure optimal functionality when using Multi-vendor Teaming (MVT), make sure drivers are up to date on both your Intel adapter and your non-Intel adapter. Contact the manufacturer of the non-Intel adapter to see whether they support teaming with other manufacturers.
  • Jumbo Frames, Offloading Engines, and QoS priority tagging aren't supported in Multi-vendor Teaming (MVT).
  • VLAN over Multi-vendor Teaming (MVT) isn't supported on Windows* XP/2003.
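
For the static-addressing point at the top of this list, here is a minimal sketch of assigning an address to a team's virtual interface with the built-in NetTCPIP and DnsClient cmdlets (Windows 8 / Server 2012 and later); the interface alias and all addresses are hypothetical.

# Interface alias and addresses are hypothetical examples.
New-NetIPAddress -InterfaceAlias 'Team1' -IPAddress 192.168.10.20 -PrefixLength 24 -DefaultGateway 192.168.10.1
Set-DnsClientServerAddress -InterfaceAlias 'Team1' -ServerAddresses 192.168.10.2
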
Linux* Ethernet Bonding (channel bonding or teaming)

Channel bonding documentation can be found in the Linux* kernel source: the Linux Ethernet Bonding Driver HOWTO.

 

Related topics
Intel® Advanced Network Services Advanced Settings for Teams
Setting up NIC Teaming for Windows Server 2012*/2012 R2*/2016*/2019*
How-to Video: Create Teams in Windows® 10 using PowerShell*
Adapter Testing Not Supported on Systems Carrying iSCSI Traffic
IEEE 802.3AD Switch Configurations
Windows Server 2012* NIC Teaming in Microsoft TechNet Library

Related Products

This article applies to 174 products.
Intel® Ethernet Controller X540 Intel® Ethernet Controller X540-AT2 Intel® Ethernet Controller X550 Intel® Ethernet Controller X550-AT2 Intel® Ethernet Controller X550-BT2 Intel® Ethernet Controller X550-AT Intel® Ethernet Converged Network Adapter X520 Series Intel® Ethernet Converged Network Adapter X520-T2 Intel® Ethernet Converged Network Adapter X520-SR1 Intel® Ethernet Converged Network Adapter X520-SR2 Intel® Ethernet Converged Network Adapter X520-LR1 Intel® Ethernet Converged Network Adapter X520-DA2 Intel® Ethernet Converged Network Adapter X520-QDA1 Intel® Ethernet Converged Network Adapter X540 Intel® Ethernet Converged Network Adapter X540-T1 Intel® Ethernet Converged Network Adapter X540-T2 Intel® Ethernet Converged Network Adapter X550 Intel® Ethernet Converged Network Adapter X550-T1 Intel® Ethernet Converged Network Adapter X550-T2 Intel® Ethernet Server Adapter X520 Intel® Ethernet Server Adapter X520-DA2 for Open Compute Project Intel® Ethernet Server Adapter X520-DA1 for Open Compute Project Intel® Ethernet Server Adapter X520-DA2 Intel® Gigabit VT Quad Port Server Adapter Series Intel® Gigabit ET Server Adapter Series Intel® Gigabit ET2 Quad Port Server Adapter Intel® Gigabit ET Dual Port Server Adapter Intel® Gigabit ET Quad Port Server Adapter Intel® Gigabit EF Server Adapter Series Intel® Gigabit EF Dual Port Server Adapter Intel® Ethernet Server Adapter I350 Series Intel® Ethernet Server Adapter I350-T2 Intel® Ethernet Server Adapter I350-F4 Intel® Ethernet Server Adapter I350-T4 Intel® Ethernet Server Adapter I350-F2 Intel® Ethernet Server Adapter I340 Series Intel® Ethernet Server Adapter I340-T4 Intel® Ethernet Server Adapter I340-F4 Intel® Gigabit CT Desktop Adapter Series Intel® Gigabit CT Desktop Adapter Intel® Ethernet Server Adapter I210 Series Intel® Ethernet Server Adapter I210-T1 Intel® Ethernet Network Adapter I350 Series Intel® Ethernet Network Adapter I350-T4 for OCP 3.0 Intel® 82579 Gigabit Ethernet Controller Intel® 82579V Gigabit Ethernet PHY Intel® 82579LM Gigabit Ethernet PHY Intel® Ethernet Controller I226 Series Intel® Ethernet Controller I225 Series Intel® Ethernet Connection I219 Series Intel® 82577 Gigabit Ethernet PHY Intel® 82577LM Gigabit Ethernet PHY Intel® 82577LC Gigabit Ethernet PHY Intel® 82578 Gigabit Ethernet PHY Intel® 82578DM Gigabit Ethernet PHY Intel® 82578DC Gigabit Ethernet PHY Intel® 82567 Gigabit Ethernet Controller Intel® PRO/10GbE SR Server Adapter Series Intel® PRO/10GbE LR Server Adapter Series Intel® PRO/1000 XF Server Adapter Series Intel® PRO/1000 T Server Adapter Series Intel® PRO/100B Adapter Series Intel® PRO/1000 XT Server Adapter Series Intel® PRO/1000 PT Desktop Adapter Series Intel® PRO/1000 T Desktop Adapter Series Intel® PRO/1000 PT Server Adapter Series Intel® PRO/1000 PT Server Adapter Intel® PRO/1000 PT Dual Port Server Adapter Intel® PRO/1000 PT Quad Port Server Adapter Intel® 10 Gigabit AF Server Adapter Series Intel® 10 Gigabit AF DA Dual Port Server Adapter Intel® 10 Gigabit CX4 Server Adapter Series Intel® 10 Gigabit CX4 Dual Port Server Adapter Intel® 10 Gigabit AT Server Adapter Series Intel® 10 Gigabit AT2 Server Adapter Intel® 10 Gigabit XF Server Adapter Series Intel® 10 Gigabit XF SR Dual Port Server Adapter Intel® 10 Gigabit XF SR Server Adapter Intel® 10 Gigabit XF LR Server Adapter Intel® PRO/100+ Management Adapter Series Intel® PRO/100 VE Adapter Series Intel® PRO/100 VE Network Connection Intel® PRO/100 Smart Adapter Series Intel® PRO/100 VM Adapter Series Intel® PRO/100 VM Network 
Connection Intel® PRO/100 S Management Adapter Series Intel® PRO/100 S Desktop Adapter Series Intel® PRO/100 M Desktop Adapter Series Intel® PRO/100 ISA Adapter Series Intel® PRO/1000 PM Adapter Series Intel® PRO/1000 PM Network Connection Intel® PRO/1000 MT Desktop Adapter Series Intel® PRO/1000 MF Server Adapter Series Intel® PRO/1000 PF Server Adapter Series Intel® PRO/1000 PF Quad Port Server Adapter Intel® PRO/1000 PF Dual Port Server Adapter Intel® PRO/1000 PF Server Adapter Intel® PRO/1000 MT Server Adapter Series Intel® PRO/1000 MT Server Adapter Intel® PRO/1000 Gigabit Server Adapter Series Intel® PRO/1000 F Server Adapter Series Intel® PRO/1000 GT Server Adapter Series Intel® PRO/1000 GT Quad Port Server Adapter Intel® PRO/1000 GT Desktop Adapter Series Intel® PRO/1000 GT Desktop Adapter Intel® PRO/1000 CT Adapter Series Intel® PRO/1000 CT Network Connection Xircom® Networking & Modem Adapters Intel® 82566 Gigabit Ethernet PHY Intel® 82566MC Gigabit Ethernet PHY Intel® 82566MM Gigabit Ethernet PHY Intel® 82566DC Gigabit Ethernet PHY Intel® 82566DM Gigabit Ethernet PHY NetEffect™ Ethernet Adapters NetEffect™ Ethernet Server Cluster Adapter SFP+ SR2 NetEffect™ Ethernet Server Cluster Adapter SFP+ SR NetEffect™ Ethernet Server Cluster Adapter DA2 NetEffect™ Ethernet Server Cluster Adapter DA NetEffect™ Ethernet Server Cluster Adapter CX4 NetEffect™ Ethernet Mezzanine Cluster Adapter -I NetEffect™ Ethernet Mezzanine Cluster Adapter -H Legacy Intel® Killer™ Ethernet Products

Discontinued Products

Intel® Gigabit VT Quad Port Server Adapter Intel® PRO/10GbE SR Server Adapter Intel® PRO/10GbE LR Server Adapter Intel® PRO/1000 XF Server Adapter Intel® PRO/1000 T Server Adapter Intel® PRO/100B Adapter Intel® PRO/1000 XT Low Profile Server Adapter Intel® PRO/1000 XT Server Adapter Intel® PRO/1000 PT Desktop Adapter Intel® PRO/1000 T Desktop Adapter Intel® PRO/1000 PT Quad Port Low Profile Server Adapter Intel® PRO/10GbE CX4 Server Adapter Intel® 10 Gigabit AT Server Adapter Intel® PRO/100+ Server Adapter Series Intel® PRO/100+ Server Adapter Intel® PRO/100+ Management Adapter Intel® PRO/100+ Adapter Intel® PRO/100 VE Desktop Adapter Intel® PRO/100 Smart Adapter Intel® PRO/100+ Dual Port Server Adapter Series Intel® PRO/100+ Dual Port Server Adapter Intel® PRO/100 S Management Adapter Intel® PRO/100 S Desktop Adapter Intel® PRO/100 S Server Adapter Series Intel® PRO/100 S Server Adapter Intel® PRO/100 S Dual Port Server Adapter Intel® PRO/100 Mobile Adapter Series Intel® PRO/100 LAN+Modem56 CardBus Adapter Intel® PRO/100 LAN+Modem56 PC Card Adapter Intel® PRO/100 PC Card Adapter Intel® PRO/100 CardBus Adapter Intel® PRO/100 CardBus II Adapter Intel® PRO/100 LAN+Modem56 CardBus II Adapter Intel® PRO/100 S Combo Mobile Adapter Intel® PRO/100 S Mobile Adapter Intel® PRO/100 SR Combo Mobile Adapter Intel® PRO/100 SR Mobile Adapter Intel® PRO/100 M Desktop Adapter Intel® PRO/100 ISA Adapter Intel® PRO/100 Intelligent Server Adapter Series Intel® PRO/100 Intelligent Server Adapter Intel® NetportExpress Print Servers Intel® NetportExpress™ PRO and PRO/100 Print Servers Intel® NetportExpress™ 10 and 10/100 Print Servers Intel® PRO/1000 MT Desktop Adapter Intel® PRO/1000 MF Dual Port Server Adapter Intel® PRO/1000 MF Server Adapter Intel® PRO/1000 MF Server Adapter (LX) Intel® PRO/1000 MT Dual Port Server Adapter Intel® PRO/1000 MT Quad Port Server Adapter Intel® PRO/1000 Gigabit Server Adapter Intel® PRO/1000 F Server Adapter