Introduction
This tutorial shows how to install the FD.io Vector Packet Processing (VPP) package and build a packet forwarding engine on a bare metal Intel® Xeon® processor server. Two additional Intel Xeon processor platform systems are used to connect to the VPP host and pass traffic using iperf3* and Cisco's TRex* Realistic Traffic Generator. Intel® 40 Gigabit Ethernet (GbE) network interface cards (NICs) are used to connect the hosts.
Vector Packet Processing (VPP) Overview
VPP is open source, high-performance packet processing software. It leverages the Data Plane Development Kit (DPDK), which provides fast packet processing libraries and user space drivers, to take advantage of fast I/O. DPDK receives and sends packets with a minimum number of CPU cycles by bypassing the kernel and using user space poll mode drivers. Details on how to configure DPDK can be found in the DPDK documentation.
VPP can be used as a standalone product or as an extended data plane product. It is highly efficient because it scales well on modern Intel® processors and handles packet processing in batches, called vectors, of up to 256 packets at a time. This approach helps maximize instruction cache hits.
The VPP platform consists of a set of nodes in a directed graph called a packet processing graph. Each node provides a specific network function to packets, and each directed edge indicates the next network function that will handle packets. Instead of processing one packet at a time as the kernel does, the first node in the packet processing graph polls for a burst of incoming packets from a network interface; it collects similar packets into a frame (or vector) and passes the frame to the next node indicated by the directed edge. The next node takes the frame of packets, processes them based on the functionality it provides, passes the frame to the next node, and so on. This process repeats until the last node gets the frame, processes all the packets in it based on the functionality it provides, and outputs them on a network interface. When a frame of packets is handled by a node, only the first packet in the frame needs to load the CPU's instructions into the cache; the rest of the packets benefit from the instructions already in the cache. The VPP architecture is flexible, allowing users to create new nodes, insert them into the packet processing graph, and rearrange the graph.
Like DPDK, VPP operates in user space. VPP can be used on bare metal, virtual machines (VMs), or containers.
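Once VPP is up (as configured later in this tutorial), you can inspect this packet processing graph from the VPP CLI. A minimal sketch, assuming the CLI is reachable through vppctl as in the examples below:
csp2s22c03$ sudo vppctl
vpp# show vlib graph
vpp# show run
Here, show vlib graph lists each graph node along with its next-node edges, and show run reports how many vectors and packets each node has processed once traffic is flowing.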
Build and Install VPP
In this tutorial, three systems named csp2s22c03, csp2s22c04, and net2s22c05 are used. The system csp2s22c03, with VPP installed, is used to forward packets, and the systems csp2s22c04 and net2s22c05 are used to pass traffic. All three systems are equipped with Intel® Xeon® processor E5-2699 v4 @ 2.20 GHz, two sockets with 22 cores per socket, and run 64-bit Ubuntu* 16.04 LTS. The Intel® Ethernet Converged Network Adapter XL710 10/40 GbE is used to connect these systems. Refer to Figure 1 and Figure 2 for configuration diagrams.
Build the FD.io VPP Binary
The instructions in this section describe how to build the VPP package from FD.io. Skip to the next section if you’d like to use the Debian* VPP packages instead.
Using an account with administrative privileges on csp2s22c03, we download a stable version of VPP (version 17.04 is used in this tutorial) and navigate to the build-root directory to build the image:
csp2s22c03$ git clone -b stable/1704 https://gerrit.fd.io/r/vpp fdio.1704
csp2s22c03$ cd fdio.1704/
csp2s22c03$ make install-dep
csp2s22c03$ make bootstrap
csp2s22c03$ cd build-root
csp2s22c03$ source ./path_setup
csp2s22c03$ make PLATFORM=vpp TAG=vpp vpp-install
To build the image with debug symbols:
csp2s22c03$ make PLATFORM=vpp TAG=vpp_debug vpp-install
After the build completes, you can run the VPP binary from the fdio.1704 directory using the src/vpp/conf/startup.conf configuration file:
csp2s22c03$ cd ..
csp2s22c03$ sudo build-root/build-vpp-native/vpp/bin/vpp -c src/vpp/conf/startup.conf
Build the Debian* VPP Packages
If you prefer to use the Debian VPP packages, follow these instructions to build them:
csp2s22c03$ make PLATFORM=vpp TAG=vpp install-deb
csp2s22c03:~/download/fdio.1704/build-root$ ls -l *.deb
-rw-r--r-- 1 plse plse 1667422 Feb 12 16:41 vpp_17.04.2-2~ga8f93f8_amd64.deb
-rw-r--r-- 1 plse plse 2329572 Feb 12 16:41 vpp-api-java_17.04.2-2~ga8f93f8_amd64.deb
-rw-r--r-- 1 plse plse 23374 Feb 12 16:41 vpp-api-lua_17.04.2-2~ga8f93f8_amd64.deb
-rw-r--r-- 1 plse plse 8262 Feb 12 16:41 vpp-api-python_17.04.2-2~ga8f93f8_amd64.deb
-rw-r--r-- 1 plse plse 44175468 Feb 12 16:41 vpp-dbg_17.04.2-2~ga8f93f8_amd64.deb
-rw-r--r-- 1 plse plse 433788 Feb 12 16:41 vpp-dev_17.04.2-2~ga8f93f8_amd64.deb
-rw-r--r-- 1 plse plse 1573956 Feb 12 16:41 vpp-lib_17.04.2-2~ga8f93f8_amd64.deb
-rw-r--r-- 1 plse plse 1359024 Feb 12 16:41 vpp-plugins_17.04.2-2~ga8f93f8_amd64.deb
In this output:
- vpp is the packet engine
- vpp-api-java is the Java* binding module
- vpp-api-lua is the Lua* binding module
- vpp-api-python is the Python* binding module
- vpp-dbg is the debug symbol version of VPP
- vpp-dev is the development support (headers and libraries)
- vpp-lib is the VPP runtime library
- vpp-plugins is the plugin module
Next, install the Debian VPP packages. At a minimum, you should install the vpp, vpp-lib, and vpp-plugins packages. We install them on the machine csp2s22c03:
csp2s22c03$ apt list --installed | grep vpp
csp2s22c03$ sudo dpkg -i vpp_17.04.2-2~ga8f93f8_amd64.deb vpp-lib_17.04.2-2~ga8f93f8_amd64.deb vpp-plugins_17.04.2-2~ga8f93f8_amd64.deb
Verify that the VPP packages are installed successfully:
csp2s22c03$ apt list --installed | grep vpp
vpp/now 17.04.2-2~ga8f93f8 amd64 [installed,upgradable to: 18.01.1-release]
vpp-lib/now 17.04.2-2~ga8f93f8 amd64 [installed,upgradable to: 18.01.1-release]
vpp-plugins/now 17.04.2-2~ga8f93f8 amd64 [installed,upgradable to: 18.01.1-release]
Configure VPP
During installation, two configuration files are created: /etc/sysctl.d/80-vpp.conf and /etc/vpp/startup.conf. The /etc/sysctl.d/80-vpp.conf configuration file is used to set up huge pages. The /etc/vpp/startup.conf configuration file is used to start VPP.
Configure huge pages
In the /etc/sysctl.d/80-vpp.conf configuration file, set the parameters as follows: the number of 2 MB huge pages, vm.nr_hugepages, is set to 4096; vm.max_map_count is set to 9216 (at least twice vm.nr_hugepages); and the shared memory maximum, kernel.shmmax, is set to 8,589,934,592 (4096 * 2 * 1024 * 1024).
csp2s22c03$ cat /etc/sysctl.d/80-vpp.conf
# Number of 2MB hugepages desired
vm.nr_hugepages=4096
# Must be greater than or equal to (2 * vm.nr_hugepages).
vm.max_map_count=9216
# All groups allowed to access hugepages
vm.hugetlb_shm_group=0
# Shared Memory Max must be greator or equal to the total size of hugepages.
# For 2MB pages, TotalHugepageSize = vm.nr_hugepages * 2 * 1024 * 1024
# If the existing kernel.shmmax setting (cat /sys/proc/kernel/shmmax)
# is greater than the calculated TotalHugepageSize then set this parameter
# to current shmmax value.
kernel.shmmax=8589934592
Apply these memory settings to the system and verify the huge pages:
csp2s22c03$ sudo sysctl -p /etc/sysctl.d/80-vpp.conf
vm.nr_hugepages = 4096
vm.max_map_count = 9216
vm.hugetlb_shm_group = 0
kernel.shmmax = 8589934592
csp2s22c03$ cat /proc/meminfo
MemTotal: 131912940 kB
MemFree: 116871136 kB
MemAvailable: 121101956 kB
...............................
HugePages_Total: 4096
HugePages_Free: 3840
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
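A quicker check that filters only the hugepage counters from /proc/meminfo:
csp2s22c03$ grep -i huge /proc/meminfo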
Configure startup.conf
In the /etc/vpp/startup.conf configuration file, the keyword interactive is added to enable the VPP command-line interface (CLI). Also, four worker threads are configured to run on cores 2, 3, 22, and 23, with the main thread on core 1. Note that you can specify the NICs to use in this configuration file, or you can specify them later, as this exercise shows. The modified /etc/vpp/startup.conf configuration file is shown below.
csp2s22c03$ cat /etc/vpp/startup.conf
unix {
nodaemon
log /tmp/vpp.log
full-coredump
interactive
}
api-trace {
on
}
api-segment {
gid vpp
}
cpu {
## In the VPP there is one main thread and optionally the user can create worker(s)
## The main thread and worker thread(s) can be pinned to CPU core(s) manually or automatically
## Manual pinning of thread(s) to CPU core(s)
## Set logical CPU core where main thread runs
main-core 1
## Set logical CPU core(s) where worker threads are running
corelist-workers 2-3,22-23
}
dpdk {
## Change default settings for all intefaces
# dev default {
## Number of receive queues, enables RSS
## Default is 1
# num-rx-queues 3
## Number of transmit queues, Default is equal
## to number of worker threads or 1 if no workers treads
# num-tx-queues 3
## Number of descriptors in transmit and receive rings
## increasing or reducing number can impact performance
## Default is 1024 for both rx and tx
# num-rx-desc 512
# num-tx-desc 512
## VLAN strip offload mode for interface
## Default is off
# vlan-strip-offload on
# }
## Whitelist specific interface by specifying PCI address
# dev 0000:02:00.0
## Whitelist specific interface by specifying PCI address and in
## addition specify custom parameters for this interface
# dev 0000:02:00.1 {
# num-rx-queues 2
# }
## Change UIO driver used by VPP, Options are: igb_uio, vfio-pci
## and uio_pci_generic (default)
# uio-driver vfio-pci
}
# Adjusting the plugin path depending on where the VPP plugins are:
plugins
{
path /usr/lib/vpp_plugins
}
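If you prefer to dedicate specific NICs to VPP up front rather than binding them later, you can uncomment dev entries in the dpdk stanza. A sketch for the two XL710 ports used later in this tutorial (PCI addresses 0000:82:00.0 and 0000:82:00.1; adjust them to match your own lshw or dpdk-devbind output):
dpdk {
  dev 0000:82:00.0
  dev 0000:82:00.1
  ## UIO driver to bind the devices to; uio_pci_generic is the default
  uio-driver uio_pci_generic
}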
Run VPP as a Packet Processing Engine
In this section, four examples of running VPP are shown. In the first two examples, the iperf3 tool is used to generate traffic, and in the last two examples the TRex Realistic Traffic Generator is used. For comparison purposes, the first example shows packet forwarding using ordinary kernel IP forwarding, and the second example shows packet forwarding using VPP.
Example 1: Using Kernel Packet Forwarding with iperf3*
In this test, 40 GbE Intel Ethernet Network Adapters are used to connect the three systems. Figure 1 illustrates this configuration.
Figure 1 – VPP runs on a host that connects to two other systems via 40 GbE NICs.
For comparison purposes, in the first test we configure kernel forwarding on csp2s22c03 and use the iperf3 tool to measure network bandwidth between csp2s22c03 and net2s22c05. In the second test, we start the VPP engine on csp2s22c03 instead of using kernel forwarding.
On csp2s22c03, we configure the system to have the addresses 10.10.1.1/24 and 10.10.2.1/24 on the two 40-GbE NICs. To find all network interfaces available on the system, use the lshw Linux* command to list all network interfaces and the corresponding slots [0000:xx:yy.z]. For example, the 40-GbE interfaces are ens802f0 and ens802f1.
csp2s22c03$ sudo lshw -class network -businfo
Bus info Device Class Description
========================================================
pci@0000:03:00.0 enp3s0f0 network Ethernet Controller 10-Gigabit X540
pci@0000:03:00.1 enp3s0f1 network Ethernet Controller 10-Gigabit X540
pci@0000:82:00.0 ens802f0 network Ethernet Controller XL710 for 40GbE
pci@0000:82:00.1 ens802f1 network Ethernet Controller XL710 for 40GbE
pci@0000:82:00.0 ens802f0d1 network Ethernet interface
pci@0000:82:00.1 ens802f1d1 network Ethernet interface
Configure the system to have 10.10.1.1 and 10.10.2.1 on the two 40-GbE NICs ens802f0 and ens802f1, respectively.
csp2s22c03$ sudo ip addr add 10.10.1.1/24 dev ens802f0
csp2s22c03$ sudo ip link set dev ens802f0 up
csp2s22c03$ sudo ip addr add 10.10.2.1/24 dev ens802f1
csp2s22c03$ sudo ip link set dev ens802f1 up
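Optionally, confirm that the addresses took effect before checking the routing table:
csp2s22c03$ ip addr show ens802f0
csp2s22c03$ ip addr show ens802f1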
List the route table:
csp2s22c03$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default jf111-ldr1a-530 0.0.0.0 UG 0 0 0 enp3s0f1
default 192.168.0.50 0.0.0.0 UG 100 0 0 enp3s0f0
10.10.1.0 * 255.255.255.0 U 0 0 0 ens802f0
10.10.2.0 * 255.255.255.0 U 0 0 0 ens802f1
10.23.3.0 * 255.255.255.0 U 0 0 0 enp3s0f1
link-local * 255.255.0.0 U 1000 0 0 enp3s0f1
192.168.0.0 * 255.255.255.0 U 100 0 0 enp3s0f0
csp2s22c03$ ip route
default via 10.23.3.1 dev enp3s0f1
default via 192.168.0.50 dev enp3s0f0 proto static metric 100
10.10.1.0/24 dev ens802f0 proto kernel scope link src 10.10.1.1
10.10.2.0/24 dev ens802f1 proto kernel scope link src 10.10.2.1
10.23.3.0/24 dev enp3s0f1 proto kernel scope link src 10.23.3.67
169.254.0.0/16 dev enp3s0f1 scope link metric 1000
192.168.0.0/24 dev enp3s0f0 proto kernel scope link src 192.168.0.142 metric 100
On csp2s22c04, we configure the system to have the address 10.10.1.2 and use the interface ens802 to route IP packets destined for 10.10.2.0/24. Use the lshw Linux command to list all network interfaces and the corresponding slots [0000:xx:yy.z]. For example, the interface ens802d1 (ens802) is connected to slot [82:00.0]:
csp2s22c04$ sudo lshw -class network -businfo
Bus info Device Class Description
=====================================================
pci@0000:03:00.0 enp3s0f0 network Ethernet Controller 10-Gigabit X540-AT2
pci@0000:03:00.1 enp3s0f1 network Ethernet Controller 10-Gigabit X540-AT2
pci@0000:82:00.0 ens802d1 network Ethernet Controller XL710 for 40GbE QSFP+
pci@0000:82:00.0 ens802 network Ethernet interface
For kernel forwarding, assign 10.10.1.2 to the interface ens802, and add a static route for 10.10.2.0/24:
csp2s22c04$ sudo ip addr add 10.10.1.2/24 dev ens802
csp2s22c04$ sudo ip link set dev ens802 up
csp2s22c04$ sudo ip route add 10.10.2.0/24 via 10.10.1.1
csp2s22c04$ ifconfig
enp3s0f0 Link encap:Ethernet HWaddr a4:bf:01:00:92:73
inet addr:10.23.3.62 Bcast:10.23.3.255 Mask:255.255.255.0
inet6 addr: fe80::a6bf:1ff:fe00:9273/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3411 errors:0 dropped:0 overruns:0 frame:0
TX packets:1179 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:262230 (262.2 KB) TX bytes:139975 (139.9 KB)
ens802 Link encap:Ethernet HWaddr 68:05:ca:2e:76:e0
inet addr:10.10.1.2 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::6a05:caff:fe2e:76e0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:40 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:5480 (5.4 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:31320 errors:0 dropped:0 overruns:0 frame:0
TX packets:31320 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:40301788 (40.3 MB) TX bytes:40301788 (40.3 MB)
After setting the route, we can ping from csp2s22c03 to csp2s22c04, and vice versa:
csp2s22c03$ ping 10.10.1.2 -c 3
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=64 time=0.122 ms
64 bytes from 10.10.1.2: icmp_seq=2 ttl=64 time=0.109 ms
64 bytes from 10.10.1.2: icmp_seq=3 ttl=64 time=0.120 ms
csp2s22c04$ ping 10.10.1.1 -c 3
PING 10.10.1.1 (10.10.1.1) 56(84) bytes of data.
64 bytes from 10.10.1.1: icmp_seq=1 ttl=64 time=0.158 ms
64 bytes from 10.10.1.1: icmp_seq=2 ttl=64 time=0.096 ms
64 bytes from 10.10.1.1: icmp_seq=3 ttl=64 time=0.102 ms
Similarly, on net2s22c05, we configure the system to have the address 10.10.2.2 and use the interface ens803f0 to route IP packets destined for 10.10.1.0/24. Use the lshw Linux command to list all network interfaces and the corresponding slots [0000:xx:yy.z]. For example, the interface ens803f0 is connected to slot [87:00.0]:
NET2S22C05$ sudo lshw -class network -businfo
Bus info Device Class Description
========================================================
pci@0000:03:00.0 enp3s0f0 network Ethernet Controller 10-Gigabit X540-AT2
pci@0000:03:00.1 enp3s0f1 network Ethernet Controller 10-Gigabit X540-AT2
pci@0000:81:00.0 ens787f0 network 82599 10 Gigabit TN Network Connection
pci@0000:81:00.1 ens787f1 network 82599 10 Gigabit TN Network Connection
pci@0000:87:00.0 ens803f0 network Ethernet Controller XL710 for 40GbE QSFP+
pci@0000:87:00.1 ens803f1 network Ethernet Controller XL710 for 40GbE QSFP+
For kernel forwarding, assign 10.10.2.2 to the interface ens803f0, and add a static route for 10.10.1.0/24:
NET2S22C05$ sudo ip addr add 10.10.2.2/24 dev ens803f0
NET2S22C05$ sudo ip link set dev ens803f0 up
NET2S22C05$ sudo ip r add 10.10.1.0/24 via 10.10.2.1
After setting the route, you can ping from csp2s22c03 to net2s22c05, and vice versa. However, in order to ping between net2s22c05 and csp2s22c04, kernel IP forwarding on csp2s22c03 must be enabled:
csp2s22c03$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
csp2s22c03$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
csp2s22c03$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
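The echo above only changes the running kernel. Equivalently, you can flip the switch with sysctl, and adding the line net.ipv4.ip_forward=1 to a file under /etc/sysctl.d/ would make it persistent across reboots (not required for this exercise):
csp2s22c03$ sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1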
If successful, verify that you can now ping between net2s22c05 and csp2s22c04:
NET2S22C05$ ping 10.10.1.2 -c 3
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=63 time=0.239 ms
64 bytes from 10.10.1.2: icmp_seq=2 ttl=63 time=0.224 ms
64 bytes from 10.10.1.2: icmp_seq=3 ttl=63 time=0.230 ms
We use the iperf3 utility to measure network bandwidth between hosts. In this test, we download the iperf3 utility on both net2s22c05 and csp2s22c04. On csp2s22c04, we start the iperf3 server with iperf3 -s, and then on net2s22c05 we start the iperf3 client to connect to the server.
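The server side needs no further options; iperf3 listens on its default port, 5201:
csp2s22c04$ iperf3 -s
The client run on net2s22c05 and its output: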
NET2S22C05$ iperf3 -c 10.10.1.2
Connecting to host 10.10.1.2, port 5201
[ 4] local 10.10.2.2 port 54074 connected to 10.10.1.2 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 936 MBytes 7.85 Gbits/sec 2120 447 KBytes
[ 4] 1.00-2.00 sec 952 MBytes 7.99 Gbits/sec 1491 611 KBytes
[ 4] 2.00-3.00 sec 949 MBytes 7.96 Gbits/sec 2309 604 KBytes
[ 4] 3.00-4.00 sec 965 MBytes 8.10 Gbits/sec 1786 571 KBytes
[ 4] 4.00-5.00 sec 945 MBytes 7.93 Gbits/sec 1984 424 KBytes
[ 4] 5.00-6.00 sec 946 MBytes 7.94 Gbits/sec 1764 611 KBytes
[ 4] 6.00-7.00 sec 979 MBytes 8.21 Gbits/sec 1499 655 KBytes
[ 4] 7.00-8.00 sec 980 MBytes 8.22 Gbits/sec 1182 867 KBytes
[ 4] 8.00-9.00 sec 1008 MBytes 8.45 Gbits/sec 945 625 KBytes
[ 4] 9.00-10.00 sec 1015 MBytes 8.51 Gbits/sec 1394 611 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 9.45 GBytes 8.12 Gbits/sec 16474 sender
[ 4] 0.00-10.00 sec 9.44 GBytes 8.11 Gbits/sec receiver
iperf Done.
Using kernel IP forwarding, iperf3 shows the network bandwidth is about 8.12 Gbits per second.
Example 2: Using VPP with iperf3
First, disable kernel IP forwarding on csp2s22c03 so that the host cannot use kernel forwarding (all the settings on net2s22c05 and csp2s22c04 remain unchanged):
csp2s22c03$ echo 0 | sudo tee /proc/sys/net/ipv4/ip_forward
0
csp2s22c03$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
You can use DPDK's device binding utility (./install-vpp-native/dpdk/sbin/dpdk-devbind) to list network devices and to bind/unbind them from specific drivers. The -s/--status flag shows the status of devices; the -b/--bind flag selects the driver to bind. The status of devices in our system indicates that the two 40-GbE XL710 devices are located at 82:00.0 and 82:00.1. Use these slots to bind the devices to the driver uio_pci_generic:
csp2s22c03$ ./install-vpp-native/dpdk/sbin/dpdk-devbind -s
Network devices using DPDK-compatible driver
============================================
<none>
Network devices using kernel driver
===================================
0000:03:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f0 drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*
0000:03:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f1 drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*
0000:82:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens802f0d1,ens802f0 drv=i40e unused=uio_pci_generic
0000:82:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens802f1d1,ens802f1 drv=i40e unused=uio_pci_generic
Other network devices
=====================
<none>
csp2s22c03$ sudo modprobe uio_pci_generic
csp2s22c03$ sudo ./install-vpp-native/dpdk/sbin/dpdk-devbind --bind uio_pci_generic 82:00.0
csp2s22c03$ sudo ./install-vpp-native/dpdk/sbin/dpdk-devbind --bind uio_pci_generic 82:00.1
csp2s22c03$ sudo ./install-vpp-native/dpdk/sbin/dpdk-devbind -s
Network devices using DPDK-compatible driver
============================================
0000:82:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=uio_pci_generic unused=i40e,vfio-pci
0000:82:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' drv=uio_pci_generic unused=i40e,vfio-pci
Network devices using kernel driver
===================================
0000:03:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f0 drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*
0000:03:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f1 drv=ixgbe unused=vfio-pci,uio_pci_generic *Active*
Start the VPP service, and verify that VPP is running:
csp2s22c03$ sudo service vpp start
csp2s22c03$ ps -ef | grep vpp
root 105655 1 98 17:34 ? 00:00:02 /usr/bin/vpp -c /etc/vpp/startup.conf
105675 105512 0 17:34 pts/4 00:00:00 grep --color=auto vpp
To access the VPP CLI, issue the command sudo vppctl. From the VPP CLI, list all interfaces that are bound to DPDK using the command show interface. VPP shows that the two 40-Gbps ports located at 82:0:0 and 82:0:1 are bound. Next, you need to assign IP addresses to those interfaces, bring them up, and verify:
vpp# set interface ip address FortyGigabitEthernet82/0/0 10.10.1.1/24
vpp# set interface ip address FortyGigabitEthernet82/0/1 10.10.2.1/24
vpp# set interface state FortyGigabitEthernet82/0/0 up
vpp# set interface state FortyGigabitEthernet82/0/1 up
vpp# show interface address
FortyGigabitEthernet82/0/0 (up):
10.10.1.1/24
FortyGigabitEthernet82/0/1 (up):
10.10.2.1/24
local0 (dn):
At this point VPP is operational. You can ping these interfaces from either net2s22c05 or csp2s22c04. Moreover, VPP can forward packets whose IP addresses are in 10.10.1.0/24 and 10.10.2.0/24, so you can ping between net2s22c05 and csp2s22c04. You can also run iperf3 as illustrated in the previous example; the result from running iperf3 between net2s22c05 and csp2s22c04 increases to 20.3 Gbits per second.
NET2S22C05$ iperf3 -c 10.10.1.2
Connecting to host 10.10.1.2, port 5201
[ 4] local 10.10.2.2 port 54078 connected to 10.10.1.2 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 2.02 GBytes 17.4 Gbits/sec 460 1.01 MBytes
[ 4] 1.00-2.00 sec 3.28 GBytes 28.2 Gbits/sec 0 1.53 MBytes
[ 4] 2.00-3.00 sec 2.38 GBytes 20.4 Gbits/sec 486 693 KBytes
[ 4] 3.00-4.00 sec 2.06 GBytes 17.7 Gbits/sec 1099 816 KBytes
[ 4] 4.00-5.00 sec 2.07 GBytes 17.8 Gbits/sec 614 1.04 MBytes
[ 4] 5.00-6.00 sec 2.25 GBytes 19.3 Gbits/sec 2869 716 KBytes
[ 4] 6.00-7.00 sec 2.26 GBytes 19.4 Gbits/sec 3321 683 KBytes
[ 4] 7.00-8.00 sec 2.33 GBytes 20.0 Gbits/sec 2322 594 KBytes
[ 4] 8.00-9.00 sec 2.28 GBytes 19.6 Gbits/sec 1690 1.23 MBytes
[ 4] 9.00-10.00 sec 2.73 GBytes 23.5 Gbits/sec 573 680 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 23.7 GBytes 20.3 Gbits/sec 13434 sender
[ 4] 0.00-10.00 sec 23.7 GBytes 20.3 Gbits/sec receiver
iperf Done.
The VPP CLI command show run displays the graph runtime statistics. Observe that the average vector per node is 6.76, which means that, on average, a vector of 6.76 packets is handled in each graph node.
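To view these statistics yourself, open the VPP CLI, reset the counters with clear run, rerun the iperf3 test, and then display the graph runtime statistics:
csp2s22c03$ sudo vppctl
vpp# clear run
vpp# show run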
Example 3: Using VPP with the TRex* Realistic Traffic Generator
In this example, we use only two systems, csp2s22c03 and net2s22c05. VPP is installed on csp2s22c03 and runs as a packet forwarding engine. On net2s22c05, TRex is used to generate both client- and server-side traffic. TRex is a high-performance traffic generator that leverages DPDK and runs in user space. Figure 2 illustrates this configuration.
VPP is set up on csp2s22c03 exactly as it was in Example 2. Only the setup on net2s22c05 is modified slightly to run the TRex preconfigured traffic files.
Figure 2 – The TRex traffic generator sends packets to the host that has VPP running.
To install TRex on net2s22c05, download and extract the TRex package:
NET2S22C05$ wget --no-cache http://trex-tgn.cisco.com/trex/release/latest
NET2S22C05$ tar -xzvf latest
NET2S22C05$ cd v2.37
NET2S22C05$ sudo ./dpdk_nic_bind.py -s
Network devices using DPDK-compatible driver
============================================
0000:87:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=vfio-pci unused=i40e
0000:87:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' drv=vfio-pci unused=i40e
Network devices using kernel driver
===================================
0000:03:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f0 drv=ixgbe unused=vfio-pci *Active*
0000:03:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=enp3s0f1 drv=ixgbe unused=vfio-pci
0000:81:00.0 '82599 10 Gigabit TN Network Connection' if=ens787f0 drv=ixgbe unused=vfio-pci
0000:81:00.1 '82599 10 Gigabit TN Network Connection' if=ens787f1 drv=ixgbe unused=vfio-pci
Other network devices
=====================
<none>
Create the /etc/trex_cfg.yaml configuration file. In this configuration file, the ports should match the interfaces available on the target system, which is net2s22c05 in our example, and the IP addresses correspond to Figure 2. For more information on the configuration file, please refer to the TRex Manual.
NET2S22C05$ cat /etc/trex_cfg.yaml
### Config file generated by dpdk_setup_ports.py ###
- port_limit: 2
version: 2
interfaces: ['87:00.0', '87:00.1']
port_bandwidth_gb: 40
port_info:
- ip: 10.10.2.2
default_gw: 10.10.2.1
- ip: 10.10.1.2
default_gw: 10.10.1.1
platform:
master_thread_id: 0
latency_thread_id: 1
dual_if:
- socket: 1
threads: [22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43]
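Rather than writing /etc/trex_cfg.yaml by hand, you can generate a starting point with TRex's own port setup script and then adjust the addresses to match Figure 2. A sketch, assuming the interactive mode (-i) documented in the TRex Manual:
NET2S22C05$ cd v2.37
NET2S22C05$ sudo ./dpdk_setup_ports.py -i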
Stop the previous VPP session and start it again in order to add routes for the new IP address ranges 16.0.0.0/8 and 48.0.0.0/8, according to Figure 2. Those IP addresses are needed because TRex generates packets that use them. Refer to the TRex Manual for details on these traffic templates.
csp2s22c03$ sudo service vpp stop
csp2s22c03$ sudo service vpp start
csp2s22c03$ sudo vppctl
_______ _ _ _____ ___
__/ __/ _ \ (_)__ | | / / _ \/ _ \
_/ _// // / / / _ \ | |/ / ___/ ___/
/_/ /____(_)_/\___/ |___/_/ /_/
vpp# sho int
Name Idx State Counter Count
FortyGigabitEthernet82/0/0 1 down
FortyGigabitEthernet82/0/1 2 down
local0 0 down
vpp#
vpp# set interface ip address FortyGigabitEthernet82/0/0 10.10.1.1/24
vpp# set interface ip address FortyGigabitEthernet82/0/1 10.10.2.1/24
vpp# set interface state FortyGigabitEthernet82/0/0 up
vpp# set interface state FortyGigabitEthernet82/0/1 up
vpp# ip route add 16.0.0.0/8 via 10.10.1.2
vpp# ip route add 48.0.0.0/8 via 10.10.2.2
vpp# clear run
Now, you can generate a simple traffic flow from net2s22c05 using the traffic configuration file cap2/dns.yaml:
NET2S22C05$ sudo ./t-rex-64 -f cap2/dns.yaml -d 1 -l 1000
summary stats
--------------
Total-pkt-drop : 0 pkts
Total-tx-bytes : 166886 bytes
Total-tx-sw-bytes : 166716 bytes
Total-rx-bytes : 166886 byte
Total-tx-pkt : 2528 pkts
Total-rx-pkt : 2528 pkts
Total-sw-tx-pkt : 2526 pkts
Total-sw-err : 0 pkts
Total ARP sent : 4 pkts
Total ARP received : 2 pkts
maximum-latency : 35 usec
average-latency : 8 usec
latency-any-error : OK
On csp2s22c03, the VPP CLI command show run displays the graph runtime statistics.
Example 4: Using VPP with TRex Mixed Traffic Templates
In this example, more complicated traffic with a delay profile is generated on net2s22c05 using the traffic configuration file avl/sfr_delay_10_1g.yaml:
NET2S22C05$ sudo ./t-rex-64 -f avl/sfr_delay_10_1g.yaml -c 2 -m 20 -d 100 -l 1000
summary stats
--------------
Total-pkt-drop : 43309 pkts
Total-tx-bytes : 251062132504 bytes
Total-tx-sw-bytes : 21426636 bytes
Total-rx-bytes : 251040139922 byte
Total-tx-pkt : 430598064 pkts
Total-rx-pkt : 430554755 pkts
Total-sw-tx-pkt : 324646 pkts
Total-sw-err : 0 pkts
Total ARP sent : 5 pkts
Total ARP received : 4 pkts
maximum-latency : 1278 usec
average-latency : 9 usec
latency-any-error : ERROR
On csp2s22c03, use the VPP CLI command show run to display the graph runtime statistics. Observe that the average vectors per node are 10.69 and 14.47.
Summary
This tutorial showed how to download, compile, and install the VPP binary on an Intel® Architecture platform. Examples of the /etc/sysctl.d/80-vpp.conf and /etc/vpp/startup.conf configuration files were provided to get the user up and running with VPP. The tutorial also illustrated how to detect and bind the network interfaces to a DPDK-compatible driver. You can use the VPP CLI to assign IP addresses to these interfaces and bring them up. Finally, four examples using iperf3 and TRex were included to show how VPP processes packets in batches.
About the Author
Loc Q Nguyen received an MBA from University of Dallas, a master’s degree in Electrical Engineering from McGill University, and a bachelor's degree in Electrical Engineering from École Polytechnique de Montréal. He is currently a software engineer with Intel Corporation's Software and Services Group. His areas of interest include computer networking, parallel computing, and computer graphics.