Part 1: Build & Prepare to Launch Open vSwitch*

Learn how to build a virtual machine to host two containers that communicate directly with each other.

Hi, I'm Clayne with Intel Corporation, and I work with software-defined networking and network function virtualization (SDN/NFV) in the data center. Today, I'm going to build a virtual machine that hosts two containers that communicate directly with each other using Open vSwitch* and the Data Plane Development Kit (DPDK).

"Why?" you ask. Because containers are more efficient than [virtual machines] VMs and that means money. There are a few drawbacks, however, because containers still use the Linux* kernel networking stack and any exceptions require elevated privileges.

Why use DPDK with Open vSwitch? Because packet throughput is much higher, as you can see in this chart, and CPU interrupts are drastically reduced because small packets are batched and processed in user space.

This is especially meaningful in telecommunications networks. This is the first video in a series on fast networking in the Linux data center using containers and DPDK. In this video, we'll build Open vSwitch and DPDK and allocate system resources.

There are three additional short videos that cover setting Open vSwitch parameters and launching it, building two containers (one for testpmd and the other for pktgen), and then starting testpmd and pktgen in those containers and watching the packets flow. You can type along using the commands we'll show you, or you can download the scripts and run them directly. Any system that runs this lab has some hardware requirements.

First of all, you need at least eight gigabytes of RAM. If you have eight CPUs, that's great, too. But the CPUs can be virtual.

To get started, install Vagrant and a virtual machine provider like VirtualBox*, and then download the Vagrantfile shown on your screen at the bit.ly URL. When you execute vagrant up, the Vagrantfile will download all the required packages and scripts. Once Vagrant has provisioned your virtual machine, enter the VM by executing vagrant ssh.
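For reference, the setup sequence looks roughly like this; it's a minimal sketch, and the placeholder stands in for the bit.ly URL shown on screen:

    mkdir ovs-dpdk-lab && cd ovs-dpdk-lab
    # Download the Vagrantfile (substitute the bit.ly URL from the video).
    curl -L -o Vagrantfile <bit.ly-URL-shown-on-screen>
    vagrant up     # provisions the VM and downloads packages and scripts
    vagrant ssh    # enter the provisioned VM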

While the Vagrantfile will do most of the setup, it's critical in this lab that every time you execute vagrant ssh in a new terminal window, you also run source step00-setenv in the directory shown here. Let's get started.
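As a sketch, the per-terminal routine looks like this; the lab directory is the one shown in the video, so the name here is only a placeholder:

    vagrant ssh
    cd <lab-directory-shown-in-video>   # assumed location of the lab scripts
    source step00-setenv                # exports the environment variables used below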

Step one: Once you're in the Vagrant VM environment, build DPDK so that Open vSwitch can use it. First, cd into the directory containing the DPDK sources. Then run make config, specifying the target with T and the output directory with O=DPDK_BUILD. Then change into that DPDK_BUILD directory and run make -j8, which speeds things up a bit by using eight threads.
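Here's a sketch of those commands, assuming step00-setenv exported DPDK_DIR, DPDK_TARGET, and DPDK_BUILD (the variable names and the example target value follow common DPDK conventions and are not confirmed by the video):

    cd $DPDK_DIR                              # directory containing the DPDK sources
    make config T=$DPDK_TARGET O=$DPDK_BUILD  # e.g., T=x86_64-native-linuxapp-gcc
    cd $DPDK_BUILD
    make -j8                                  # build using eight threads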

Step two: Build Open vSwitch. Change into the directory containing the Open vSwitch sources, and then generate the configure script using the provided boot script. Then, build Open vSwitch with DPDK support for the current machine architecture by passing -march=native in CFLAGS to the configure script, and specifying the location of the DPDK binaries we just built. Then run make -j8.
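A sketch of the Open vSwitch build, again assuming $OVS_DIR and $DPDK_BUILD were exported by step00-setenv:

    cd $OVS_DIR                 # directory containing the Open vSwitch sources
    ./boot.sh                   # generate the configure script (autoconf/automake)
    CFLAGS="-march=native" ./configure --with-dpdk=$DPDK_BUILD
    make -j8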

Step three: Allocate resources to run Open vSwitch. Make two Open vSwitch directories: one at /usr/local/etc/openvswitch and the other at /usr/local/var/run/openvswitch. Next, allocate 2048 huge pages using the command shown, then mount the hugepage file system at /mnt/huge, and then make sure it worked. Finally, load the userspace I/O (uio) driver and the DPDK igb_uio driver that we built earlier into the kernel.
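The commands for this step are roughly as follows; the hugepage sysfs path and the igb_uio.ko location are the conventional ones for 2 MB pages and a legacy DPDK make build, so treat them as assumptions, and run as root:

    # Create the Open vSwitch configuration and runtime directories.
    mkdir -p /usr/local/etc/openvswitch
    mkdir -p /usr/local/var/run/openvswitch

    # Allocate 2048 huge pages and verify the allocation.
    echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    grep HugePages /proc/meminfo

    # Mount the hugepage file system and make sure it worked.
    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge
    mount | grep huge

    # Load the userspace I/O driver and the DPDK igb_uio driver we built.
    modprobe uio
    insmod $DPDK_BUILD/kmod/igb_uio.ko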

In this video, we built DPDK, then built Open vSwitch, and then allocated system resources. This is the first video in a series on fast networking in the Linux data center. In the next video, we will launch Open vSwitch using DPDK. Stay tuned.