Introduction
The intent of this tutorial is to show how to create a playground-like environment for the prototyping and evaluation of potential edge services. It serves as an introduction to the chosen technologies and contains instructions to create a test environment on various infrastructures, including the Intel Atom® C3000 processor series kits for use as the Network Functions Virtualization Infrastructure (NFVI). To accelerate packet processing, one physical network interface controller (NIC) will be passed through to the containers using the Data Plane Development Kit (DPDK) and a plugin for Vector Packet Processing (VPP). Alternatively, the same setup can be achieved on a virtual machine created using the instructions in the tutorial VNF-in-a-Box: Set Up a Playground for Edge Services on a Virtual Machine.
Playground Components
This playground includes the following tools and frameworks:
Kata Containers
Kata Containers is a Docker* runtime alternative that offers greater workload isolation and security. The open source project is designed to bring together the advantages of virtual machines and containers: a standard implementation of lightweight virtual machines (VMs) that act and perform like containers while providing the workload isolation and security advantages of VMs. Because the Kata runtime is compatible with the Open Container Initiative* (OCI) specs, Kata Containers can run side by side with Docker (runc*) containers, even on the same host, and work seamlessly with the Kubernetes* Container Runtime Interface (CRI). Kata Containers enjoys industry support from some of the world's largest cloud service providers, operating system vendors, and telecom equipment manufacturers. The code is hosted on GitHub* under the Apache* License Version 2 and the project is managed by the OpenStack* Foundation.
Data Plane Development Kit (DPDK)
Data Plane Development Kit (DPDK) is a set of libraries that accelerate packet processing workloads running on a wide variety of CPU architectures. In this playground it serves as an alternative to Docker's default networking path.
Open Baton
Open Baton is a network functions virtualization (NFV) management and orchestration (MANO) framework, driven by Fraunhofer FOKUS and TU Berlin. It provides full automation of service deployment and lifecycle management and is the result of an agile design process for building a framework capable of orchestrating virtualized network functions (VNF) services across heterogeneous infrastructures.
Vector Packet Processing (VPP)
VPP is the open source version of the vector packet processing technology from Cisco*, a high-performance packet-processing stack that can run on commodity CPUs.
Tutorial Goal
The goal in the following sections is to set up a test environment that uses Docker to deploy Kata Containers. The containers will be connected to one of the physical NICs of the host, providing a boost in packet throughput. This is achieved by configuring the host to allocate hardware resources to VPP (using DPDK), including RAM (in the form of hugepages), CPUs (by dedicating cores to DPDK), and network resources (using DPDK NIC drivers). The original operating system will no longer manage these resources and cannot interfere with subsequent operations. Finally, this tutorial introduces Open Baton as an NFV MANO framework with a modified/enhanced Docker Virtual Network Function Manager (VNFM). We'll use it to deploy DPDK-enabled virtual network functions (VNFs) on the Intel Atom C3000 processor series kits. This tutorial is partially based on the following guide:
Kata Container Developer Guide
Prerequisites
Hardware
We used the following hardware for this tutorial:
- Intel Atom® C3000 processor series kit
- Workstation with Ubuntu* 18
- Switch + Ethernet cables
- USB Stick (to install CentOS* on the Intel Atom C3000 processor series kit)
The specifications of the Intel Atom C3000 processor series kit we used for this tutorial are as follows:
Intel Atom® C3000 processor series | Specs |
---|---|
CPU | 4 cores @ 2.2 GHz |
Memory | 8 GB DDR4-1866 |
Ethernet | 4x 1 GbE ports (RJ-45) |
I/O connectors | 1x USB 3.0 port, 1x micro-USB console port |
Software
The configuration has been tested using the following software:
Software | Version |
---|---|
OS | CentOS* 7 |
Kernel | 3.10.0-862.14.4.el7.x86_64 |
Docker* | 18.09.0-ce (package) |
DPDK-usertools | 17.11.2 (source) |
VPP | 18.10-release (package) |
DPDK-VPP | 18.08 (installed with VPP) |
Go* | 1.9.4 (package) |
Kata Containers | 1.4.0 (source/package) |
QEMU-lite | 2.11 (installed with Kata) |
VPP CNM plugin | latest (10 Mar 2018) |
Configure the Playground – Intel Atom® C3000 Processor Series Kit
This chapter deals with the configuration and installation on real hardware using the recently released Intel Atom C3000 processor series kit. It works perfectly for this playground, providing a small box with enough resources and the required features.
Depending on how you received the hardware, you may have to set up and configure the box via the serial console. This procedure is described in the user manual of the Intel Atom C3000 processor series kit. To follow this tutorial, you'll need an open terminal session with root rights connected to the box (e.g., from a laptop running Ubuntu* 16.04, connected via USB).
CPU Flags and HugePages
The first step is to use the GRand Unified Bootloader (GRUB) to set a few CPU flags and allocate hugepages. You must decide whether to allocate the hugepages at boot or afterward, and whether to use 2 MB (hugepage) or 1 GB (gigapage) pages. In this tutorial, we will allocate 1600 * 2 MB pages after booting, as we have a total of about 8 GB RAM available on the machine. These hugepages will be used by VPP as well as by the Kata Containers. The purpose of hugepages is to increase overall performance: the translation lookaside buffer (TLB) between the CPU and the CPU cache caches virtual-to-physical address mappings, and with larger pages each TLB entry covers more memory.
Assuming the TLB contains 256 entries and each entry maps a 4,096-byte page, it can cover up to 1 MB of memory without hugepages. With 2 MB hugepages, each TLB entry points to 2 MB, which increases the memory mapping capability to 512 MB.
In /etc/default/grub, locate the line starting with GRUB_CMDLINE_LINUX_DEFAULT, which we'll modify to enable the input-output memory management unit (IOMMU) and Intel® Virtualization Technology (Intel® VT) so that hardware resources can be passed down to virtual machines.
You may also consider limiting the CPUs available to the operating system and assigning dedicated CPUs to the VPP-DPDK environment.
The result may look like the following, depending on your preferences:
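A minimal sketch, assuming IOMMU passthrough and cores 1-3 dedicated to the VPP-DPDK environment (the isolcpus list is an assumption; adjust it to your CPU layout):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt isolcpus=1,2,3"
```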
Now let’s allocate a few hugepages. We will check the default settings and currently available hugepages and afterward apply a new configuration.
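The sysctl approach below is one way to do this at runtime:

```
# Check the default settings and currently available hugepages
grep Huge /proc/meminfo

# Allocate 1600 x 2 MB hugepages at runtime
sysctl -w vm.nr_hugepages=1600
```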
Depending on your configuration, the hugepage setup might look like the following:
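Illustrative output after a successful allocation (exact numbers will vary):

```
$ grep Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:    1600
HugePages_Free:     1600
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
```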
Because we have modified the GRUB configuration we’ll rebuild the GRUB config file.
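On CentOS 7 this is done with grub2-mkconfig (the target path below is for BIOS systems; on UEFI systems the target is /boot/efi/EFI/centos/grub.cfg):

```
grub2-mkconfig -o /boot/grub2/grub.cfg
```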
Network Configuration
This assumes you are connected to the Intel Atom C3000 processor series kit via the serial port. As the OS is CentOS* 7, we will set up the network interfaces with static IPv4 addresses as follows.
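A sketch of such a configuration file, assuming the interface name enp3s0f0 and the address 192.168.0.2 that is used later for the dashboard (adjust both to your setup):

```
# /etc/sysconfig/network-scripts/ifcfg-enp3s0f0
TYPE=Ethernet
BOOTPROTO=static
DEVICE=enp3s0f0
ONBOOT=yes
IPADDR=192.168.0.2
PREFIX=24
```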
Consider adding your secure socket shell (SSH) public-key to the authorized users file to avoid the need to always type your credentials when connecting to the box. Since we have changed the GRUB config and set up static IPv4 addresses we should perform a reboot.
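For example (the address is an assumption):

```
# From your workstation: install your public key on the box
ssh-copy-id root@192.168.0.2

# On the box: reboot to apply the GRUB and network changes
reboot
```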
Set Up Internet Connectivity
Ensure that your default route is set correctly and that you added a nameserver in your resolv.conf.
Downloading and installing packages from the internet requires connecting the machine to a network with internet access, or routing its traffic through another computer.
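A minimal sketch, assuming 192.168.0.1 as the upstream gateway and a public nameserver:

```
ip route add default via 192.168.0.1
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
```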
This configuration is not persistent, and you will lose connectivity after a reboot.
Install Go*
To build the VPP Docker plugin and the Kata Containers components we need to install Go.
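One way to do this on CentOS 7 (whether the packaged version matches the 1.9.4 listed above depends on your repositories):

```
yum install -y golang
go version

# Set up the Go workspace used by the later build steps
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
```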
Install Docker*
Because the Kata runtime is a replacement for the default Docker runtime (runc) we will have to install Docker as well.
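Installation via the official Docker CE repository:

```
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable --now docker
```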
Install Kata Containers
Decide how to install the Kata Containers components. You can use the prebuilt packages from their repositories or check out the source code and build them yourself. It is also possible to run a mixed setup.
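A sketch of the packaged install, following the repository layout the Kata project used at the time of writing (check the Kata Container Developer Guide if the URL has moved, or to build the components from source instead):

```
source /etc/os-release
yum-config-manager --add-repo \
  "http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/master/CentOS_${VERSION_ID}/home:katacontainers:releases:$(arch):master.repo"
yum install -y kata-runtime kata-proxy kata-shim
```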
Download and Install DPDK
We will download, build, and install the DPDK source code manually, which also gives us the usertools scripts and lets us build the igb_uio DPDK NIC driver.
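Download and unpack the DPDK version listed in the software table:

```
cd /usr/src
curl -O https://fast.dpdk.org/rel/dpdk-17.11.2.tar.xz
tar xf dpdk-17.11.2.tar.xz
cd dpdk-stable-17.11.2
```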
Open config/common_base to set your preferred build options.
Also, be sure to disable the KNI-related build options in config/common_linuxapp.
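The relevant options look like this (option names are from DPDK 17.11; verify them in your checkout):

```
# config/common_linuxapp – disable the KNI kernel module and library
CONFIG_RTE_KNI_KMOD=n
CONFIG_RTE_LIBRTE_KNI=n
```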
Now we are ready to build and install DPDK.
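Build for the native x86_64 Linux target:

```
make install T=x86_64-native-linuxapp-gcc DESTDIR=/usr/local -j 4
```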
Afterwards, we will add the usertools to our PATH.
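For example, assuming the source location used above:

```
echo 'export PATH=$PATH:/usr/src/dpdk-stable-17.11.2/usertools' >> ~/.bash_profile
source ~/.bash_profile
```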
Load Necessary Drivers
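VPP-DPDK needs the uio framework plus the igb_uio driver we just built (the path assumes the build directory from the previous step):

```
modprobe uio
insmod /usr/src/dpdk-stable-17.11.2/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
```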
Bind the NIC Supporting DPDK to the DPDK Driver
We will choose NIC number 2 (0000:03:00.1) for DPDK support. Before we can bind it, we have to check whether the kernel has already brought up the NIC interface.
If the NIC you want to use for VPP-DPDK is shown as active, you will have to take the interface down as the kernel should not control this interface. The procedure may look like the following:
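A sketch, assuming the kernel named the interface eno2 (check dpdk-devbind.py --status for the actual name):

```
# Show which driver controls each NIC
dpdk-devbind.py --status

# Take the interface down and hand it over to the DPDK driver
ip link set eno2 down
dpdk-devbind.py --bind=igb_uio 0000:03:00.1
```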
Install VPP
For this tutorial, we will go with version 18.10 of VPP, as listed in the software table. It installs its own DPDK version, which you can check via the command vppctl show dpdk version. To add the repository to CentOS, create /etc/yum.repos.d/fdio-release.repo with the following content (this matches the fd.io packaging documentation at the time of writing):
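```
[fdio-release]
name=fd.io release branch latest merge
baseurl=https://nexus.fd.io/content/repositories/fd.io.centos7/
enabled=1
gpgcheck=0
```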
Now install VPP.
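Install the packages and start the service:

```
yum install -y vpp vpp-plugins
systemctl enable --now vpp
```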
Bring up the network interface now handled by VPP:
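```
# VPP derives the interface name from the PCI address 0000:03:00.1;
# verify the actual name with "vppctl show interface"
vppctl show interface
vppctl set interface state GigabitEthernet3/0/1 up
```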
Install Kata VPP Docker* Plugin
The Kata VPP Docker plugin is used to create the VPP virtual host (vhost) user interface, which is attached to the Kata Containers.
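A sketch of fetching and starting the plugin, assuming the Clear Containers VPP CNM plugin repository that Kata relied on at the time (see the Kata Container Developer Guide for the current location):

```
go get -d github.com/clearcontainers/vpp
cd $GOPATH/src/github.com/clearcontainers/vpp
go build

# Run the plugin so Docker can create vhost-user interfaces via VPP
./vpp &
```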
Docker* Runtime Configuration
We will create a file for configuring the default Docker runtime.
The file contains the available runtimes. To use Docker via a remote device (which does not have access to the Docker socket) you need to redefine the sockets. This is already done in the example below. Using this example, the default runtime will be the Kata runtime.
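A sketch of /etc/docker/daemon.json along these lines (if your docker.service unit already passes a -H flag to dockerd, remove it there to avoid a conflict):

```
{
    "default-runtime": "kata-runtime",
    "runtimes": {
        "kata-runtime": {
            "path": "/usr/bin/kata-runtime"
        }
    },
    "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```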
To contact the Docker API from a remote machine, allow TCP connections through the firewall.
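With firewalld on CentOS 7, the port used above can be opened as follows:

```
firewall-cmd --permanent --add-port=2375/tcp
firewall-cmd --reload
```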
After modifying the Docker runtime, restart Docker.
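Reload the unit files and restart the daemon:

```
systemctl daemon-reload
systemctl restart docker
```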
Disable Security-Enhanced Linux (SELinux)
If we do not disable SELinux for this setup, VPP will have problems creating the sockets. Edit /etc/sysconfig/selinux as follows:
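Set the mode in the file; the change takes full effect after the next reboot (running setenforce 0 switches to permissive mode immediately in the meantime):

```
# /etc/sysconfig/selinux
SELINUX=disabled
```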
Optional - Create Docker* Networks
At this stage, we can decide whether we want to create the necessary Docker networks manually or let Open Baton create them automatically.
If you choose to let Open Baton create the networks, be aware that after each deployment the existing network will block the creation of a new network with the same CIDR. You will have to delete the network first in order to start another deployment.
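A sketch of creating both networks manually; the subnets are assumptions, and the vpp driver name is the one registered by the CNM plugin:

```
docker network create -d=vpp --ipam-driver=vpp --subnet=192.168.70.0/24 vpp_net
docker network create --subnet=192.168.71.0/24 normal_net
```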
Set Up Open Baton with Docker-Compose
In this setup we will run Open Baton directly on the hardware under consideration. We can easily get a working environment up and running by using a Docker-compose file. We will use the default Docker runtime (runc) for this setup as we want to reserve the remaining resources for the Kata Containers.
Install Docker-Compose
We will stick with compose file version 2.x, as version 3.x removes the ability to set memory limits and runtime values for standard (non-swarm) deployments.
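One way to install docker-compose (the release version below is an assumption; pick a current one):

```
curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```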
Use this yaml file to deploy Open Baton:
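The following is an abridged sketch of such a file; image tags, credentials, and environment variables are assumptions, so consult the Open Baton documentation for the complete, current version:

```
version: '2.3'
services:
  rabbitmq:
    image: rabbitmq:3-management
    runtime: runc
    environment:
      RABBITMQ_DEFAULT_PASS: openbaton
  nfvo:
    image: openbaton/nfvo:latest
    runtime: runc
    mem_limit: 2g
    depends_on:
      - rabbitmq
    environment:
      NFVO_RABBIT_BROKERIP: 192.168.0.2
    ports:
      - "8080:8080"
  vnfm-docker:
    image: openbaton/vnfm-docker-go:latest
    runtime: runc
    depends_on:
      - nfvo
    environment:
      BROKER_IP: 192.168.0.2
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  driver-docker:
    image: openbaton/driver-docker-go:latest
    runtime: runc
    depends_on:
      - nfvo
    environment:
      BROKER_IP: 192.168.0.2
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```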
If you saved the content in a file (e.g., OpenBaton.yaml), start it up by using the following command:
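```
docker-compose -f OpenBaton.yaml up -d
```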
If you are new to Open Baton, the GitHub documentation is a good starting point; however, the basic workflow is covered in this document. Once the containers are deployed, it will take a few minutes for Open Baton to configure itself. When it finishes, you will be able to access the dashboard in your browser at 192.168.0.2:8080. The default credentials are admin with the password openbaton. We will use the dashboard from a remote machine.
Register a Point of Presence
In order to deploy our VNFs on our infrastructure, we need to tell Open Baton where and how to contact it. This is done by registering a Point of Presence (PoP), which in this case is our newly created Kata environment. Note that we use the local Docker socket; alternatively, you can insert the URL of your environment if you enabled the remote API. You can either copy and paste the JSON definition of the PoP (see below) or enter the values manually in the form.
A PoP definition using the local Docker socket may look like the following (field values other than the name silicombox and the socket are assumptions; adjust them to your Open Baton version):
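```
{
  "name": "silicombox",
  "authUrl": "unix:///var/run/docker.sock",
  "tenant": "default",
  "type": "docker"
}
```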
Prepare the Docker* Image
Use any Docker image for your VNFs. You can create an original image or use a preexisting one. For this tutorial we will create our own Docker image using a Dockerfile that uses alpine as a base image and installs and starts an Iperf server (the alpine tag below is an assumption). Execute the following commands directly in the CLI on your box:
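```
mkdir iperf-server && cd iperf-server

cat > Dockerfile <<'EOF'
FROM alpine:3.8
RUN apk add --no-cache iperf
# Run an Iperf server in UDP mode on the default port 5001
ENTRYPOINT ["iperf", "-s", "-u"]
EOF

docker build -t iperf-server .
```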
Onboard VNFDs and NSD
Next, we upload our VNFDs. As this tutorial involves a very basic use case, it will work with what we have available on the Docker images. This means there are no lifecycle scripts to be executed; simply upload a basic Network Services Descriptor (NSD). To do this we navigate to the NS Descriptors tab contained in the Catalogue drop-down menu in the left bar.
You may use this JSON file representing our NSD:
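An illustrative skeleton of such an NSD; the vendor, flavour, and image names are assumptions modeled on the Open Baton descriptor format, so adapt them to your setup:

```
{
  "name": "Iperf-Servers",
  "vendor": "fokus",
  "version": "1.0",
  "vnfd": [
    {
      "name": "Iperf-Server-DPDK",
      "vendor": "fokus",
      "version": "1.0",
      "type": "server",
      "endpoint": "docker",
      "deployment_flavour": [{ "flavour_key": "m1.small" }],
      "vdu": [
        {
          "vm_image": ["iperf-server"],
          "scale_in_out": 1,
          "vnfc": [
            { "connection_point": [{ "virtual_link_reference": "vpp_net" }] }
          ]
        }
      ],
      "virtual_link": [{ "name": "vpp_net" }]
    },
    {
      "name": "Iperf-Server-Normal",
      "vendor": "fokus",
      "version": "1.0",
      "type": "server",
      "endpoint": "docker",
      "deployment_flavour": [{ "flavour_key": "m1.small" }],
      "vdu": [
        {
          "vm_image": ["iperf-server"],
          "scale_in_out": 1,
          "vnfc": [
            { "connection_point": [{ "virtual_link_reference": "normal_net" }] }
          ]
        }
      ],
      "virtual_link": [{ "name": "normal_net" }]
    }
  ]
}
```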
Now you have uploaded the tutorial NSD, which consists of two Iperf Servers that will be deployed in two separate networks. One will be deployed using the VPP DPDK network (vpp_net), the other will use the default Docker bridge (normal_net).
Deploy the Network Service
Now that we have saved our NSD we can deploy it. Again, we have to navigate to the NS Descriptors tab and select our just onboarded NSD Iperf-Servers.
We choose to deploy it on our infrastructure, which we have named silicombox, thus we add this PoP to both of our VNFDs (Iperf-Server-DPDK and Iperf-Server-Normal). Afterwards, we can launch our NSD.
Now that the NSR is deployed, you can navigate to the NS Records tab, which you find inside the Orchestrate NS drop-down menu. Here you can see all your deployed Network Services, so-called Network Service Records (NSRs), and the execution of the different life cycles of your NSR and VNFRs. After a short time, we should see our NSR Iperf-Servers in ACTIVE state.
Using the Docker CLI, you can see the containers of your NSR running alongside the Open Baton containers.
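For example:

```
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
```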
Via the following command you can check the logs of the running service to see further details (e.g., throughput, packet loss):
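```
# <container-name> is a placeholder; look up the actual name with docker ps
docker logs -f <container-name>
```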
At this point the setup is complete, but we cannot yet directly reach the deployed Iperf Servers via the remote machine.
How to Use the Network Service
Since we have deployed two Iperf Servers, we can now use an Iperf Client to test the networks. Depending on your setup you may use another machine (such as the laptop) or a VM connected to the networks to do so. Since we want to reach machines behind a network address translation (NAT) network, we need to add a route on the client machine in order to reach the Kata Containers.
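A sketch, assuming the vpp_net subnet from the earlier network-creation example and the box's address as next hop:

```
# On the Iperf client machine
ip route add 192.168.70.0/24 via 192.168.0.2
```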
We must also add the DPDK interface to the bridge of the Kata Container interface in VPP. Otherwise, the Iperf Server running in the DPDK network will not be able to reach outside of the machine.
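The bridge domain ID below is an assumption; list the existing domains first:

```
vppctl show bridge-domain
vppctl set interface l2 bridge GigabitEthernet3/0/1 1
```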
Now both VNFs should be reachable via a remote machine, which must be connected to the bridge and the DPDK network. Using the Iperf Client we can start testing both networks.
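For example, a UDP test with small 64-byte packets against the DPDK-attached server (the server address is an assumption):

```
iperf -c 192.168.70.2 -u -b 900M -l 64
```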
If you experience connectivity issues using Iperf in user datagram protocol (UDP) mode, check the iptables rules on the machine hosting the Kata environment.
We ran our experiments to compare both network types using two Intel Atom C3000 processor series kits connected via gigabit switches. The experiments showed that the VPP DPDK network exhibits increased packet throughput in handling small UDP packets.
Summary
In this tutorial, we've set up a playground around Open Baton NFV MANO, Kata Containers, and DPDK. We can deploy VNFs onto our infrastructure, which can make use of the DPDK enabled boost in packet throughput. By running the Iperf use case, using small UDP packets, we can verify the advantage of the VPP-DPDK network compared to the default network.
As we are using a very basic configuration of VPP, DPDK, and Kata Containers, the next step would be to tune the setup to further increase performance. This is a good starting point to learn more about DPDK, since we have a running setup that we can benchmark directly to evaluate any change in configuration.
To go deeper into the topic of NFV, we can write our own network functions using Open Baton and deploy them on our Kata Containers DPDK-enabled infrastructure.