Cloud-Native Hybrid-Multicloud Platform (NFV)
This Reference Implementation illustrates how to build a cloud-native hybrid-multicloud platform with network functions virtualization (NFV).
Hybrid-cloud-capable, cloud-native infrastructure is central to data center deployments, whether they run databases, AI, machine-learning, or telecommunications workloads. Cloud-native applications use a distributed cloud approach, with some workloads running in private clouds and others running in public clouds. This Reference Implementation is a family of workload-optimized infrastructure solutions based on 2nd Generation Intel® Xeon® Scalable processors and other Intel® technologies. The audience for this Reference Implementation includes network operators, communications service providers (CoSPs, also referred to as telco operators), cloud service providers, and enterprise infrastructure companies.
Infrastructure modernization, automation, and cloud-native containers are important aspects of business transformation. The portability and repeatability of containers can create cost and resource savings, coupled with faster time to market and rapid innovation. Containers have little overhead, which helps lower hardware, maintenance, and licensing costs. They can be implemented quickly, and components can be shared among containers.
Organizations need high performance from their workloads to remain competitive. Intel and Red Hat co-developed this Reference Implementation using Red Hat OpenShift Container Platform 4.5 and Red Hat OpenShift Data Foundation. This release introduces key features that enable enterprise IT and CoSPs to deploy performant, low-latency container-based workloads onto different footprints. Those footprints could be bare metal, virtual, private cloud, public cloud, or a combination of these, in either a centralized data center or at the edge.
Red Hat OpenShift Container Platform 4.5 addresses challenges that CoSPs face when they are deploying and scaling networking workloads, such as IPsec gateways, VPNs, 5G UPF, 5G EPC, CMTS, BNGs, and CDNs. This Reference Implementation can assist with improving network services performance, resource management, telemetry, and OSS/BSS, making it a preferred container platform for running demanding telecommunications applications.
For telecommunications use cases, deterministic network services performance is key to help ensure that services like 911 calls don't get dropped. This Reference Implementation includes Intel® processors and Intel® Ethernet Network Adapters with the Data Plane Development Kit (DPDK) to achieve high-performance networking. Other Intel technologies include Intel® Optane persistent memory (PMem); NVMe-based Intel® Optane SSDs and Intel® 3D NAND SSDs for storage; and Intel® QuickAssist Technology to accelerate public key exchange, bulk encryption/decryption, and compression/decompression.
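For illustration only, the following sketch shows how a DPDK-style workload might request pinned CPUs, hugepages, and an SR-IOV virtual function on OpenShift using the Python `kubernetes` client. The namespace, container image, SR-IOV network attachment name (`sriov-dpdk-net`), and device resource name (`openshift.io/intel_sriov_dpdk`) are placeholder assumptions, not values defined by this Reference Implementation; actual names depend on the cluster's SR-IOV and node configuration.

```python
# Illustrative sketch: a guaranteed-QoS Pod for a DPDK workload with hugepages
# and an SR-IOV VF. Resource and network names below are placeholders.
from kubernetes import client, config

def build_dpdk_pod() -> client.V1Pod:
    resources = client.V1ResourceRequirements(
        requests={
            "cpu": "4",                            # whole cores for CPU pinning
            "memory": "4Gi",
            "hugepages-1Gi": "4Gi",                # hugepages for DPDK memory pools
            "openshift.io/intel_sriov_dpdk": "1",  # one SR-IOV VF (placeholder name)
        },
        limits={
            "cpu": "4",
            "memory": "4Gi",
            "hugepages-1Gi": "4Gi",
            "openshift.io/intel_sriov_dpdk": "1",
        },
    )
    container = client.V1Container(
        name="dpdk-app",
        image="registry.example.com/dpdk-testpmd:latest",  # placeholder image
        resources=resources,
        volume_mounts=[client.V1VolumeMount(name="hugepages",
                                            mount_path="/dev/hugepages")],
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(
            name="dpdk-workload",
            # Multus annotation attaching the SR-IOV network (placeholder name)
            annotations={"k8s.v1.cni.cncf.io/networks": "sriov-dpdk-net"},
        ),
        spec=client.V1PodSpec(
            containers=[container],
            volumes=[client.V1Volume(
                name="hugepages",
                empty_dir=client.V1EmptyDirVolumeSource(medium="HugePages"),
            )],
        ),
    )

if __name__ == "__main__":
    config.load_kube_config()
    client.CoreV1Api().create_namespaced_pod(namespace="dpdk-demo",
                                             body=build_dpdk_pod())
```

Because requests and limits match and the CPU count is a whole number, the Pod receives the guaranteed QoS class, which allows exclusive CPU pinning on nodes configured with the static CPU manager policy.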
Advantages of this Reference Implementation include the following:
- Reduce the time and expense of evaluating hardware and software options.
- Simplify design choices and deploy quickly by bundling validated firmware, hardware, and software in a prescriptive, tested configuration that lowers risk and guesswork.
- Innovate on a verified configuration, accelerating time to deployment.
- Achieve deterministic performance for telecommunications, enterprise, and hybrid-multicloud workloads to meet SLAs.
This Reference Implementation has the following characteristics:
- Includes Intel® architecture-optimized AI libraries and tools for developers, along with validated, bundled containers for ease of DevOps deployment.
- Helps modernize data centers and take advantage of containers while lowering total costs.
- Available in Base, Plus, and Edge configurations, which are customizable and fully interoperable with existing infrastructure.
This Reference Implementation can help enterprises and CoSPs to quickly release new services with efficiency and scalability.