Enterprise Edge Networking Developer Kits
Prevalidated Software
Get the most out of your hardware performance with the OpenVINO™ toolkit, the Data Plane Development Kit (DPDK*), and Intel crypto libraries (IPsec_mb and IPP_crypto) on Ubuntu* Desktop LTS.
Pretrained Models for Acceleration
Choose from a variety of optimized detection and recognition models for developing deep learning applications.
Training Extensions for Deep Learning
Modify, customize, train, and extend computer vision models for deep learning and inference optimization.
Overview
Build solutions that can handle demanding use conditions in retail, industrial, and healthcare environments, and offer network security at the edge on developer kits that range from 14th generation Intel® Core™ processors to Intel Atom® x7000C and x7000RE processors. These developer kits support pretested and prevalidated network security and software-defined wide area network (SD-WAN) software packages to deliver compute performance in parallel with accelerated AI inferencing and computer vision for your solution.
- Delivers outstanding multithreaded performance with the latest generation of performance-hybrid architecture that combines performance-cores with efficient-cores and increased cache sizes.
- Provides accelerated AI and deep learning capabilities without additional hardware, or offloads multiple workloads to enhanced Intel® UHD Graphics driven by Xe architecture.
- Supports more I/O connectivity and throughput with up to 16 PCIe* 5.0 lanes, plus an Intel® Wi-Fi 6E module.
Who Needs This Product
System integrators, independent software vendors (ISVs), and enterprise developers who need to develop solutions for network security and SD-WAN:
- Optimized for high throughput and low latency, with built-in acceleration for packet and signal processing, load balancing, and AI.
- Provides higher memory bandwidth and I/O capacity to meet next-generation performance requirements for NetSec and SD-WAN deployments.
- Increases network capacity to handle more traffic and delivers better user experiences with accelerated encryption.
To determine the features and capabilities implemented by Intel's ecosystem collaborators, see Hardware.
Reference Implementations
Prebuilt and validated reference implementations are available for developers to test and deploy network security and SD-WAN.
Your experience may vary depending on the configuration of your developer kit. For details, see the target system requirements in the reference implementation.
Hardware
Prevalidated developer kits.
Lanner NCA-4240 1U 19” Rackmount Appliance
- 12th, 13th, and 14th generation Intel Core processors with H610E/Q670E chipset
- 1x GbE RJ45, 8x 2.5GbE RJ45, 1x NIC module
- 3x pairs of 3rd generation SE LAN bypass
- 2x 288-pin DIMM DDR5 5600 MHz (maximum 64 GB)
- 2x USB 3.0 ports, 2x 2.5” HDD/SSD
Senao Networks SA9820 Series
- Powered by Intel Atom x7000C processor for efficient computing
- Offers versatile connectivity: dual GbE, dual 2.5GbE, and dual SFP ports
- Supports a Power over Ethernet (PoE)-enabled Ethernet interface
- Supports Wi-Fi 6 and 5G with MIMO for industrial IoT, smart city infrastructure, and telecom networks
Silicom Ibiza Commercial 1U Edge Gateway Router
- Powered by Intel Atom x7000C/E/RE processor
- Offers versatile connectivity: 4x 2.5GbE and 1x SFP port
- Supports a PoE-enabled Ethernet interface
- Supports dual band 802.11ax Wi-Fi 6 and 4G/5G with dual SIM
Software
Intel® Distribution of OpenVINO™ Toolkit
Enable convolutional neural network-based deep learning inference on the edge. Support heterogeneous execution across various accelerators—CPUs, GPUs, Intel® Movidius™ Neural Compute Stick (NCS), and Intel® Vision Accelerator Design products—using a common API.
Speed up time to market via a library of functions and preoptimized kernels.
Overview | Training | Documentation | Get Started | Forum
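As a rough illustration of the common API described above, the minimal Python sketch below loads a model and runs inference while letting the runtime choose an available device; the file name "model.xml" and the 1x3x224x224 input shape are placeholders for your own exported model.

    # Minimal OpenVINO inference sketch (Python API 2.0); "model.xml" and the
    # input shape are placeholders for your own IR model.
    import numpy as np
    import openvino as ov

    core = ov.Core()
    print(core.available_devices)                 # e.g. ['CPU', 'GPU']

    model = core.read_model("model.xml")          # IR model for your network
    compiled = core.compile_model(model, "AUTO")  # runtime selects CPU/GPU

    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = compiled([dummy_input])[compiled.output(0)]
    print(result.shape)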
Intel® Extension for TensorFlow*
Intel® Extension for TensorFlow* is a heterogeneous, high-performance, deep-learning extension plug-in. This extension:
- Is based on the TensorFlow Pluggable Device interface to bring Intel CPUs, GPUs, and other devices into the TensorFlow open source community for AI workload acceleration
- Allows users to flexibly plug an XPU into TensorFlow, exposing the computing power of Intel hardware
- Upstreams several optimizations into open source TensorFlow
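As a rough sketch of how the plug-in model works, assuming the extension has been installed (for example, with pip install intel-extension-for-tensorflow[xpu]), Intel devices register through the PluggableDevice interface as "XPU" devices and ordinary TensorFlow code can target them:

    # Sketch only: assumes Intel Extension for TensorFlow is installed so the
    # PluggableDevice plug-in registers Intel hardware as "XPU" devices.
    import tensorflow as tf

    print(tf.config.list_physical_devices())  # CPU plus any registered XPU devices

    # Standard TensorFlow ops run on the Intel device when one is present.
    with tf.device("/XPU:0"):
        a = tf.random.uniform((1024, 1024))
        b = tf.random.uniform((1024, 1024))
        c = tf.matmul(a, b)
    print(c.device)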