Intel® oneAPI Components
Analyzers and Debuggers
Analyzers
Intel® Advisor
Design code for efficient vectorization, threading, and offloading to accelerators.
Intel® VTune™ Profiler
Find and optimize performance bottlenecks across CPU, GPU, and FPGA systems.
Intel® SoC Watch
Analyze system power and thermal behavior on Intel® platforms with this command line tool.
Debuggers
Intel® Distribution for GDB*
Enable deep, system-wide debugging of C, C++, and Fortran code.
Intel® System Debugger
Speed up system bring-up and validation of system hardware and software using in-depth debug and trace of BIOS/UEFI, firmware, device drivers, operating system kernels, and more.
Code Migration
Intel® DPC++ Compatibility Tool
Migrate legacy CUDA* code to multiplatform DPC++ code with this assistant.
Compilers
Intel® Fortran Compiler
Use this standards-based Fortran compiler with OpenMP* support for CPU and GPU offload.
Intel® Implicit SPMD Program Compiler (Intel® ISPC)
Compile using a variant of the C programming language with extensions for SPMD programming for the fastest rendering performance.
Intel® oneAPI DPC++/C++ Compiler
Compile and optimize code for CPU, GPU, and FPGA target architectures.
FPGA Support Package for the Intel® oneAPI DPC++/C++ Compiler
Accelerate your RTL development with SYCL* high-level synthesis (HLS). This add-on requires installation of the Intel® oneAPI DPC++/C++ Compiler.
High-Performance Python
Intel® Distribution for Python*
Achieve fast math-intensive workload performance without code changes for data science and machine learning problems.
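To illustrate the no-code-change claim, the short sketch below is ordinary NumPy code (arbitrary array sizes); with the Intel Distribution for Python, the same lines simply dispatch to the optimized NumPy and SciPy builds bundled in the distribution.

    # Plain NumPy code; no Intel-specific APIs are required.
    import numpy as np

    a = np.random.rand(2048, 2048)
    b = np.random.rand(2048, 2048)

    # Dense matrix multiply and an FFT, both served by the optimized
    # math libraries that ship with the distribution.
    c = a @ b
    spectrum = np.fft.rfft(c, axis=0)
    print(c.shape, spectrum.shape)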
AI & Machine Learning Tools
PyTorch* Optimizations from Intel
Intel is one of the largest contributors to PyTorch*, regularly upstreaming optimizations to the PyTorch deep learning framework that deliver superior performance on Intel architectures. The AI Tools Selector includes the latest binary version of PyTorch tested to work with the rest of the kit, along with Intel® Extension for PyTorch*, which adds the newest Intel optimizations and usability features.
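As a minimal sketch of those usability features, the lines below show a typical inference flow with Intel® Extension for PyTorch*; the tiny placeholder model and tensor shapes are illustrative only, and the bfloat16 autocast step is optional.

    import torch
    import intel_extension_for_pytorch as ipex

    # Any stock PyTorch module works; a small placeholder model is used here.
    model = torch.nn.Sequential(
        torch.nn.Linear(128, 256),
        torch.nn.ReLU(),
        torch.nn.Linear(256, 10),
    ).eval()

    # ipex.optimize applies Intel-specific operator and memory-layout
    # optimizations; dtype=torch.bfloat16 additionally enables low precision.
    model = ipex.optimize(model, dtype=torch.bfloat16)

    x = torch.randn(32, 128)
    with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
        y = model(x)
    print(y.shape)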
TensorFlow* Optimizations from Intel
TensorFlow* has been directly optimized for Intel architecture, in collaboration with Google*, using the primitives of Intel® oneAPI Deep Neural Network Library (oneDNN) to maximize performance. The AI Tools Selector provides the latest binary version compiled with CPU-enabled settings, along with Intel® Extension for TensorFlow*, which seamlessly plugs into the stock version to add support for new devices and optimizations.
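A hedged sketch of how the pieces fit together: the oneDNN optimizations in stock TensorFlow can be toggled with the TF_ENABLE_ONEDNN_OPTS environment variable (on by default in recent releases), and installing Intel Extension for TensorFlow registers an additional pluggable device without any model-code changes; the device listing below simply shows what the runtime registered.

    import os

    # Toggle for the oneDNN optimizations built into stock TensorFlow.
    os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")

    import tensorflow as tf

    # If Intel Extension for TensorFlow is installed, it registers an extra
    # device here; existing Keras / tf.function code runs unchanged.
    print(tf.config.list_physical_devices())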
Intel® AI Reference Models
Access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source, machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
Intel® Neural Compressor
Reduce model size and speed up inference for deployment on CPUs or GPUs. The open source library provides a framework-independent API to perform model compression techniques such as quantization, pruning, and knowledge distillation.
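As a rough sketch of the compression API, the lines below outline post-training INT8 quantization of a small PyTorch model, assuming the Neural Compressor 2.x-style interface (PostTrainingQuantConfig plus quantization.fit); the model and calibration data are placeholders, and the exact API varies between releases.

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from neural_compressor.config import PostTrainingQuantConfig
    from neural_compressor.quantization import fit

    # A tiny FP32 model and random calibration data stand in for a real workload.
    model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 10)).eval()
    calib = DataLoader(TensorDataset(torch.randn(256, 64),
                                     torch.zeros(256, dtype=torch.long)),
                       batch_size=32)

    # Default configuration requests post-training INT8 quantization.
    q_model = fit(model=model, conf=PostTrainingQuantConfig(), calib_dataloader=calib)
    q_model.save("./quantized_model")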
Intel® Extension for Scikit-learn*
scikit-learn* is one of the most widely used Python* packages for data science and machine learning. Intel® Extension for Scikit-learn* provides a seamless way to speed up many scikit-learn algorithms on Intel CPUs and GPUs, both single- and multi-node.
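A minimal sketch of the patching workflow: calling patch_sklearn() before the scikit-learn imports reroutes supported estimators to the accelerated implementations, and the rest of the script is unchanged scikit-learn code (synthetic data here).

    from sklearnex import patch_sklearn
    patch_sklearn()  # must run before scikit-learn is imported below

    # From here on, supported estimators use the optimized implementations.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=100_000, centers=8, n_features=16, random_state=0)
    labels = KMeans(n_clusters=8, random_state=0).fit_predict(X)
    print(labels[:10])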
Intel® Optimization for XGBoost*
This open source, machine learning framework includes optimizations contributed by Intel. It runs on Intel hardware through Intel software acceleration powered by oneAPI libraries. No code changes are required.
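Because the optimizations are upstreamed, plain XGBoost code benefits as is; the sketch below is ordinary XGBoost training on synthetic regression data (parameter values are arbitrary).

    import numpy as np
    import xgboost as xgb

    # Synthetic regression data; any real dataset works the same way.
    rng = np.random.default_rng(0)
    X = rng.random((10_000, 20))
    y = X @ rng.random(20) + rng.normal(scale=0.1, size=10_000)

    dtrain = xgb.DMatrix(X, label=y)
    params = {"objective": "reg:squarederror", "tree_method": "hist"}
    booster = xgb.train(params, dtrain, num_boost_round=50)
    print(booster.eval(dtrain))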
Modin*
Scale data preprocessing across multiple nodes with this intelligent, distributed DataFrame library that has an API identical to pandas. Choose from distributed processing back ends: Ray, Dask*, or Message Passing Interface (MPI).
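A minimal sketch of the drop-in usage: pick a back end with the MODIN_ENGINE environment variable and change only the pandas import; the CSV path below is a placeholder.

    import os
    os.environ.setdefault("MODIN_ENGINE", "ray")  # or "dask"; set before importing modin

    # Only the import line differs from equivalent pandas code.
    import modin.pandas as pd

    df = pd.read_csv("large_dataset.csv")  # placeholder path
    summary = df.groupby(df.columns[0]).mean(numeric_only=True)
    print(summary.head())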
Performance Libraries
GStreamer Video Analytics Plug-ins
Use the GStreamer framework and build efficient, scalable video analytics applications with optimized plug-ins for video decode, encode, and inference.
Intel® Cryptography Primitives Library
Secure, fast, lightweight building blocks for cryptography, optimized for Intel CPUs.
Intel® Integrated Performance Primitives (Intel® IPP)
A secure, fast, and lightweight library of building blocks to speed up performance of imaging, signal processing, data compression, cryptography, and more.
Intel® MPI Library
Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture.
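Intel MPI is most often driven from C, C++, or Fortran; for a quick illustration from Python, the hedged sketch below uses mpi4py (a separate package layered over the MPI runtime, not part of this kit) and runs under the mpirun launcher that Intel MPI provides.

    # allreduce.py -- run with: mpirun -n 4 python allreduce.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank contributes its rank number; allreduce sums across all ranks.
    total = comm.allreduce(rank, op=MPI.SUM)
    if rank == 0:
        print(f"sum of ranks across {comm.Get_size()} processes = {total}")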
Intel® oneAPI Collective Communications Library (oneCCL)
Implement optimized communication patterns to distribute deep learning model training across multiple nodes.
Intel® oneAPI Data Analytics Library (oneDAL)
Boost machine learning and data analytics performance.
Intel® oneAPI Deep Neural Network Library (oneDNN)
Develop fast neural networks on Intel CPUs and GPUs with performance-optimized building blocks.
Intel® oneAPI DPC++ Library (oneDPL)
Speed up data parallel workloads with these key productivity algorithms and functions.
Intel® oneAPI Math Kernel Library (oneMKL)
Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
Intel® oneAPI Threading Building Blocks (oneTBB)
Simplify parallelism with this advanced threading and memory-management template library.
Intel® Video Processing Library (Intel® VPL)
Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing for broadcasting, live streaming and VOD, cloud gaming, and more.
Rendering and Ray Tracing Libraries
Intel® Embree
Improve the performance of photorealistic rendering applications with this library of ray tracing kernels. The kernels are optimized for the latest Intel® processors with support for Intel® Streaming SIMD Extensions 4.2 through the latest Intel® Advanced Vector Extensions 512.
Intel® Open Image Denoise
Improve image quality with machine learning algorithms that selectively filter visual noise. This independent component can be used for noise reduction on 3D rendered images, with or without Intel® Embree.
Intel® OpenSWR
Use a software rasterizer that's compatible with OpenGL* to work with datasets when GPU hardware isn't available or is limiting.
Note: Intel® OpenSWR is available as part of the Mesa OpenGL open source community project.
Intel® Open Path Guiding Library (Intel® Open PGL)
Increase rendering performance by improving the sampling quality of complex light transport effects. Integrate state-of-the-art path-guiding algorithms into your renderer.
Intel® Open Volume Kernel Library (Intel® Open VKL)
Enable rendering and simulation processing of 3D spatial data with low-level volumetric data-processing algorithms.
Intel® OSPRay
Develop interactive, high-fidelity visualization applications using this rendering API and ray tracing engine.
Intel® OSPRay for Hydra*
Connect the Intel® Rendering Toolkit libraries in your application to the universal scene description (USD) Hydra* rendering subsystem by using the Intel® OSPRay for Hydra* plug-in. This plug-in enables fast preview exploration for compositing and animation, as well as high-quality, physically based photorealistic rendering of USD content.
Intel® OSPRay Studio
Perform high-fidelity, ray traced, interactive, and real-time rendering through a graphical user interface with this new scene graph application addition to Intel® OSPRay.
Other Tools
Eclipse* IDE Plug-Ins
Simplify application development for systems and IoT edge devices using the provided plug-ins for the standards-based Eclipse* IDE.
Requires a separate download.
Linux* Kernel Build Tools
Using specialized platform project wizards that are integrated with Eclipse, quickly create, import, and customize Linux kernels based on the Yocto Project* for edge devices and systems.