Why oneMKL? Accelerate Math Computation on the Latest Hardware
Overview
With 20 years of maturity behind it, the Intel® Math Kernel Library remains the fastest and most widely used math library for Intel-based systems, a distinction it maintains through continual optimizations that deliver best-in-class performance.
This session focuses on its most recent iteration: Intel® oneAPI Math Kernel Library (oneMKL), with fast math-processing routines optimized for heterogeneous, multiarchitecture compute.
The session includes:
- How to use oneMKL to take full advantage of the latest built-in hardware acceleration engines, such as Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and Intel® Advanced Matrix Extensions (Intel® AMX), as well as the bfloat16 data type commonly used for machine learning (a minimal GEMM sketch follows this list).
- An illustration, with syntax specifics, of how the function domains (Basic Linear Algebra Subprograms [BLAS], Linear Algebra Package [LAPACK], fast Fourier transform [FFT], random number generation [RNG], and the PARDISO sparse solver) take advantage of 4th gen Intel® Xeon® Scalable processors and the Intel® Max Series product family.
- How oneMKL supports the latest OpenMP* standard and its expansion into SYCL*, the open-standards-based C++ framework for cross-architecture compute.
- Instructions and a demo showing how to map CUDA* math library calls (for example, the cuBLAS, cuFFT, and cuRAND libraries) to oneMKL (a SYCL-based sketch also follows this list).
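
To make the first two bullets concrete, here is a minimal, illustrative sketch (not taken from the session material) that calls single-precision GEMM through oneMKL's standard CBLAS interface. oneMKL inspects the host CPU at run time and dispatches to the fastest available kernels, so the same binary can pick up Intel AVX-512 paths, and the reduced-precision GEMM variants (such as bfloat16) can use Intel AMX on 4th gen Intel Xeon Scalable processors. Matrix sizes and values below are arbitrary placeholders.

```cpp
// Illustrative sketch: C = A * B through oneMKL's CBLAS interface.
// oneMKL selects the best code path (e.g., Intel AVX-512) at run time;
// no source changes are needed to benefit from newer instruction sets.
#include <cstdio>
#include <mkl.h>

int main() {
    const MKL_INT m = 512, n = 512, k = 512;  // arbitrary sizes

    // 64-byte-aligned buffers help the vectorized kernels.
    auto *a = static_cast<float *>(mkl_malloc(m * k * sizeof(float), 64));
    auto *b = static_cast<float *>(mkl_malloc(k * n * sizeof(float), 64));
    auto *c = static_cast<float *>(mkl_malloc(m * n * sizeof(float), 64));
    for (MKL_INT i = 0; i < m * k; ++i) a[i] = 1.0f;
    for (MKL_INT i = 0; i < k * n; ++i) b[i] = 1.0f;
    for (MKL_INT i = 0; i < m * n; ++i) c[i] = 0.0f;

    // C = 1.0 * A * B + 0.0 * C, row-major layout.
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0f, a, k, b, n, 0.0f, c, n);

    std::printf("c[0][0] = %.1f (expected %d)\n", c[0], static_cast<int>(k));

    mkl_free(a);
    mkl_free(b);
    mkl_free(c);
    return 0;
}
```

The oneMKL Link Line Advisor can generate the exact compile and link options for a given compiler and threading model.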
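For the SYCL and CUDA-mapping bullets, the sketch below shows the same operation expressed through oneMKL's SYCL interface, assuming the USM-based entry point oneapi::mkl::blas::column_major::gemm and the oneapi/mkl.hpp header; again, this is an illustration rather than session material. A cublasSgemm call maps naturally onto this routine, and the same source can run on a CPU or a GPU such as the Intel Max Series, depending on the device the queue selects.

```cpp
// Illustrative sketch: the SYCL-based oneMKL GEMM that a cublasSgemm call
// would typically map to. The queue's device can be a CPU or a GPU.
#include <cstdio>
#include <sycl/sycl.hpp>
#include <oneapi/mkl.hpp>

int main() {
    const std::int64_t m = 512, n = 512, k = 512;  // arbitrary sizes

    sycl::queue q;  // default device selection: GPU if present, else CPU

    // Unified shared memory is visible to both host and device.
    float *a = sycl::malloc_shared<float>(m * k, q);
    float *b = sycl::malloc_shared<float>(k * n, q);
    float *c = sycl::malloc_shared<float>(m * n, q);
    for (std::int64_t i = 0; i < m * k; ++i) a[i] = 1.0f;
    for (std::int64_t i = 0; i < k * n; ++i) b[i] = 1.0f;
    for (std::int64_t i = 0; i < m * n; ++i) c[i] = 0.0f;

    // C = 1.0 * A * B + 0.0 * C, column-major layout (as in cuBLAS).
    oneapi::mkl::blas::column_major::gemm(
        q,
        oneapi::mkl::transpose::nontrans, oneapi::mkl::transpose::nontrans,
        m, n, k,
        1.0f, a, m,
        b, k,
        0.0f, c, m)
        .wait();

    std::printf("c[0][0] = %.1f (expected %d)\n", c[0], static_cast<int>(k));

    sycl::free(a, q);
    sycl::free(b, q);
    sycl::free(c, q);
    return 0;
}
```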
Skill level: Intermediate and expert
Featured Software
Download oneMKL as a stand-alone library or as part of the Intel® oneAPI Base Toolkit.
Code samples (GitHub*):
Accelerate math processing and increase performance with advanced math routines and functions for science, engineering, and financial applications.