Intel® oneAPI Math Kernel Library (oneMKL) Essentials
Learn how to create performant applications and speed up computations with low-level math routines using the oneAPI programming model.
Overview
The Intel® oneAPI Math Kernel Library provides optimized math routines such as vector and matrix operations from Basic Linear Algebra Subprograms (BLAS) and the Linear Algebra Package (LAPACK), fast Fourier transforms (FFT), and random number generator (RNG) functions. The library extends this functionality to heterogeneous computing via the SYCL* and OpenMP* offload interfaces.
Use this learning path to get hands-on practice with Intel® oneMKL using a Jupyter* Notebook.
Objectives
Who is this for?
Developers who want to learn the basics of oneMKL for heterogeneous computing via SYCL and OpenMP offload interfaces.
What will I be able to do?
Practice the essential concepts and features of oneMKL.
Prerequisites
oneMKL simplifies the use of the oneAPI programming model and handles much of the work for you. To maximize your learning, complete these prerequisites:
Essentials of SYCL: Complete the first three modules.
- Introduction to SYCL
- SYCL Program Structure
- SYCL Unified Shared Memory
OpenMP Offload Basics: Complete all the modules.
Modules
Introduction to JupyterLab and Jupyter* Notebook
Use a Jupyter Notebook to modify, compile, and run code as part of the learning exercises.
Note If you are already familiar with Jupyter Notebooks, you may skip this module.
To begin, open Introduction_to_Jupyter.ipynb.
GEMM: Use SYCL and Buffer Model
- Implement a GEMM matrix multiplication application with the buffer and accessor style of memory management.
- Successfully compile and run the GEMM application using SYCL.
GEMM: Use SYCL* Unified Shared Memory (USM)
- Set up the DPC++ components necessary to run the oneMKL GEMM operation using a unified shared memory model with implicit memory management.
- Successfully compile and run the GEMM application using SYCL.
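The USM module centers on `sycl::malloc_shared`, which allocates memory that the runtime migrates implicitly between host and device. A minimal sketch of the pattern (it assumes the Intel® oneAPI DPC++/C++ compiler and oneMKL headers, e.g. `icpx -fsycl -qmkl`, and will not build without them) looks like this:

```cpp
#include <sycl/sycl.hpp>
#include <oneapi/mkl.hpp>

int main() {
    sycl::queue q{sycl::default_selector_v};
    const int m = 2, n = 2, k = 2;

    // Shared USM: accessible on host and device, migrated implicitly.
    float* A = sycl::malloc_shared<float>(m * k, q);
    float* B = sycl::malloc_shared<float>(k * n, q);
    float* C = sycl::malloc_shared<float>(m * n, q);
    for (int i = 0; i < m * k; ++i) A[i] = 1.0f;
    for (int i = 0; i < k * n; ++i) B[i] = 1.0f;
    for (int i = 0; i < m * n; ++i) C[i] = 0.0f;

    // C = 1.0 * A * B + 0.0 * C, column-major as in standard BLAS.
    oneapi::mkl::blas::column_major::gemm(
        q, oneapi::mkl::transpose::nontrans, oneapi::mkl::transpose::nontrans,
        m, n, k, 1.0f, A, m, B, k, 0.0f, C, m).wait();

    sycl::free(A, q);
    sycl::free(B, q);
    sycl::free(C, q);
    return 0;
}
```

Note that, unlike the buffer model, no accessors are needed: the `gemm` call reads and writes the USM pointers directly, and the returned event is waited on before the host touches the results.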
GEMM: Use OpenMP* Offload
- Implement a oneMKL GEMM application using OpenMP Offload.
- Learn the compiler directives needed to manage memory, dispatch oneMKL functions, and select the offload devices using OpenMP for the GEMM operation.
- Compile and run the GEMM application using the Intel® compiler with OpenMP Offload support, and then verify the results of the offloaded task.
Get Help
Your success is our success. Access the forum resources when you need assistance with the Intel oneAPI Math Kernel Library.