OpenMP* Offload Basics
Learn the fundamentals of using OpenMP* offload directives to target GPUs through hands-on practice in this guided learning path.
Overview
OpenMP* offload constructs are a set of directives for C++ and Fortran that were introduced in OpenMP 4.0 and further enhanced in later versions. These directives allow developers to offload data and execution to target accelerators such as GPUs. OpenMP offload is supported in the Intel® oneAPI HPC Toolkit with the Intel® C++ Compiler and the Intel® Fortran Compiler, and targets Intel® GPUs, including those based on the Xe architecture.
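As a quick illustration of what these directives look like, the sketch below offloads a single region to the default device. It is a minimal example under stated assumptions, not code from the course; the compile line for the Intel C++ Compiler would typically be something like `icpx -fiopenmp -fopenmp-targets=spir64`.

```cpp
#include <cstdio>

int main() {
    int x = 41;

    // The target construct offloads the enclosed region to the default
    // device; map(tofrom:) copies x to the device and back afterwards.
    #pragma omp target map(tofrom: x)
    {
        x += 1;
    }

    std::printf("x = %d\n", x);  // prints 42, computed on the device if one is available
    return 0;
}
```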
Follow this learning path to get hands-on practice with OpenMP Offload Basics using a Jupyter* Notebook.
Objectives
Who is this for?
Developers who want to learn the basics of applying OpenMP offload directives to target GPUs.
What will I be able to do?
Practice the essential concepts and features of OpenMP offload with live sample code.
Modules
Introduction to OpenMP Offload
- Articulate how oneAPI can help solve the challenges of programming in a heterogeneous world.
- Use oneAPI solutions to enable your workflows.
- Use OpenMP offload directives to execute code on the GPU (a minimal sketch follows this list).
- Become familiar with using Jupyter Notebook for training throughout the course.
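A common first exercise when executing code on the GPU is confirming that a region actually ran on the device. The following sketch uses `omp_is_initial_device()` for that check; the variable names are illustrative and not taken from the notebook.

```cpp
#include <cstdio>
#include <omp.h>

int main() {
    int on_host = 1;

    // Offload a region and record whether it ran on the initial (host) device.
    #pragma omp target map(from: on_host)
    {
        on_host = omp_is_initial_device();
    }

    if (on_host)
        std::printf("Region ran on the host (no offload device available).\n");
    else
        std::printf("Region ran on an offload device, such as a GPU.\n");
    return 0;
}
```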
Manage Device Data
Use OpenMP constructs to effectively manage data transfers to and from the device (a sketch follows the objectives below).
- Create a device data environment and map data to it.
- Map global variables to OpenMP devices.
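The sketch below, with illustrative array names not taken from the course, shows both ideas in this module: a structured device data environment created with `target data` and explicit `map` clauses, and a global variable made available on the device with `declare target`.

```cpp
#include <cstdio>

// declare target maps a global variable into the device data environment.
#pragma omp declare target
int scale = 2;
#pragma omp end declare target

int main() {
    const int N = 1024;
    float a[N], b[N];
    for (int i = 0; i < N; ++i) a[i] = static_cast<float>(i);

    // target data creates a device data environment that outlives
    // the individual target regions nested inside it.
    #pragma omp target data map(to: a[0:N]) map(from: b[0:N])
    {
        #pragma omp target
        for (int i = 0; i < N; ++i)
            b[i] = scale * a[i];   // 'scale' is the device copy of the global
    }

    std::printf("b[10] = %f\n", b[10]);  // expected 20.0
    return 0;
}
```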
OpenMP Device Parallelism
- Explain basic GPU architecture.
- Use OpenMP offload work-sharing constructs to fully utilize the GPU (a sketch follows this list).
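A typical work-sharing pattern combines `target`, `teams`, `distribute`, and `parallel for` so that loop iterations are spread across all of the GPU's compute resources. The following is a minimal sketch with illustrative names, not code from the course.

```cpp
#include <cstdio>

int main() {
    const int N = 1 << 20;
    float *a = new float[N];
    float *b = new float[N];
    for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // teams creates a league of teams on the device; distribute spreads
    // loop chunks across the teams; parallel for shares each chunk among
    // the threads of a team, so the whole GPU can be utilized.
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N]) map(tofrom: b[0:N])
    for (int i = 0; i < N; ++i)
        b[i] += a[i];

    std::printf("b[0] = %f\n", b[0]);  // expected 3.0
    delete[] a;
    delete[] b;
    return 0;
}
```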