C/C++ or Fortran with OpenMP* Offload Programming Model
The Intel® oneAPI DPC++/C++ Compiler and the Intel® Fortran Compiler enable software developers to use OpenMP* directives to offload work to Intel accelerators to improve the performance of applications.
This section describes the use of OpenMP directives to target computations to the accelerator. Developers unfamiliar with OpenMP directives can find basic usage information documented in the OpenMP Support sections of the Intel® oneAPI DPC++/C++ Compiler Developer Guide and Reference or Intel® Fortran Compiler Developer Guide and Reference.
Basic OpenMP Target Construct
The OpenMP target construct is used to transfer control from the host to the target device. Variables are mapped between the host and the target device. The host thread waits until the offloaded computations are complete. Other OpenMP tasks may be used for asynchronous execution on the host; use the nowait clause to specify that the encountering thread does not wait for the target region to complete.
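For example, a target region can be made asynchronous with the nowait clause and later synchronized with a taskwait directive. The following minimal C sketch illustrates this pattern; the function name, array, and host work shown are illustrative assumptions rather than part of the examples that follow.

void async_offload(float *x, int n)   /* hypothetical helper */
{
    /* The encountering host thread does not wait for this target region. */
    #pragma omp target map(tofrom: x[0:n]) nowait
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        x[i] = 2.0f * x[i];

    /* ... other host work can run here while the device computes ... */

    /* Wait for the deferred target task to complete. */
    #pragma omp taskwait
}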
C/C++
The C++ code snippet below targets a SAXPY computation to the accelerator.
#pragma omp target map(tofrom: fa) map(to: fb, a)
#pragma omp parallel for firstprivate(a)
for (k = 0; k < FLOPS_ARRAY_SIZE; k++)
    fa[k] = a * fa[k] + fb[k];
Array fa is mapped both to and from the accelerator since fa is both input to and output from the calculation. Array fb and the variable a are required as input to the calculation and are not modified, so there is no need to copy them out. The variable FLOPS_ARRAY_SIZE is implicitly mapped to the accelerator. The loop index k is implicitly private according to the OpenMP specification.
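A complete, self-contained version of this example might look like the following sketch; the array size, initialization values, and printed output are illustrative assumptions.

#include <stdio.h>

#define FLOPS_ARRAY_SIZE 1024

int main(void)
{
    float fa[FLOPS_ARRAY_SIZE], fb[FLOPS_ARRAY_SIZE];
    float a = 2.0f;
    int k;

    /* Initialize the input arrays on the host. */
    for (k = 0; k < FLOPS_ARRAY_SIZE; k++) {
        fa[k] = (float)k;
        fb[k] = 1.0f;
    }

    /* Offload the SAXPY loop: fa is copied both ways, fb and a only to the device. */
    #pragma omp target map(tofrom: fa) map(to: fb, a)
    #pragma omp parallel for firstprivate(a)
    for (k = 0; k < FLOPS_ARRAY_SIZE; k++)
        fa[k] = a * fa[k] + fb[k];

    printf("fa[1] = %f\n", fa[1]);
    return 0;
}

This file can be built with the icx commands shown in the Compile to Use OpenMP TARGET section below.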
Fortran
This Fortran code snippet targets a matrix multiply to the accelerator.
!$omp target map(to: a, b) map(tofrom: c)
!$omp parallel do private(j,i,k)
do j=1,n
   do i=1,n
      do k=1,n
         c(i,j) = c(i,j) + a(i,k) * b(k,j)
      enddo
   enddo
enddo
!$omp end parallel do
!$omp end target
Arrays a and b are mapped to the accelerator, while array c is both input to and output from the accelerator. The variable n is implicitly mapped to the accelerator. The private clause is optional since loop indices are automatically private according to the OpenMP specification.
Map Variables
To optimize data sharing between the host and the accelerator, the target data directive maps variables to the accelerator, and the variables remain mapped on the device for the extent of the target data region. This is useful when the same variables are used across multiple target regions.
C/C++
#pragma omp target data [clause[[,] clause],...]
structured-block
Fortran
!$omp target data [clause[[,] clause],...]
structured-block
!$omp end target data
Clauses
The clauses can be one or more of the following. See TARGET DATA for more information.
DEVICE (integer-expression)
IF ([TARGET DATA:] scalar-logical-expression)
MAP ([[map-type-modifier[,]] map-type: ] list)
NOTE: The map type can be one or more of the following: alloc, to, from, tofrom, delete, release.
SUBDEVICE ([integer-constant ,] integer-expression [ : integer-expression [ : integer-expression]])
USE_DEVICE_ADDR (list) (available only in ifx)
USE_DEVICE_PTR (ptr-list)
NOTE: The SUBDEVICE clause is ignored in the following cases:
If ZE_FLAT_DEVICE_HIERARCHY is set to FLAT or COMBINED.
If the environment variable LIBOMPTARGET_DEVICES is set to SUBDEVICE or SUBSUBDEVICE.
If the environment variable ONEAPI_DEVICE_SELECTOR is used to select devices.
Use the target update directive, or the always map-type-modifier in a map clause, to synchronize an original variable on the host with the corresponding variable on the device. The sketch below combines this with a target data region.
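In the following C sketch, a target data region keeps two arrays resident on the device across two target regions, and a target update directive refreshes the device copy of one array after the host modifies it. The function name, array names, and the host-side update are illustrative assumptions.

void scale_twice(float *v, float *w, int n, float s)   /* hypothetical helper */
{
    /* Map v and w once; they stay on the device for the whole region. */
    #pragma omp target data map(tofrom: v[0:n]) map(to: w[0:n])
    {
        /* First offloaded computation uses the mapped data. */
        #pragma omp target
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            v[i] = s * v[i] + w[i];

        /* The host changes w ... */
        for (int i = 0; i < n; i++)
            w[i] += 1.0f;

        /* ... so refresh the device copy before the next target region. */
        #pragma omp target update to(w[0:n])

        /* Second offloaded computation sees the updated w. */
        #pragma omp target
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            v[i] = s * v[i] + w[i];
    }  /* v is copied back to the host at the end of the target data region. */
}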
Compile to Use OpenMP TARGET
The following example commands illustrate how to compile an application using OpenMP target.
C/C++
Linux:
icx -fiopenmp -fopenmp-targets=spir64 code.c
Windows (you can use icx or icpx):
icx /Qiopenmp /Qopenmp-targets=spir64 code.c
Fortran
Linux:
ifx -fiopenmp -fopenmp-targets=spir64 code.f90
Windows:
ifx /Qiopenmp /Qopenmp-targets=spir64 code.f90
Additional OpenMP Offload Resources
Intel offers code samples that demonstrate using OpenMP directives to target accelerators at https://github.com/oneapi-src/oneAPI-samples/tree/master/DirectProgramming. Specific samples include:
Matrix Multiplication is a simple program that multiplies two large matrices and verifies the results. The program is implemented in two ways: with SYCL* and with OpenMP.
The ISO3DFD OpenMP Offload sample demonstrates three-dimensional finite-difference wave propagation in isotropic media. ISO3DFD is a three-dimensional stencil that simulates a wave propagating through a 3D isotropic medium; it illustrates common challenges and techniques for achieving good performance when offloading more complex applications with OpenMP.
openmp_reduction is a simple program that calculates pi. This program is implemented using C++ and OpenMP for CPUs and accelerators based on Intel® Architecture.
LLVM/OpenMP Runtimes describes the distinct types of runtimes available and can be helpful when debugging OpenMP offload.
The oneAPI GPU Optimization Guide gives extensive tips for getting the best GPU performance for oneAPI programs.
Offload and Optimize OpenMP* Applications with Intel Tools describes how to use OpenMP* directives to add parallelism to your application.
openmp.org has an examples document: https://www.openmp.org/wp-content/uploads/openmp-examples-4.5.0.pdf. Chapter 4 of the examples document focuses on accelerator devices and the target construct.
There are a number of useful OpenMP books. See the listing at: https://www.openmp.org/resources/openmp-books
Details on using Intel compilers with OpenMP offload, including lists of supported options and example code, are available in the compiler developer guides: