Intel® oneAPI DPC++/C++ Compiler Developer Guide and Reference

ID 767253
Date 3/22/2024
Public

Supported Environment Variables

You can customize your system environment by specifying the paths where the compiler searches for certain files, such as libraries, include files, and configuration files, and by defining certain settings.

Compiler Compile-Time Environment Variables

The following table shows the compile-time environment variables that affect the compiler:

Compile-Time Environment Variable

Description

CL (Windows)

_CL_ (Windows)

Define the files and options you use most often with these variables; options set in CL are processed before the options on the command line, and options set in _CL_ are processed after them. Note: You cannot set the CL environment variable to a string that contains an equal sign. You can use the pound sign instead. In the following example, the pound sign (#) is used as a substitute for an equal sign in the assigned string: SET CL=/Dtest#100

ICXCFG

Specifies the configuration file for customizing compilations when invoking the compiler using icx. Used instead of the default configuration file.

ICPXCFG

Specifies the configuration file for customizing compilations when invoking the compiler using icpx. Used instead of the default configuration file.
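
For example, a minimal sketch pointing the icx driver at a custom configuration file (the path below is hypothetical):

# CONFIGURATION FILE EXAMPLE
setenv ICXCFG /my/config/dir/my_icx.cfg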

__INTEL_PRE_CFLAGS

__INTEL_POST_CFLAGS

Specifies a set of compiler options to add to the compile line.

This is an extension to the facility already provided in the compiler configuration file icx.cfg.

You can insert command line options in the prefix position using __INTEL_PRE_CFLAGS, or in the suffix position using __INTEL_POST_CFLAGS. The command line is built as follows:

Syntax: icx <PRE flags> <flags from configuration file> <flags from the compiler invocation> <POST flags>
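
For example, a minimal sketch on Linux (the option values are only illustrative):

# PRE/POST FLAGS EXAMPLE
setenv __INTEL_PRE_CFLAGS "-O3"
setenv __INTEL_POST_CFLAGS "-g"
# The invocation "icx -c sample.c" is then built as:
# icx -O3 <flags from configuration file> -c sample.c -g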

NOTE:
By default, a configuration file named icx.cfg (Windows, Linux) or icpx.cfg (Linux) is used. This file is in the same directory as the compiler executable. To use a configuration file in another location, use the ICXCFG (Windows, Linux) or ICPXCFG (Linux) environment variable to assign the directory and file name of the configuration file.
NOTE:
The driver issues a warning that the compiler is overriding an option because of an environment variable, but only when you include the option /W5 (Windows) or -w3 (Linux).

PATH

Specifies the directories the system searches for binary executable files.

NOTE:
On Windows, this also affects the search for Dynamic Link Libraries (DLLs).

TMP

TMPDIR

TEMP

Specifies the location for temporary files. If none of these variables is set, or the specified location is not writable or cannot be found, the compiler stores temporary files in /tmp (Linux) or the current directory (Windows).

The compiler searches for these variables in the following order: TMP, TMPDIR, and TEMP.

NOTE:

On Windows, these environment variables cannot be set from Visual Studio.

LD_LIBRARY_PATH (Linux)

Specifies the location for shared objects (.so files).

INCLUDE (Windows)

Specifies the directories for the source header files (include files).

LIB (Windows)

Specifies the directories for all libraries used by the compiler and linker.

GNU Environment Variables and Extensions

CPATH (Linux)

Specifies the path to the include directory for C/C++ compilations.

C_INCLUDE_PATH (Linux)

Specifies the path to the include directory for C compilations.

CPLUS_INCLUDE_PATH (Linux)

Specifies the path to the include directory for C++ compilations.

DEPENDENCIES_OUTPUT (Linux)

Specifies how to output dependencies for make based on the non-system header files processed by the compiler. System header files are ignored in the dependency output.

GCC_EXEC_PREFIX (Linux)

Specifies alternative names for the linker (ld) and assembler (as).

LIBRARY_PATH (Linux)

Specifies the path for libraries to be used during the link phase.

SUNPRO_DEPENDENCIES (Linux)

This variable is the same as DEPENDENCIES_OUTPUT, except that system header files are not ignored.

Compiler Runtime Environment Variables

The following table summarizes compiler environment variables that are recognized at runtime.

Runtime Environment Variable

Description

GNU extensions (recognized by the Intel OpenMP* compatibility library)

GOMP_CPU_AFFINITY (Linux)

GNU extension recognized by the Intel OpenMP compatibility library. Specifies a list of OS processor IDs.

You must set this environment variable before the first parallel region or before certain API calls, including omp_get_max_threads(), omp_get_num_procs(), and any affinity API calls. For detailed information on this environment variable, see Thread Affinity Interface.

Default: Affinity is disabled
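
For example, a minimal sketch pinning OpenMP threads to the first four OS processors (the ID list follows the GNU convention of individual IDs and ranges):

# GOMP_CPU_AFFINITY EXAMPLE
setenv GOMP_CPU_AFFINITY "0-3"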

GOMP_STACKSIZE (Linux)

GNU extension recognized by the Intel OpenMP compatibility library. Same as OMP_STACKSIZE. KMP_STACKSIZE overrides GOMP_STACKSIZE, which overrides OMP_STACKSIZE.

Default: See the description for OMP_STACKSIZE.

OpenMP Environment Variables (OMP_) and Extensions (KMP_)

OMP_CANCELLATION

Activates cancellation of the innermost enclosing region of the type specified. If set to TRUE, the effects of the cancel construct and of cancellation points are enabled and cancellation is activated. If set to FALSE, cancellation is disabled and the cancel construct and cancellation points are effectively ignored.

NOTE:

Internal barrier code works differently depending on whether cancellation is enabled. The barrier code repeatedly checks a global flag to determine whether cancellation has been triggered. If a thread observes cancellation, it leaves the barrier prematurely with the return value 1 (and may wake up other threads). Otherwise, it leaves the barrier with the return value 0.

Default: FALSE

Example: OMP_CANCELLATION=TRUE

OMP_DISPLAY_ENV

Enables (TRUE) or disables (FALSE) the printing to stderr of the OpenMP version number and the values of the OpenMP environment variables.

Possible values are: TRUE, FALSE, or VERBOSE.

Default: FALSE

Example: OMP_DISPLAY_ENV=TRUE

OMP_DEFAULT_DEVICE

Sets the device that will be used in a target region. The OpenMP routine omp_set_default_device or a device clause in a target pragma can override this variable.

If no device with the specified device number exists, the code is executed on the host. If this environment variable is not set, device number 0 is used.

OMP_DYNAMIC

Enables (TRUE) or disables (FALSE) the dynamic adjustment of the number of threads.

Default:

  • TRUE: When the environment variable TCM_ENABLE=1 and the Thread Composability Manager library is available.
  • FALSE: In all other cases.

Example: OMP_DYNAMIC=TRUE

OMP_MAX_ACTIVE_LEVELS

Sets the maximum number of levels of parallel nesting for the program.

Possible values: Non-negative integer.

Default: 1

OMP_NESTED

Deprecated; use OMP_MAX_ACTIVE_LEVELS instead.

OMP_NUM_THREADS

Sets the maximum number of threads to use for OpenMP parallel regions if no other value is specified in the application.

The value can be a single integer, in which case it specifies the number of threads for all parallel regions. The value can also be a comma-separated list of integers, in which case each integer specifies the number of threads for a parallel region at a nesting level.

The first position in the list represents the outermost parallel nesting level, the second position represents the next-inner parallel nesting level, and so on. The integer can be omitted at any level, but the commas separating the levels must be kept. If the first integer in the list is omitted, the normal default number of threads is used at the outermost level. If the integer is omitted at any other level, the number of threads for that level is inherited from the previous level.

This environment variable applies to the options [q or Q]openmp and [Q]parallel.

Default: The number of processors visible to the operating system on which the program is executed.

Syntax: OMP_NUM_THREADS=value[,value]*
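
For example, a minimal sketch of a nested setting in which the outermost parallel region uses 4 threads and regions at the next nesting level use 2:

# OMP_NUM_THREADS EXAMPLE
setenv OMP_NUM_THREADS "4,2"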

OMP_PLACES

Specifies an explicit ordered list of places, either as an abstract name describing a set of places or as an explicit list of places described by nonnegative numbers. An exclusion operator “!” can also be used to exclude the number or place immediately following the operator.

For explicit lists, the meaning of the numbers and how the numbering is done for a list of nonnegative numbers are implementation defined. Generally, the numbers represent the smallest unit of execution exposed by the execution environment, typically a hardware thread.

Intervals can be specified using the <lower-bound> : <length> : <stride> notation to represent the following list of numbers:

<lower-bound>, <lower-bound> + <stride>, ..., <lower-bound> + (<length> - 1)*<stride>

When <stride> is omitted, a unit stride is assumed. Intervals can specify numbers within a place as well as sequences of places.

# EXPLICIT LIST EXAMPLE
setenv OMP_PLACES "{0,1,2,3},{4,5,6,7},{8,9,10,11},{12,13,14,15}"
setenv OMP_PLACES "{0:4},{4:4},{8:4},{12:4}"
setenv OMP_PLACES "{0:4}:4:4"

The abstract names listed below should be understood by the execution and runtime environment:

  • threads: Each place corresponds to a single hardware thread on the target machine.
  • cores: Each place corresponds to a single core (having one or more hardware threads) on the target machine.
  • ll_caches: Each place corresponds to a set of cores that share the last level cache on the device.
  • numa_domains: Each place corresponds to a set of cores whose closest memory on the device is the same memory, at a similar distance from the cores.
  • sockets: Each place corresponds to a single socket (consisting of one or more cores) on the target machine.

Depending on the runtime environment and machine topology, certain topology layers may also be available from the following abstract names:

  • dice: Each place corresponds to a single die (consisting of one or more cores) on the target machine.
  • modules: Each place corresponds to a single module (consisting of one or more cores) on the target machine.
  • tiles: Each place corresponds to a single tile (consisting of one or more cores) on the target machine.
  • l1_caches: Each place corresponds to a single L1 cache (consisting of one or more cores) on the target machine.
  • l2_caches: Each place corresponds to a single L2 cache (consisting of one or more cores) on the target machine.
  • l3_caches: Each place corresponds to a single L3 cache (consisting of one or more cores) on the target machine.

If Intel® Hybrid Technology is available in the machine topology, certain topology layers with attributes may also be available from the following abstract names:

  • cores:<attribute>: Where <attribute> can be one of the following:
    • Core type: Either intel_atom or intel_core
    • Core efficiency: Specified as effnum where num is a number from 0 to the number of core efficiencies detected in the machine topology minus one. Examples:
      • OMP_PLACES=cores:intel_core
      • OMP_PLACES=cores:eff1

When requesting fewer places or more resources than available on the system, the determination of which resources of type abstract_name are to be included in the place list is implementation-defined. The precise definitions of the abstract names are implementation defined. An implementation may also add abstract names as appropriate for the target platform. The abstract name may be appended by a positive number in parentheses to denote the length of the place list to be created, that is abstract_name(num-places).

# ABSTRACT NAMES EXAMPLE
  setenv OMP_PLACES threads
  setenv OMP_PLACES threads(4)

NOTE:

If any numerical values cannot be mapped to a processor on the target platform the behavior is implementation-defined. The behavior is also implementation-defined when the OMP_PLACES environment variable is defined using an abstract name.

OMP_PROC_BIND (Windows, Linux)

Sets the thread affinity policy to be used for parallel regions at the corresponding nested level. Enables (TRUE) or disables (FALSE) the binding of threads to processor contexts. If enabled, this is the same as specifying KMP_AFFINITY=scatter. If disabled, this is the same as specifying KMP_AFFINITY=none.

Acceptable values: TRUE, FALSE, or a comma separated list, each element of which is one of the following values: PRIMARY, MASTER (deprecated), CLOSE, SPREAD.

Default: FALSE

If set to FALSE, the execution environment may move OpenMP threads between OpenMP places, thread affinity is disabled, and proc_bind clauses on parallel constructs are ignored. Otherwise, the execution environment should not move OpenMP threads between OpenMP places, thread affinity is enabled, and the initial thread is bound to the first place in the OpenMP place list.

If set to PRIMARY, all threads are bound to the same place as the primary thread. If set to CLOSE, threads are bound to successive places close to where the primary thread is bound. If set to SPREAD, the primary thread's partition is subdivided and threads are bound to successive single-place sub-partitions.
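
For example, a minimal sketch that spreads threads at the outermost level and binds nested threads close to their primary thread:

# OMP_PROC_BIND EXAMPLE
setenv OMP_PROC_BIND "SPREAD,CLOSE"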

NOTE:

KMP_AFFINITY takes precedence over GOMP_CPU_AFFINITY and OMP_PROC_BIND. GOMP_CPU_AFFINITY takes precedence over OMP_PROC_BIND.

OMP_SCHEDULE

Sets the runtime schedule type and an optional chunk size.

Default: static, no chunk size specified

Example syntax: OMP_SCHEDULE="[modifier:]kind[,chunk_size]" where

  • modifier is one of monotonic or nonmonotonic
  • kind is one of static, dynamic, guided, or auto
  • chunk_size is a positive integer
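
For example, a minimal sketch selecting a nonmonotonic dynamic schedule with a chunk size of 4:

# OMP_SCHEDULE EXAMPLE
setenv OMP_SCHEDULE "nonmonotonic:dynamic,4"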

OMP_STACKSIZE

Sets the number of bytes to allocate for each OpenMP thread to use as the private stack for the thread. Recommended size is 16M.

Use the optional suffixes B (bytes), K (kilobytes), M (megabytes), G (gigabytes), or T (terabytes) to specify the units. If you specify a value without a suffix, the byte unit is assumed to be K (kilobytes).

This variable does not affect the native operating system threads created by the user program, or the thread executing the sequential part of an OpenMP program.

The kmp_{set,get}_stacksize_s() routines set/retrieve the value. The kmp_set_stacksize_s() routine must be called from the sequential part of the program, before the first parallel region is created. Otherwise, calling kmp_set_stacksize_s() has no effect.

Default (Intel® 64 architecture): 4M

Related environment variables: KMP_STACKSIZE (overrides OMP_STACKSIZE).

Syntax: OMP_STACKSIZE=value
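
For example, to request the recommended 16-megabyte stack for each OpenMP thread:

# OMP_STACKSIZE EXAMPLE
setenv OMP_STACKSIZE 16M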

OMP_THREAD_LIMIT

Limits the number of simultaneously-executing threads in an OpenMP program.

If this limit is reached and another native operating system thread encounters OpenMP API calls or constructs, the program can abort with an error message. If this limit is reached when an OpenMP parallel region begins, a one-time warning message might be generated indicating that the number of threads in the team was reduced, but the program will continue.

This environment variable is only used for programs compiled with the following options: [q or Q]openmp and [Q]parallel.

The omp_get_thread_limit() routine returns the value of the limit.

Default: No enforced limit

Related environment variable: KMP_ALL_THREADS (overrides OMP_THREAD_LIMIT).

Example syntax: OMP_THREAD_LIMIT=value

OMP_WAIT_POLICY

Decides whether threads spin (active) or yield (passive) while they are waiting.

OMP_WAIT_POLICY=ACTIVE is an alias for KMP_LIBRARY=turnaround, and OMP_WAIT_POLICY=PASSIVE is an alias for KMP_LIBRARY=throughput.

Default: Passive

Syntax: OMP_WAIT_POLICY=value

OMP_DISPLAY_AFFINITY

Instructs the runtime to display formatted affinity information for all OpenMP threads in the parallel region upon entering the first parallel region and when any change occurs in the information accessible by the format specifiers listed in the OMP_AFFINITY_FORMAT entry.

Possible values: TRUE or FALSE

Default: FALSE

OMP_AFFINITY_FORMAT

Defines the format used when displaying OpenMP thread affinity information. Possible values are any string using the following format fields:

  • %t or %{team_num}: Value returned by omp_get_team_num()
  • %T or %{num_teams}: Value returned by omp_get_num_teams()
  • %L or %{nesting_level}: Value returned by omp_get_level()
  • %n or %{thread_num}: Value returned by omp_get_thread_num()
  • %a or %{ancestor_tnum}: Value returned by omp_get_ancestor_thread_num(omp_get_level() - 1)
  • %H or %{host}: Name of host device
  • %P or %{process_id}: Process ID
  • %i or %{native_thread_id}: Native thread ID on the platform
  • %A or %{thread_affinity}: List of processor IDs on which a thread may execute

Default: 'OMP: pid %P tid %i thread %n bound to OS proc set {%A}'
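
A minimal sketch of a custom format built only from the fields listed above, with display enabled:

# OMP_AFFINITY_FORMAT EXAMPLE
setenv OMP_DISPLAY_AFFINITY TRUE
setenv OMP_AFFINITY_FORMAT "host %H pid %P thread %n bound to {%A}"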

OMP_MAX_TASK_PRIORITY

Controls the use of task priorities by setting the initial value.

Possible values: Non-negative integer.

Default: 0

OMP_TOOL

Controls whether the OpenMP runtime tries to register a first-party tool that uses the OMPT interface.

Possible values: ENABLED or DISABLED.

Default: ENABLED

NOTE:
Only the host OpenMP runtime is supported.

OMP_TOOL_LIBRARIES

Sets a list of first-party tool locations that use the OMPT interface. The list enumerates names of dynamically loadable libraries, separated by the OS-specific path separator.

Default: Empty

NOTE:
Only the host OpenMP runtime is supported.

OMP_TOOL_VERBOSE_INIT

Controls whether the OpenMP runtime will verbosely log the registration of a tool that uses the OMPT interface.

Possible values:

  • DISABLED: Do not log the registration.
  • STDOUT: Log the registration to stdout.
  • STDERR: Log the registration to stderr.
  • File_Name: Log the registration to the location specified by File_Name.

Default: DISABLED

NOTE:
Only the host OpenMP runtime is supported.

OMP_DEBUG

Controls whether the OpenMP runtime collects information that an OMPD library may need to support a tool.

Possible values: ENABLED or DISABLED.

Default: DISABLED

NOTE:
Only the host OpenMP runtime is supported.

OMP_ALLOCATOR

Specifies the default allocator for allocation calls, directives, and clauses that do not specify an allocator.

Default: omp_default_mem_alloc

Syntax: <PredefinedMemAllocator> | <PredefinedMemSpace> | <PredefinedMemSpace>:<Traits>

Currently supported values for <PredefinedMemAllocator> and <PredefinedMemSpace>:

  • omp_default_mem_alloc and omp_default_mem_space

Additional values are supported if libmemkind is available and there is system support for it:

  • omp_high_bw_mem_alloc and omp_high_bw_mem_space
  • omp_large_cap_mem_alloc and omp_large_cap_mem_space

Refer to the OpenMP specification for more information.
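
For example, to select the high-bandwidth allocator (available when libmemkind is supported on the system):

# OMP_ALLOCATOR EXAMPLE
setenv OMP_ALLOCATOR omp_high_bw_mem_alloc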

OMP_NUM_TEAMS

Sets the maximum number of teams created by a teams construct by setting the nteams-var ICV.

Possible values: Positive integer.

Default: 1

OMP_TEAMS_THREAD_LIMIT

Sets the maximum number of OpenMP threads to use in each team created by a teams construct.

Possible values: Positive integer.

Default: <NumberOfProcessors> / <nteams-var ICV>

KMP_AFFINITY (Linux, Windows)

Enables the runtime library to bind threads to physical processing units.

You must set this environment variable before the first parallel region or before certain API calls, including omp_get_max_threads(), omp_get_num_procs(), and any affinity API calls. For detailed information on this environment variable, see Thread Affinity Interface.

Default: noverbose,warnings,noreset,respect,granularity=core,none

Default (Windows with multiple processor groups): noverbose,warnings,noreset,norespect,granularity=group,compact,0,0

NOTE:
On Windows with multiple processor groups, the norespect affinity modifier is assumed when the process affinity mask equals a single processor group (which is the default on Windows). Otherwise, the respect affinity modifier is used.
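
For example, a minimal sketch that prints the detected topology and packs threads onto adjacent hardware contexts (see Thread Affinity Interface for the full list of modifiers):

# KMP_AFFINITY EXAMPLE
setenv KMP_AFFINITY "verbose,granularity=core,compact"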

KMP_HIDDEN_HELPER_AFFINITY (Linux only)

Enables the runtime library to bind hidden helper threads to physical processing units.

You must set this environment variable before the first hidden helper task, the first parallel region, or certain API calls, including omp_get_max_threads(), omp_get_num_procs(), and any affinity API calls. For detailed information on this environment variable, see Thread Affinity Interface.

The syntax of this environment variable is equivalent to KMP_AFFINITY except that reset/noreset and respect/norespect modifiers are not available for this environment variable.

Default: noverbose,warnings,granularity=core,none

KMP_ALL_THREADS

Limits the number of simultaneously-executing threads in an OpenMP program. If this limit is reached and another native operating system thread encounters OpenMP API calls or constructs, then the program may abort with an error message. If this limit is reached at the time an OpenMP parallel region begins, a one-time warning message may be generated indicating that the number of threads in the team was reduced, but the program will continue execution.

This environment variable is only used for programs compiled with the [q or Q]openmp compiler option.

Default: No enforced limit.

KMP_BLOCKTIME

Sets the time that a thread should busy-wait after completing execution of a parallel region before going to sleep.

Use the optional character suffixes: us (microseconds) or ms (milliseconds) to specify the units.

When no character suffix is specified, milliseconds are assumed.

Specify infinite for an unlimited wait time.

Default:

  • When Intel® Hybrid Technology is detected, 0 milliseconds
  • In all other cases, 200 milliseconds

Related environment variable: KMP_LIBRARY.
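
For example, to send threads to sleep immediately after a parallel region completes, or to have them busy-wait indefinitely:

# KMP_BLOCKTIME EXAMPLES
setenv KMP_BLOCKTIME 0
setenv KMP_BLOCKTIME infinite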

KMP_CPUINFO_FILE

Specifies an alternate file name for a file containing the machine topology description. The file must be in the same format as /proc/cpuinfo.

Default: None

KMP_DETERMINISTIC_REDUCTION

Enables (TRUE) or disables (FALSE) the use of a specific ordering of the reduction operations when implementing the reduction clause for an OpenMP parallel region. As a result, for a given number of threads, a given parallel region, and a given data set and reduction operation, a floating-point reduction done for an OpenMP reduction clause has a consistent floating-point result from run to run, since the round-off errors are identical.

NOTE:
When compiling, you must set the following flag to ensure correct behavior:
  • -fp-model precise (Linux)
  • -fp:precise (Windows)

Default: FALSE

KMP_DYNAMIC_MODE

Selects the method used to determine the number of threads to use for a parallel region when OMP_DYNAMIC=TRUE. Possible values:

  • tcm: Requests threads from the Thread Composability Manager.

  • load_balance: Tries to avoid using more threads than available execution units on the machine.

  • thread_limit: Tries to avoid using more threads than total execution units on the machine.

Default (Intel® 64 architecture):

  • When the Thread Composability Manager library is available, use tcm.
  • In all other cases, use thread_limit.

KMP_HOT_TEAMS_MAX_LEVEL

Sets the maximum nested level to which teams of threads will be hot.

NOTE:

A hot team is a team of threads optimized for faster reuse by subsequent parallel regions. In a hot team, threads are kept ready for execution of the next parallel region, in contrast to a cold team, which is freed after each parallel region, with its threads going into a common pool.

For values of 2 and above, nested parallelism should be enabled.

Default: 1

KMP_HOT_TEAMS_MODE

Specifies the runtime behavior when the number of threads in a hot team is reduced.

Possible values:

  • 0: Extra threads are freed and put into a common pool of threads.

  • 1: Extra threads are kept in the team in reserve, for faster reuse in subsequent parallel regions.

Default: 0

KMP_HW_SUBSET

Specifies the subset of available hardware resources for the hardware topology hierarchy.

The subset is specified in terms of the number of units per upper-layer unit, starting from the top layer downwards. For example, it can specify the number of sockets (top-layer units), cores per socket, and threads per core to use with an OpenMP application. It is a convenient alternative to writing complicated explicit affinity settings or using a limiting process affinity mask.

You can also specify an offset value to set which resources to use. When available, you can specify attributes to select different subsets of resources.

An extended syntax is available when KMP_TOPOLOGY_METHOD=hwloc. Depending on what resources are detected, you may be able to specify additional resources, such as NUMA nodes and groups of hardware resources that share certain cache levels.

Basic syntax:

[:][num_units]ID[@offset][:attribute] [,[num_units]ID[@offset][:attribute]...]

where

  • An optional colon (:) can be specified at the beginning of the syntax to specify an explicit hardware subset. The default is an implicit hardware subset.
  • num_units is either a positive integer, which requests an exact number of resources, or an asterisk (*), which means using all available resources at that layer (for example, using all cores per socket). If num_units is not specified, the asterisk (*) semantics are assumed.
  • ID is a supported ID:
    S - socket
    num_units specifies the requested number of sockets.
    D - die
    num_units specifies the requested number of dies per socket.
    C - core
    num_units specifies the requested number of cores per die (if any); otherwise, per socket.
    T - thread
    num_units specifies the requested number of HW threads per core.

    Supported unit IDs are not case-sensitive.

  • offset is the number of units to skip (optional).
  • attribute is an attribute differentiating resources at a particular layer (optional).

    This is only available for the core layer on machines with Intel® Hybrid Technology. The attributes available to users are:

    • Core type: Either intel_atom or intel_core
    • Core efficiency: Specified as effnum where num is a number from 0 to the number of core efficiencies detected in the machine topology minus one. For example: eff0. The greater the efficiency number, the more performant the core. There may be more core efficiencies than core types, which can be viewed by setting KMP_AFFINITY=verbose.
NOTE:
The hardware cache can be specified as a unit, for example L2 for L2 cache, or LL for last level cache.

Extended syntax when KMP_TOPOLOGY_METHOD=hwloc:

Additional IDs can be specified if detected. For example:

N - numa
num_units specifies the requested number of NUMA nodes per upper layer unit, e.g. per socket.
TI - tile
num_units specifies the requested number of tiles to use per upper layer unit, e.g. per NUMA node.

When any numa or tile units are specified in KMP_HW_SUBSET, KMP_TOPOLOGY_METHOD is automatically set to hwloc, so there is no need to set it explicitly.

For an explicit hardware subset, if one or more topology layers detected by the runtime are omitted from the subset, then those topology layers are ignored. Only explicitly specified topology layers are used in the subset.

For an implicit hardware subset, it is implied that the socket, core, and thread topology types should be included in the subset. Other topology layers are not implicitly included and are ignored if they are not specified in the subset. Because the socket, core and thread topology types are always included in implicit hardware subsets, when they are omitted, it is assumed that all available resources of that type should be used. Implicit hardware subsets are the default.

The runtime library prints a warning, and the setting of KMP_HW_SUBSET is ignored, if:

  • a resource is specified, but detection of that resource is not supported by the chosen topology detection method;
  • a resource is specified twice, unless attributes differentiate the resource; or
  • attributes are used when unavailable, not detected in the machine topology, or in conflict with each other.

This variable has no effect if OpenMP affinity is set to disabled.

Default: Use all available hardware resources.

Implicit Hardware Subset Examples:

  • 2s,4c,2t: Use the first 2 sockets (s0 and s1), the first 4 cores on each socket (c0 - c3), and the first 2 threads per core.

  • 2s@2,4c@8,2t: Skip the first 2 sockets (s0 and s1) and use the next 2 sockets (s2-s3), skip the first 8 cores (c0-c7) and use the next 4 cores on each socket (c8-c11), and use the first 2 threads per core.

  • 5C@1,3T: Use all available sockets, skip the first core and use the next 5 cores, and use the first 3 threads per core.

  • 1T: Use all cores on all sockets, 1 thread per core.

  • 1s,1d,1n,1c,1t: Use 1 socket, 1 die per socket, 1 NUMA node per die, 1 core per NUMA node, and 1 thread per core - a single hardware thread as a result.

  • 4c:intel_atom,5c:intel_core: Use all available sockets and use the first 4 Intel Atom® processor cores and the first 5 Intel® Core™ processor cores per socket.

  • 2c:eff0,3c:eff1: Use all available sockets and use the first 2 cores with efficiency 0 and the first 3 cores with efficiency 1 per socket.

Explicit Hardware Subset Examples:

  • :2s,6t: Use exactly the first 2 sockets and 6 threads per socket.
  • :1t@7: Skip the first 7 threads (t0-t6) and use exactly one thread (t7).
  • :5c,1t: Use exactly the first 5 cores (c0-c4) and the first thread on each core.

To see the result of the setting, you can specify the verbose modifier in the KMP_AFFINITY environment variable.

The OpenMP runtime library outputs information about the discovered hardware topology, before and after the KMP_HW_SUBSET setting is applied, to the stderr stream. For example, on an Intel® Xeon Phi™ 7210 CPU in SNC-4 clustering mode, the setting KMP_AFFINITY=verbose KMP_HW_SUBSET=1N,1L2,1L1,1T outputs various verbose information to stderr, including the following lines about the discovered hardware topology before and after KMP_HW_SUBSET was applied:

  • Info #191: KMP_AFFINITY: 1 socket x 4 NUMA domains/socket x 8 tiles/NUMA domain x 2 cores/tile x 4 threads/core. (64 total cores)
  • Info #191: KMP_HW_SUBSET 1 socket x 1 NUMA domain/socket x 1 tile/NUMA domain x 1 core/tile x 1 thread/core (1 total cores)

KMP_INHERIT_FP_CONTROL

Enables (TRUE) or disables (FALSE) the copying of the floating-point control settings of the primary thread to the floating-point control settings of the OpenMP worker threads at the start of each parallel region.

Default: TRUE

KMP_LIBRARY

Selects the OpenMP runtime library execution mode. The values for this variable are serial, turnaround, or throughput.

Default: throughput

KMP_PLACE_THREADS

Deprecated; use KMP_HW_SUBSET instead.

KMP_SETTINGS

Enables (TRUE) or disables (FALSE) the printing of OpenMP runtime library environment variables during program execution. Two lists of variables are printed: the user-defined environment variable settings and the effective values of variables used by the OpenMP runtime library.

Default: FALSE

KMP_STACKSIZE

Sets the number of bytes to allocate for each OpenMP thread to use as its private stack.

Recommended size is 16m.

Use the optional suffixes B (bytes), K (kilobytes), M (megabytes), G (gigabytes), or T (terabytes) to specify the units. If you specify a value without a suffix, the byte unit is assumed to be K (kilobytes).

KMP_STACKSIZE overrides GOMP_STACKSIZE, which overrides OMP_STACKSIZE.

Default (Intel® 64 architecture): 4m

KMP_TOPOLOGY_METHOD

Forces OpenMP to use a particular machine topology modeling method.

Possible values are:

  • all: Lets OpenMP choose which topology method is most appropriate based on the platform and possibly other environment variable settings.

  • cpuid_leaf31: Decodes the APIC identifiers as specified by leaf 31 of the cpuid instruction.
  • cpuid_leaf11: Decodes the APIC identifiers as specified by leaf 11 of the cpuid instruction.

  • cpuid_leaf4: Decodes the APIC identifiers as specified in leaf 4 of the cpuid instruction.

  • cpuinfo: If KMP_CPUINFO_FILE is not specified, forces OpenMP to parse /proc/cpuinfo to determine the topology (Linux only). If KMP_CPUINFO_FILE is specified as described above, uses it (Windows or Linux).

  • group: Models the machine as a 2-level map, with level 0 specifying the different processors in a group, and level 1 specifying the different groups (Windows 64-bit only).

    NOTE:

    Support for group is now deprecated and will be removed in a future release. Use all instead.

  • flat: Models the machine as a flat (linear) list of processors.

  • hwloc: Models the machine as the Portable Hardware Locality* (hwloc) library does. This model is the most detailed and includes, but is not limited to: numa nodes, packages, cores, hardware threads, caches, and Windows processor groups.

Default: all

KMP_USER_LEVEL_MWAIT

Enables (TRUE) or disables (FALSE) the use of user-level mwait, if available (either ring3 mwait or WAITPKG), as an alternative to putting waiting threads to sleep.

Default: FALSE

KMP_VERSION

Enables (TRUE) or disables (FALSE) the printing of OpenMP runtime library version information during program execution.

Default: FALSE

KMP_WARNINGS

Enables (TRUE) or disables (FALSE) displaying warnings from the OpenMP runtime library during program execution.

Default: TRUE

OpenMP Offload Environment Variables (OMP_, LIBOMPTARGET)

OMP_TARGET_OFFLOAD

Controls the program behavior when offloading a target region.

Possible values:

  • MANDATORY: Program execution is terminated if a device construct or device memory routine is encountered and the device is not available or is not supported.

  • DISABLED: Disables target offloading to devices and execution occurs on the host.

  • DEFAULT: Target offloading is enabled if the device is available and supported.

Default: DEFAULT

LIBOMPTARGET_DEBUG

Controls whether debugging information will be displayed from the offload runtime.

Possible values:

  • 0: Disabled.

  • 1: Displays basic debug information from the plugin actions such as device detection, kernel compilation, memory copy operations, kernel invocations, and other plugin-dependent actions.

  • 2: Displays which GPU runtime API functions are invoked with which arguments and parameters in addition to the information displayed with value 1.

Default: 0

LIBOMPTARGET_INFO

Controls whether basic offloading information will be displayed from the offload runtime.

Possible values:

  • 0: Disabled.

  • 1: Prints all data arguments upon entering an OpenMP device kernel.

  • 2: Indicates when a mapped address already exists in the device mapping table.

  • 4: Dump the contents of the device pointer map if target offloading fails.

  • 8: Indicates when an entry is changed in the device mapping table.

  • 32: Indicates when data is copied to and from the device.

Default: 0

LIBOMPTARGET_PLUGIN

Specifies which offload plugin is used when offloading a target region.

Possible values:

  • LEVEL_ZERO | LEVEL0 | level_zero | level0: Uses Intel® oneAPI Level Zero (Level Zero) offload plugin.

  • OPENCL | opencl: Uses OpenCL offload plugin.

  • X86_64 | x86_64: Uses X86_64 plugin.

Default: LEVEL_ZERO

LIBOMPTARGET_DEVICETYPE

Selects the device type to which a target region is offloaded.

Possible values:

  • GPU | gpu: GPU device is used.

  • CPU | cpu: CPU device is used.

Offload plugin support for device type:

  • Level Zero offload plugin only supports GPU type.

  • OpenCL offload plugin supports both GPU and CPU types.

  • X86_64 offload plugin ignores this variable.

Default: GPU

LIBOMPTARGET_PLUGIN_PROFILE

Enables basic plugin profiling and displays the result when the program finishes.

Default: Disabled

Syntax: <Value>[,usec], where <Value>=1 | T | t

The reported time is in microseconds if ",usec" is appended, and in milliseconds otherwise.
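
For example, to enable profiling with microsecond resolution:

# PLUGIN PROFILE EXAMPLE
setenv LIBOMPTARGET_PLUGIN_PROFILE 1,usec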

LIBOMPTARGET_DYNAMIC_MEMORY_SIZE

Sets the size of preallocated memory in MB to service in-kernel malloc calls on the device.

Possible values: Non-negative integer.

Default: 1

OpenMP Offload Environment Variables for Level Zero Offload Plugin

LIBOMPTARGET_LEVEL_ZERO_COMPILATION_OPTIONS

Passes extra build options when building native target program binaries.

Possible values: Valid Level Zero build options.

LIBOMPTARGET_LEVEL0_COMPILATION_OPTIONS

Deprecated. Use LIBOMPTARGET_LEVEL_ZERO_COMPILATION_OPTIONS instead.

LIBOMPTARGET_DEVICES

Controls how subdevices or sub-subdevices are exposed to users if the device supports subdevices.

Possible values:

  • DEVICE | device: Only top-level devices are reported as OpenMP devices, and the subdevice clause is supported.

  • SUBDEVICE | subdevice: Only first-level subdevices are reported as OpenMP devices, and the subdevice clause is ignored.

  • SUBSUBDEVICE | subsubdevice: Only second-level subdevices are reported as OpenMP devices, and the subdevice clause is ignored.

  • ALL | all: All devices and subdevices are reported as OpenMP devices, and the subdevice clause is ignored.

Default: DEVICE

LIBOMPTARGET_LEVEL_ZERO_MEMORY_POOL

Controls memory pool configuration.

Possible values:

  • 0: Disables using the memory pool.

  • <PoolInfoList>=<PoolInfo>[,<PoolInfoList>]

    <PoolInfo>=<MemType>[,<AllocMax>[,<Capacity>[,<PoolSize>]]]

    <MemType>=all | device | host | shared

    <AllocMax> is a positive integer or empty

    <Capacity> is a positive integer or empty

    <PoolSize> is a positive integer or empty

    Controls how the reusable memory pool is configured. A pool is a list of memory blocks that can serve at least <Capacity> allocations of up to <AllocMax> size from a single block, with the total size not exceeding <PoolSize>.

    When <PoolInfoList> contains only a subset of the {device, host, shared} configurations, the default configurations are used for the unspecified memory types. The memory pool for a specific memory type can be disabled by specifying 0 for the <AllocMax> of that memory type.

Examples:

  • all,2,8,1024: Enables memory pool for all memory types which can allocate up to eight 2MB blocks from a single block allocated from Level Zero with 1GB total pool size allowed.

  • device,1,4,512: Enables memory pool for device memory type which can allocate up to four 1MB blocks from a single block allocated from Level Zero with 512MB total pool size allowed. The default configuration controls allocation from other memory types.

Default: Equivalent to device,1,4,256,host,1,4,256,shared,8,4,256

LIBOMPTARGET_LEVEL0_MEMORY_POOL

Deprecated. Use LIBOMPTARGET_LEVEL_ZERO_MEMORY_POOL instead.

LIBOMPTARGET_LEVEL_ZERO_USE_COPY_ENGINE

Controls how to use copy engines for data transfer if the device supports them.

Possible values:

  • 0 | F | f: Disables use of copy engines.

  • main: Enables only main copy engines if the device supports them.

  • link: Enables only link copy engines if the device supports them.

  • all: Enables all copy engines if the device supports them.

Default: all

LIBOMPTARGET_LEVEL0_USE_COPY_ENGINE

Deprecated. Use LIBOMPTARGET_LEVEL_ZERO_USE_COPY_ENGINE instead.

LIBOMPTARGET_LEVEL_ZERO_DEFAULT_TARGET_MEM

Selects the memory type returned by the omp_target_alloc routine.

Possible values:

  • DEVICE | device: The returned memory is device memory. The device owns the memory, and data movement is explicit.

  • SHARED | shared: The returned memory is shared memory. Ownership of the memory is shared between host and device, and data movement is implicit.

  • HOST | host: The returned memory is host memory. The host owns the memory, and data movement is implicit.

Default: DEVICE

LIBOMPTARGET_LEVEL0_DEFAULT_TARGET_MEM

Deprecated. Use LIBOMPTARGET_LEVEL_ZERO_DEFAULT_TARGET_MEM instead.

LIBOMPTARGET_LEVEL_ZERO_STAGING_BUFFER_SIZE

Sets the staging buffer size in KB. The staging buffer is used in copy operations between host and device as temporary storage for a two-step copy operation. The buffer is only used for discrete devices.

Possible values: Non-negative integers, where 0 disables use of the staging buffer.

Default: 16

LIBOMPTARGET_LEVEL0_STAGING_BUFFER_SIZE

Deprecated. Use LIBOMPTARGET_LEVEL_ZERO_STAGING_BUFFER_SIZE instead.

LIBOMPTARGET_LEVEL_ZERO_USE_IMMEDIATE_COMMAND_LIST

Enables or disables using immediate command lists for computation and/or memory copy operations.

Possible values:

  • 0 | F | f: Disable.

  • compute: Enable only for computation.

  • copy: Enable only for copy operation.

  • all: Enable for computation and copy operation.

Default: all for XeHPC devices, 0 otherwise

LIBOMPTARGET_LEVEL_ZERO_COMMAND_MODE

Determines how each command in a target region is executed when immediate command lists are fully enabled by setting LIBOMPTARGET_LEVEL_ZERO_USE_IMMEDIATE_COMMAND_LIST=all.

This variable has no effect on integrated devices.

Possible values:

  • sync: Host waits for completion of the current submitted command.

  • async: Host does not wait for completion of the command and synchronization occurs later when it is required.

  • async_ordered: Same as async, but command execution is ordered.

Default: async

OpenMP Offload Environment Variables for OpenCL Offload Plugin

LIBOMPTARGET_OPENCL_COMPILATION_OPTIONS

Passes extra compilation options when compiling target programs from SPIR-V target images.

Possible values: Valid OpenCL compilation options.

LIBOMPTARGET_OPENCL_LINKING_OPTIONS

Passes extra linking options when linking target programs.

Possible values: Valid OpenCL linking options.

OpenCL ICD Loader Environment Variables for OpenCL Backend

OCL_ICD_ENABLE_TRACE

Enables (TRUE) or disables (FALSE) the trace mechanism in the OpenCL Installable Client Driver (ICD) loader. Any of the following values enables tracing:

  • OCL_ICD_ENABLE_TRACE=T
  • OCL_ICD_ENABLE_TRACE=1
  • OCL_ICD_ENABLE_TRACE=True

Default: FALSE

DPC++ Environment Variables

DPCPP_CPU_CU_AFFINITY

Sets thread affinity for the CPU device. The values and their meanings are the following:

  • close - threads are pinned to CPU cores successively through the available cores.
  • spread - threads are spread across the available cores.
  • master - threads are placed on the same cores as the master thread. If DPCPP_CPU_CU_AFFINITY is set, the master thread is pinned as well; otherwise, the master thread is not pinned.

This environment variable is similar to the OMP_PROC_BIND variable used by OpenMP.

Default: Not set

DPCPP_CPU_NUM_CUS

Sets the number of threads used for kernel execution.

To avoid oversubscription, the maximum value of DPCPP_CPU_NUM_CUS should be the number of hardware threads. If DPCPP_CPU_NUM_CUS is 1, all work-groups are executed sequentially by a single thread, which is useful for debugging.

This environment variable is similar to OMP_NUM_THREADS variable used by OpenMP.

Default: Not set. Determined by Intel® oneAPI Threading Building Blocks (oneTBB).

DPCPP_CPU_PLACES

Specifies the places on which affinities are set. The value is { sockets | numa_domains | cores | threads }.

This environment variable is similar to the OMP_PLACES variable used by OpenMP.

If the value is numa_domains, the oneTBB NUMA API is used. This is analogous to OMP_PLACES=numa_domains in the OpenMP 5.1 specification. Each oneTBB task arena is bound to a NUMA node, and the SYCL nd-range is uniformly distributed across the task arenas.

Using DPCPP_CPU_PLACES together with DPCPP_CPU_CU_AFFINITY is suggested, as shown in the sketch below.

Default: cores
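
A minimal sketch of the suggested combination, pinning threads to cores placed close together:

# DPC++ CPU AFFINITY EXAMPLE
setenv DPCPP_CPU_PLACES cores
setenv DPCPP_CPU_CU_AFFINITY close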

DPCPP_CPU_SCHEDULE

Specifies the algorithm used for scheduling work-groups. Currently, DPC++ uses oneTBB for scheduling when using the OpenCL CPU driver. The value selects the partitioner used by the oneTBB scheduler. The values and their meanings are the following:

  • dynamic - oneTBB auto_partitioner. It performs sufficient splitting to balance load.
  • affinity - oneTBB affinity_partitioner. It improves cache affinity over auto_partitioner by its choice of mapping subranges to worker threads.
  • static - oneTBB static_partitioner. It distributes range iterations among worker threads as uniformly as possible. The oneTBB partitioner relies on grain size to control chunking. The grain size is 1 by default, indicating that every work-group can be executed independently.

Default: dynamic

The following table summarizes CPU environment variables that are recognized at runtime.

Runtime Configuration

Default Value

Description

CL_CONFIG_CPU_FORCE_PRIVATE_MEM_SIZE

32KB

Forces CL_DEVICE_PRIVATE_MEM_SIZE for the CPU device to be the given value. The value must include the unit; for example: 8MB, 8192KB, 8388608B.

NOTE:
You must compile your host application with sufficient stack size.

CL_CONFIG_CPU_FORCE_LOCAL_MEM_SIZE

32KB

Forces CL_DEVICE_LOCAL_MEM_SIZE for the CPU device to be the given value. The value must include the unit; for example: 8MB, 8192KB, 8388608B.

NOTE:
You must compile your host application with sufficient stack size. Our recommendation is to set the stack size equal to twice the local memory size to cover possible application and OpenCL Runtime overheads.

CL_CONFIG_CPU_EXPENSIVE_MEM_OPT

0

A bitmap indicating which expensive memory optimizations are enabled. These optimizations may increase JIT compilation time but give some performance benefit.

NOTE:
Currently, only the least significant bit is available.

Available bits:

  • 0: OpenCL address space alias analysis

CL_CONFIG_CPU_STREAMING_ALWAYS

False

Controls whether non-temporal instructions are used.

Controlling DPC++ Runtime

Environment Variable

Default Value

Description

ONEAPI_DEVICE_SELECTOR

See ONEAPI_DEVICE_SELECTOR

This device selection environment variable limits the choice of devices available when a SYCL application is run. It is useful for limiting devices to a certain type (like GPUs or accelerators) or backend (like Level Zero or OpenCL). This device selection mechanism replaces SYCL_DEVICE_FILTER. The ONEAPI_DEVICE_SELECTOR syntax is shared with OpenMP and also allows sub-devices to be chosen.
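
Example (restricting the application to GPU devices exposed through the Level Zero backend): ONEAPI_DEVICE_SELECTOR=level_zero:gpu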

SYCL_DEVICE_FILTER

(deprecated)

backend:device_type:device_num

Use the ONEAPI_DEVICE_SELECTOR environment variable instead.

SYCL_DEVICE_ALLOWLIST

See SYCL_DEVICE_ALLOWLIST

Filters out devices that do not match the specified pattern. BackendName accepts host, opencl, level_zero, or cuda. DeviceType accepts host, cpu, gpu, or acc. DeviceVendorId accepts a uint32_t in hex form (0xXYZW). DriverVersion, PlatformVersion, DeviceName, and PlatformName accept regular expressions. Special characters, such as parentheses, must be escaped. The DPC++ runtime selects only those devices that satisfy the provided values and regular expressions. More than one device can be specified using the piping symbol "|".
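
Example (an illustrative sketch, built from the keys described above, allowing only Level Zero GPU devices from vendor ID 0x8086): SYCL_DEVICE_ALLOWLIST=BackendName:level_zero,DeviceType:gpu,DeviceVendorId:0x8086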

SYCL_DISABLE_PARALLEL_FOR_RANGE_ROUNDING

Any(*)

Disables automatic rounding-up of parallel_for invocation ranges.

SYCL_CACHE_DIR

Path

Path to the persistent cache root directory. The default is %AppData%\libsycl_cache on Windows and $XDG_CACHE_HOME/libsycl_cache on Linux; if XDG_CACHE_HOME is not set, $HOME/.cache/libsycl_cache is used. When none of these environment variables are set, the SYCL persistent cache is disabled.

SYCL_CACHE_DISABLE_PERSISTENT

(deprecated)

Any(*)

Has no effect.

SYCL_CACHE_PERSISTENT

Integer

Controls the persistent cache of device compiled code. Set to '1' to turn the cache on and '0' to turn it off. When the cache is enabled, the SYCL runtime tries to cache and reuse JIT-compiled binaries. Default is off.

SYCL_CACHE_EVICTION_DISABLE

Any(*)

Switches cache eviction off when the variable is set.

SYCL_CACHE_MAX_SIZE

Positive integer

Cache eviction is triggered once the total size of cached images exceeds this value in megabytes (default: 8192, for 8 GB). Set to 0 to disable size-based cache eviction.

SYCL_CACHE_THRESHOLD

Positive integer

Cache eviction threshold in days (default: 7, for 1 week). Set to 0 to disable time-based cache eviction.

SYCL_CACHE_MIN_DEVICE_IMAGE_SIZE

Positive integer

Minimum size, in bytes, of a device code image that is reasonable to cache on disk, because the disk access may take more time than JIT compilation of the image. The default value is 0, which caches all images.

SYCL_CACHE_MAX_DEVICE_IMAGE_SIZE

Positive integer

Maximum size, in bytes, of a device image that is cached. Kernels that are too big may overload the disk too quickly. The default value is 1 GB.

SYCL_ENABLE_DEFAULT_CONTEXTS

'1' or '0'

Enables ('1') or disables ('0') the creation of default platform contexts in the SYCL runtime. The default context for each platform contains all devices in the platform. Refer to the Platform Default Contexts extension to learn more. Enabled by default on Linux and disabled on Windows.

SYCL_RT_WARNING_LEVEL

Positive integer

The higher the warning level, the more warnings and performance hints the runtime library may print. The default value is '0', which means no warning/hint messages from the runtime library are allowed. The value '1' enables performance warnings from device runtime/codegen. Values greater than 1 are reserved for future use.

SYCL_USM_HOSTPTR_IMPORT

Integer

Enable by specifying a non-zero value. Buffers created with a host pointer result in promotion of the host data to USM, improving data transfer performance. To use this feature, also set SYCL_HOST_UNIFIED_MEMORY=1.

SYCL_EAGER_INIT

Integer

Enable by specifying a non-zero value. Tells the SYCL runtime to do as much initialization as possible at object construction, as opposed to lazy initialization on the fly. This may mean doing some redundant work at warmup, but it ensures the fastest possible execution on the subsequent hot and reportable paths. It also instructs PI plugins to do the same. Default is '0'.

SYCL_REDUCTION_PREFERRED_WORKGROUP_SIZE

See SYCL_REDUCTION_PREFERRED_WORKGROUP_SIZE

Controls the preferred work-group size of reduction.

SYCL_ENABLE_FUSION_CACHING

'1' or '0'

Enable ('1') or disable ('0') caching of JIT compilations for kernel fusion. Caching avoids repeatedly running the JIT compilation pipeline if the same sequence of kernels is fused multiple times. Default value is '1'.

NOTE:
Any(*) indicates that this environment variable is effective when set to any non-null value.

Controlling DPC++ Level Zero Plugin

Environment Variable

Default Value

Description

SYCL_ENABLE_PCI

Integer

When set to 1, enables obtaining the GPU PCI address when using the Level Zero backend. The default is 1. This option is kept for compatibility reasons and is deprecated.

SYCL_PI_LEVEL_ZERO_DISABLE_USM_ALLOCATOR

Any(*)

Disables the USM allocator in the Level Zero plugin (each memory request goes directly to the Level Zero runtime).

SYCL_PI_LEVEL_ZERO_TRACK_INDIRECT_ACCESS_MEMORY

Any(*)

Enables support for kernels with indirect access and the corresponding deferred release of memory allocations in the Level Zero plugin.

NOTE:
Any(*) indicates that this environment variable is effective when set to any non-null value.


NOTE:

Some environment variables are available for both Intel® microprocessors and non-Intel microprocessors, but may perform additional optimizations for Intel® microprocessors compared to non-Intel microprocessors.