Environment Variables for Process Pinning
I_MPI_PIN
Turn on/off process pinning.
Syntax
I_MPI_PIN=<arg>
Arguments
<arg> | Binary indicator |
enable | yes | on | 1 | Enable process pinning. This is the default value. |
disable | no | off | 0 | Disable process pinning. |
Description
Set this environment variable to control the process pinning feature of the Intel® MPI Library.
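For example, a command along the following lines disables pinning for a single run (the process count and executable are placeholders, as in the examples elsewhere in this document):

```shell
$ mpirun -genv I_MPI_PIN=off -n <number-of-processes> <executable>
```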
I_MPI_PIN_PROCESSOR_LIST
Define a processor subset and the mapping rules for MPI processes within this subset.
This environment variable is available for both Intel and non-Intel microprocessors, but it may perform additional optimizations for Intel microprocessors that it does not perform for non-Intel microprocessors.
Syntax Forms
I_MPI_PIN_PROCESSOR_LIST=<value>
The environment variable value has two syntax forms:
- <proclist>
- allcores
Syntax 1: <proclist>
I_MPI_PIN_PROCESSOR_LIST=<proclist>
Arguments
<proclist> | A comma-separated list of logical processor numbers and/or ranges of processors. The process with the i-th rank is pinned to the i-th processor in the list. The specified numbers should not exceed the number of processors on a node. |
<l> | Processor with logical number <l>. |
<l>-<m> | Range of processors with logical numbers from <l> to <m>. |
<k>,<l>-<m> | Processors <k>, as well as <l> through <m>. |
Syntax 2: allcores
I_MPI_PIN_PROCESSOR_LIST=allcores
Arguments
allcores | All cores (physical CPUs). Specify this subset to define the number of cores on a node. This is the default value. If Intel® Hyper-Threading Technology is disabled, allcores is equivalent to all. |
Examples
To pin the processes to CPU0 and CPU3 on each node globally, use the following command:
$ mpirun -genv I_MPI_PIN_PROCESSOR_LIST=0,3 -n <number-of-processes> <executable>
To pin the processes to different CPUs on each node individually (CPU0 and CPU3 on host1; CPU0, CPU1, and CPU3 on host2), use the following command:
$ mpirun -host host1 -env I_MPI_PIN_PROCESSOR_LIST=0,3 -n <number-of-processes> <executable> : \
  -host host2 -env I_MPI_PIN_PROCESSOR_LIST=0,1,3 -n <number-of-processes> <executable>
To print extra debugging information about process pinning, use the following command:
$ mpirun -genv I_MPI_DEBUG=4 -m -host host1 \
  -env I_MPI_PIN_PROCESSOR_LIST=0,3 -n <number-of-processes> <executable> : \
  -host host2 -env I_MPI_PIN_PROCESSOR_LIST=0,1,3 -n <number-of-processes> <executable>
I_MPI_PIN_PROCESSOR_EXCLUDE_LIST
Define a subset of logical processors to be excluded for the pinning capability on the intended hosts.
Syntax
I_MPI_PIN_PROCESSOR_EXCLUDE_LIST=<proclist>
Arguments
<proclist> | A comma-separated list of logical processor numbers and/or ranges of processors. |
<l> | Processor with logical number <l>. |
<l>-<m> | Range of processors with logical numbers from <l> to <m>. |
<k>,<l>-<m> | Processors <k>, as well as <l> through <m>. |
Description
Set this environment variable to define the logical processors that Intel® MPI Library does not use for pinning capability on the intended hosts. Logical processors are numbered as in /proc/cpuinfo.
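For example, to keep the library from pinning any MPI process to logical processors 0 and 1 (for instance, to reserve them for system services), a command of this form could be used; the process count and executable are placeholders:

```shell
$ mpirun -genv I_MPI_PIN_PROCESSOR_EXCLUDE_LIST=0,1 -n <number-of-processes> <executable>
```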
I_MPI_PIN_CELL
Set this environment variable to define the pinning resolution granularity. I_MPI_PIN_CELL specifies the minimal processor cell allocated when an MPI process is running.
Syntax
I_MPI_PIN_CELL=<cell>
Arguments
<cell> | Specify the resolution granularity |
unit | Basic processor unit (logical CPU) |
core | Physical processor core |
Description
Set this environment variable to define the processor subset used when a process is running. You can choose from two scenarios:
- all possible CPUs in a node (unit value)
- all cores in a node (core value)
The environment variable has effect on both pinning types:
- one-to-one pinning through the I_MPI_PIN_PROCESSOR_LIST environment variable
- one-to-many pinning through the I_MPI_PIN_DOMAIN environment variable
The default value rules are:
- If you use I_MPI_PIN_DOMAIN, the cell granularity is unit.
- If you use I_MPI_PIN_PROCESSOR_LIST, the following rules apply:
- When the number of processes is greater than the number of cores, the cell granularity is unit.
- When the number of processes is equal to or less than the number of cores, the cell granularity is core.
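As an illustrative sketch of overriding these defaults, the following command explicitly forces core granularity together with a one-to-one processor list; the processor numbers here are example values:

```shell
$ mpirun -genv I_MPI_PIN_CELL=core -genv I_MPI_PIN_PROCESSOR_LIST=0-3 -n 4 <executable>
```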
I_MPI_PIN_RESPECT_CPUSET
Respect the process affinity mask.
Syntax
I_MPI_PIN_RESPECT_CPUSET=<value>
Arguments
<value> | Binary indicator |
enable | yes | on | 1 | Respect the process affinity mask. This is the default value. |
disable | no | off | 0 | Do not respect the process affinity mask. |
Description
If you set I_MPI_PIN_RESPECT_CPUSET=enable, the Hydra process launcher uses the job manager's process affinity mask on each intended host to determine the logical processors for applying the Intel MPI Library pinning capability.
If you set I_MPI_PIN_RESPECT_CPUSET=disable, the Hydra process launcher uses its own process affinity mask to determine the logical processors for applying the Intel MPI Library pinning capability.
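For example, to ignore an affinity mask inherited from a batch scheduler and let the library pin across all logical processors on each host, a command along these lines could be used:

```shell
$ mpirun -genv I_MPI_PIN_RESPECT_CPUSET=disable -n <number-of-processes> <executable>
```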
I_MPI_PIN_RESPECT_HCA
In the presence of an InfiniBand* architecture host channel adapter (IBA* HCA), adjust the pinning according to the location of the IBA HCA.
Syntax
I_MPI_PIN_RESPECT_HCA=<value>
Arguments
<value> | Binary indicator |
enable | yes | on | 1 | Use the location of IBA HCA if available. This is the default value. |
disable | no | off | 0 | Do not use the location of IBA HCA. |
Description
If you set I_MPI_PIN_RESPECT_HCA=enable, the Hydra process launcher uses the location of IBA HCA on each intended host for applying Intel MPI Library pinning capability.
If you set I_MPI_PIN_RESPECT_HCA=disable, the Hydra process launcher does not use the location of IBA HCA on each intended host for applying Intel MPI Library pinning capability.
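For example, to disable HCA-aware pinning for all hosts of a run, a minimal sketch:

```shell
$ mpirun -genv I_MPI_PIN_RESPECT_HCA=disable -n <number-of-processes> <executable>
```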