Controlling Process Placement
Placement of MPI processes over the cluster nodes plays a significant role in application performance. Intel® MPI Library provides several options to control process placement.
By default, when you run an MPI program, the process manager launches all MPI processes specified with -n on the current node. If you use a job scheduler, processes are assigned according to the information received from the scheduler.
Specifying Hosts
You can explicitly specify the nodes on which you want to run the application using the -hosts option. This option takes a comma-separated list of node names as an argument. Use the -ppn option to specify the number of processes per node. For example:
$ mpirun -n 4 -ppn 2 -hosts node1,node2 ./testc
Hello world: rank 0 of 4 running on node1
Hello world: rank 1 of 4 running on node1
Hello world: rank 2 of 4 running on node2
Hello world: rank 3 of 4 running on node2
To get the name of a node, use the hostname utility.
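The ./testc executable used throughout these examples is simply a small MPI program that reports its rank, the total number of ranks, and the node it runs on. The test program shipped with Intel® MPI Library may differ in detail; the following minimal sketch reproduces the output shown above and can be built with an MPI compiler wrapper (for example, mpiicc or mpicc):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &len);     /* name of the node the process runs on */

    printf("Hello world: rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}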
An alternative to the -hosts option is a host file that lists the cluster nodes. The file format is one node name per line; lines starting with # are ignored. Use the -f option to pass the file to mpirun. For example:
$ cat ./hosts
#nodes
node1
node2
$ mpirun -n 4 -ppn 2 -f hosts ./testc
This program launch produces the same output as the previous example.
If the -ppn option is not specified, the process manager assigns as many processes to the first node as there are physical cores on it. Then the next node is used. That is, assuming there are four cores on node1 and you launch six processes overall, four processes are launched on node1, and the remaining two processes are launched on node2. For example:
$ mpirun -n 6 -hosts node1,node2 ./testc
Hello world: rank 0 of 6 running on node1
Hello world: rank 1 of 6 running on node1
Hello world: rank 2 of 6 running on node1
Hello world: rank 3 of 6 running on node1
Hello world: rank 4 of 6 running on node2
Hello world: rank 5 of 6 running on node2
Using a Machine File
A machine file is similar to a host file, except that you can assign a specific number of processes to particular nodes directly in the file. A sample machine file may look as follows:
$ cat ./machines
node1:2
node2:2
Specify the file with the -machine option. Running a simple test program produces the following output:
$ mpirun -machine machines ./testc
Hello world: rank 0 of 4 running on node1
Hello world: rank 1 of 4 running on node1
Hello world: rank 2 of 4 running on node2
Hello world: rank 3 of 4 running on node2
Using Argument Sets
Argument sets are unique groups of arguments specific to a particular node. Together, the argument sets make up a single MPI job. You can provide argument sets on the command line or in a configuration file. To specify a node, use the -host option.
On the command line, argument sets should be separated by a colon ':'. Global options (applied to all argument sets) should appear first, and local options (applied only to the current argument set) should be specified within an argument set. For example:
$ mpirun -genv I_MPI_DEBUG=2 -host node1 -n 2 ./testc : -host node2 -n 2 ./testc
In the configuration file, each argument set should appear on a new line. Global options should appear on the first line of the file. For example:
$ cat ./config
-genv I_MPI_DEBUG=2
-host node1 -n 2 ./testc
-host node2 -n 2 ./testc
Specify the configuration file with the -configfile option:
$ mpirun -configfile config
Hello world: rank 0 of 4 running on node1
Hello world: rank 1 of 4 running on node1
Hello world: rank 2 of 4 running on node2
Hello world: rank 3 of 4 running on node2
See Also
Controlling Process Placement with the Intel® MPI Library (online article)