Intel® MPI Library Developer Guide for Windows* OS

ID 768730
Date 12/16/2022
Public

Controlling Process Placement

Placement of MPI processes over the cluster nodes plays a significant role in application performance. Intel® MPI Library provides several options to control process placement.

By default, when you run an MPI program, the process manager launches all MPI processes specified with the -n option on the current node. If you use a job scheduler, processes are assigned according to the information received from the scheduler.
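
The examples in this section use a simple MPI test program, testc.exe, that prints its rank, the total number of ranks, and the name of the node it runs on. The source of testc.exe is not shown in this guide; a minimal sketch of an equivalent program built from standard MPI calls might look as follows:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
    MPI_Get_processor_name(name, &namelen);  /* node the process runs on */

    printf("Hello world: rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

Running such a program without any placement options illustrates the default behavior described above; all processes start on the current node:

> mpiexec -n 4 testc.exe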

Specifying Hosts

You can explicitly specify the nodes on which you want to run the application using the -hosts option. This option takes a comma-separated list of node names as an argument. Use the -ppn option to specify the number of processes per node. For example:

> mpiexec -n 4 -ppn 2 -hosts node1,node2 testc.exe 
Hello world: rank 0 of 4 running on node1
Hello world: rank 1 of 4 running on node1
Hello world: rank 2 of 4 running on node2
Hello world: rank 3 of 4 running on node2

To get the name of a node, use the hostname utility.

An alternative to the -hosts option is to create a host file that lists the cluster nodes. The file format is one node name per line; lines starting with # are ignored. Use the -f option to pass the file to mpiexec. For example:

> type hosts
#nodes
node1
node2
> mpiexec -n 4 -ppn 2 -f hosts testc.exe

This program launch produces the same output as the previous example.

If the -ppn option is not specified, the process manager assigns as many processes to the first node as there are physical cores on it. Then the next node is used. That is, assuming there are four cores on node1 and you launch six processes overall, four processes are launched on node1, and the remaining two processes are launched on node2. For example:

> mpiexec -n 6 -hosts node1,node2 testc.exe
Hello world: rank 0 of 6 running on node1
Hello world: rank 1 of 6 running on node1
Hello world: rank 2 of 6 running on node1
Hello world: rank 3 of 6 running on node1
Hello world: rank 4 of 6 running on node2
Hello world: rank 5 of 6 running on node2
NOTE:
If you use a job scheduler, specifying hosts is unnecessary. The process manager uses the host list provided by the scheduler.

Using a Machine File

A machine file is similar to a host file, the only difference being that you can assign a specific number of processes to particular nodes directly in the file. The contents of a sample machine file may look as follows:

> type machines
node1:2
node2:2

Specify the file with the -machine option. Running a simple test program produces the following output:

> mpiexec -machine machines testc.exe
Hello world: rank 0 of 4 running on node1
Hello world: rank 1 of 4 running on node1
Hello world: rank 2 of 4 running on node2
Hello world: rank 3 of 4 running on node2
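
For this machine file, the resulting placement is the same as combining the -hosts and -ppn options on the command line, as in the first example of this section:

> mpiexec -n 4 -ppn 2 -hosts node1,node2 testc.exe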

Using Argument Sets

Argument sets are unique groups of arguments specific to a particular node. Together, the argument sets make up a single MPI job. You can provide argument sets on the command line or in a configuration file. To specify a node within an argument set, use the -host option.

On the command line, argument sets should be separated by a colon ':'. Global options (applied to all argument sets) should appear first, and local options (applied only to the current argument set) should be specified within an argument set. For example:

> mpiexec -genv I_MPI_DEBUG=2 -host node1 -n 2 testc.exe : -host node2 -n 2 testc.exe

In the configuration file, each argument set should appear on a new line. Global options should appear on the first line of the file. For example:

> type config 
-genv I_MPI_DEBUG=2 -host node1 -n 2 testc.exe
-host node2 -n 2 testc.exe

Specify the configuration file with the -configfile option:

> mpiexec -configfile config
Hello world: rank 0 of 4 running on node1
Hello world: rank 1 of 4 running on node1
Hello world: rank 2 of 4 running on node2
Hello world: rank 3 of 4 running on node2
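
Because each argument set can specify its own executable, argument sets can also be used to run different programs on different nodes within a single MPI job (MPMD style). A minimal sketch, where master.exe and worker.exe are hypothetical executable names:

> mpiexec -host node1 -n 1 master.exe : -host node2 -n 2 worker.exe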