Get Started with the Intel® MPI Library for Linux* OS

ID 768724
Date 11/07/2023

The Intel® MPI Library enables you to create, maintain, and test advanced applications that have performance advantages on high-performance computing (HPC) clusters based on Intel® processors.

The Intel MPI Library is available as a standalone product and as part of the Intel® oneAPI HPC Toolkit. The Intel MPI Library is a multi-fabric message passing library that implements the Message Passing Interface, version 3.1 (MPI-3.1) specification. Use the library to develop applications that can run on multiple cluster interconnects.

The Intel MPI Library has the following features:

  • Scalability up to 340k processes
  • Low overhead enables analysis of large amounts of data
  • MPI tuning utility for accelerating your applications
  • Interconnect independence and flexible runtime fabric selection

The product consists of the following main components:

  • Compilation tools, including compiler drivers such as mpiicc and mpifort
  • Include files and modules
  • Shared (.so) and static (.a) libraries, debug libraries, and interface libraries
  • Process Manager and tools to run programs
  • Test code
  • Documentation provided as a separate package or available from the Intel Developer Zone

Intel MPI Library also includes Intel® MPI Benchmarks, which enable you to measure MPI operations on various cluster architectures and MPI implementations. For details, see the Intel® MPI Benchmarks User Guide. Source code is available in the GitHub repository.

For more information on using the Intel MPI Library, see the Intel® MPI Library Developer Guide for Linux* OS and the Intel® MPI Library Developer Reference for Linux* OS.

Key Features

The Intel MPI Library has the following major features:

  • MPI-1, MPI-2.2 and MPI-3.1 specification conformance
  • Interconnect independence
  • Supported Languages:

    • For GNU* compilers: C, C++, Fortran 77, Fortran 95
    • For Intel® compilers: C, C++, Fortran 77, Fortran 90, Fortran 95, Fortran 2008

Prerequisites

Before you start using Intel MPI Library, complete the following steps:

1. Source the setvars.sh script to set the environment variables for the Intel MPI Library. The script is located in the installation directory (by default, /opt/intel/oneapi).
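
For example, with the default installation path:

$ source /opt/intel/oneapi/setvars.sh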

NOTE:
If you are using Intel MPI in a Unified Directory Layout, set the environment variables using the /opt/intel/oneapi/<toolkit-version-number>/oneapi_vars.sh script instead. To understand more about the Unified Directory Layout, including how the environment is initialized and the advantages of using the layout, see Use the setvars and oneapi-vars Scripts with Linux*.

2. Create a hostfile text file that lists the nodes in the cluster using one host name per line. For example:

clusternode1
clusternode2

3. Make sure a passwordless SSH connection is established among all nodes of the cluster. This ensures proper communication of MPI processes among the nodes.
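
To verify the connection, you can, for example, run a remote command on one of the nodes listed in your hostfile; if passwordless SSH is configured correctly, the command completes without a password prompt:

$ ssh clusternode2 hostname
clusternode2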

After completing these steps, you are ready to use the Intel MPI Library.

For detailed system requirements, see Intel® MPI Library System Requirements.

To set the development environment using modulefile, see Use Modulefiles with Linux*.

Building and Running MPI Programs

Compiling an MPI Program

1. Make sure you have a compiler in your PATH. To check this, run the which command on the desired compiler. For example:

$ which icc 
/opt/intel/oneapi/compiler/<version>.<update>/linux/bin/icc

2. Compile a test program using the appropriate compiler driver. For example:

$ mpiicc -o myprog <install-dir>/test/test.c

Running an MPI Program

Use the previously created hostfile and run your program with the mpirun command as follows:

$ mpirun -n <# of processes> -ppn <# of processes per node> -f ./hostfile ./myprog

For example:

$ mpirun -n 2 -ppn 1 -f ./hostfile ./myprog

The test program above produces output in the following format:

Hello world: rank 0 of 2 running on clusternode1
Hello world: rank 1 of 2 running on clusternode2

This output indicates that you have properly configured your environment and that the Intel MPI Library successfully ran the test MPI program on the cluster.
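
If you want a test program of your own, a minimal MPI program that produces output in this format might look like the following sketch (for illustration only; the test.c shipped with the library may differ):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                 /* Initialize the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* Rank of the current process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* Total number of processes */
    MPI_Get_processor_name(name, &namelen); /* Host this rank is running on */

    printf("Hello world: rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();                         /* Clean up the MPI environment */
    return 0;
}

Compile and run it with the same mpiicc and mpirun commands shown above.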

Troubleshooting

If you encounter problems when using Intel MPI Library, go through the following general procedures to troubleshoot them:

  • Check known issues and limitations in the Release Notes.
  • Check host accessibility. Run a simple non-MPI application (for example, the hostname utility) on the problem hosts with mpirun. This check helps you reveal an environment problem (for example, SSH is not configured properly) or a connectivity problem (for example, unreachable hosts).
  • Run the MPI application with debug information enabled. To enable the debug information, set the environment variable I_MPI_DEBUG=6. You can also set a higher debug level to get more detailed information. This action helps you find the problem component. See the example commands after this list.
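
For example, assuming the hostfile created earlier, the two checks might look like this:

$ mpirun -n 2 -ppn 1 -f ./hostfile hostname
$ export I_MPI_DEBUG=6
$ mpirun -n 2 -ppn 1 -f ./hostfile ./myprog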

See more details in the “Troubleshooting” section of the Developer Guide.

More Resources