Intel® MPI Library Developer Guide for Linux* OS

ID 768728
Date 11/07/2023
Public

Tracing Applications

The Intel® MPI Library provides a variety of options for analyzing MPI applications. Some of these options are available within the Intel MPI Library itself, while others require additional analysis tools. For these tools, the Intel MPI Library provides compilation and runtime options and environment variables for easier interoperability.

The Intel MPI Library is tightly integrated with the Intel® Trace Analyzer and Collector, which enables you to analyze and debug MPI applications. The Intel MPI Library provides several compile-time and runtime options to simplify application analysis. Apart from the Intel Trace Analyzer and Collector, there is also the Application Performance Snapshot tool, intended for higher-level MPI analysis.

Intel Trace Analyzer and Collector is available as standalone software and as part of the Intel® HPC Toolkit. Before proceeding to the next steps, make sure you have the product installed.

High-Level Performance Analysis

For a high-level application analysis, Intel provides a lightweight analysis tool, Application Performance Snapshot (APS), which can analyze MPI and non-MPI applications. The tool provides general information about the application, such as MPI and OpenMP* utilization time and load balance, MPI operations usage, memory and disk usage, and other information. This information enables you to get a general idea of the application performance and identify areas for more thorough analysis.
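
The steps below use ./myprog as a placeholder for your own executable. If you want a self-contained program to try the workflow with, the following sketch is a minimal hybrid MPI/OpenMP program; the file name myprog.c, the loop bounds, and the compile line are illustrative assumptions, not part of the Intel MPI Library.

    /* myprog.c - hypothetical example used as ./myprog in this section. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank computes a partial sum over its own slice in an OpenMP loop. */
        double local = 0.0, total = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = rank; i < 1000000; i += size)
            local += 1.0 / (i + 1.0);

        /* The collective call below is what APS and the Intel Trace Analyzer
           attribute MPI time to in their reports. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %f\n", total);

        MPI_Finalize();
        return 0;
    }

Depending on which compilers are installed, a compile line might look like the following (mpiicc and -qopenmp assume the Intel compilers; with GCC-based wrappers, use mpicc -fopenmp instead):

    $ mpiicc -qopenmp myprog.c -o myprog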

Follow these steps to analyze an application with APS:

  1. Set up the environment for the compiler, Intel MPI Library, and APS.
    $ source <install-dir>/setvars.sh
    $ source <install-dir>/vtune/<version>/env/vars.sh
    NOTE:
    If you are using the Intel MPI Library in the Unified Directory Layout, set the environment variables using the /opt/intel/oneapi/<toolkit-version-number>/oneapi-vars.sh script instead. To learn more about the Unified Directory Layout, including how the environment is initialized and the advantages of using the layout, see Use the setvars and oneapi-vars Scripts with Linux*.
  2. Run your application with the -aps option of mpirun (an alternative way to launch the collector is noted after these steps):
    $ mpirun -n 4 -aps ./myprog

    APS will generate a directory named aps_result_<date>-<time> containing the statistics files.

  3. Launch the aps-report tool and pass the generated statistics to the tool:
    $ aps-report ./aps_result_<date>-<time>

    You will see the analysis results printed in the console window. Also, APS will generate an HTML report aps_report_<date>_<time>.html containing the same information.
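
As an alternative to the -aps option shown in step 2, many APS versions document launching the collector executable directly on the mpirun command line; the resulting statistics directory is then processed with aps-report as in step 3. This is a sketch; check the APS documentation for your installed version:

    $ mpirun -n 4 aps ./myprog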

For more details, refer to the Application Performance Snapshot User Guide.

Trace an Application

To analyze an application with the Intel Trace Analyzer and Collector, you first need to generate a trace file of your application, and then open this file in Intel® Trace Analyzer to analyze communication patterns, time utilization, and other elements. Tracing is performed by preloading the Intel Trace Collector profiling library at runtime, which intercepts all MPI calls and generates a trace file. The Intel MPI Library provides the -trace (-t) option to simplify this process.
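
If you prefer to decide about tracing at build time rather than at run time, the Intel MPI compiler wrappers also accept a -trace option that links the executable against the Intel Trace Collector profiling library, so the traced run no longer needs the runtime -trace option. The following is a sketch based on the documented compiler option; consult the compiler command reference of your Intel MPI Library version for details:

    $ mpiicc -trace myprog.c -o myprog
    $ mpirun -n 4 ./myprog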

Complete the following steps:

  1. Set up the environment for the Intel MPI Library, and Intel Trace Analyzer and Collector.
    $ source <mpi-install-dir>/env/vars.sh
    $ source <itac-install-dir>/env/vars.sh
  2. Trace your application with the Intel Trace Collector:
    $ mpirun -trace -n 4 ./myprog 

    As a result, a trace file with the .stf extension is generated. For the example above, it is myprog.stf (see the note after these steps for controlling where trace files are written).

  3. Analyze the application with the Intel Trace Analyzer:
    $ traceanalyzer ./myprog.stf &
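
Trace files are typically written to the current working directory. To collect them in a separate directory, the Intel Trace Collector documents the VT_LOGFILE_PREFIX environment variable; this is a sketch, so verify the variable name and behavior in the Intel Trace Collector reference for your version:

    $ export VT_LOGFILE_PREFIX=./traces
    $ mpirun -trace -n 4 ./myprog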

The workflow above is the most common scenario of tracing with the Intel Trace Collector. For other tracing scenarios, see the Intel Trace Collector documentation.