Intel® Fortran Compiler Classic and Intel® Fortran Compiler Developer Guide and Reference

ID 767251
Date 6/24/2024
Public

Use Coarrays

Intel® Fortran supports coarray programs that run using shared memory on a multicore or multiprocessor system. Coarray programs can also be built to run using distributed memory across a Linux* or Windows* cluster.

Coarrays are only supported on 64-bit architectures. For more details, see the product system requirements in the Release Notes.

Coarrays, a data sharing concept standardized in Fortran 2008 and extended in Fortran 2018 and 2023, enable parallel processing using multiple copies of a single program. Each copy, called an image, has ordinary local variables and also shared variables called coarrays or covariables.

A covariable, which can be either an array or a scalar, is a variable whose storage spans all the images of the team that was current when the covariable was established. In this Partitioned Global Address Space (PGAS) model, each image can access its own piece of a covariable as a local variable and can access those pieces that live on other images using coindices, which are enclosed in square brackets.

For more information on how to write programs using coarrays, see books on the Fortran 2008 and later language versions, or the ISO Fortran 2008 and later standards versions.
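
For illustration, here is a minimal sketch in which each image stores a value in its copy of a scalar covariable and image 1 then reads every image's copy through coindices (the program and variable names are only illustrative):

  program coindex_demo
    implicit none
    integer :: counter[*]          ! scalar covariable: one copy per image
    integer :: me, n, i

    me = this_image()
    n  = num_images()
    counter = me * 10              ! define this image's copy locally
    sync all                       ! make all definitions visible to all images
    if (me == 1) then
      do i = 1, n
        print *, 'image', i, 'holds', counter[i]   ! coindexed access to image i
      end do
    end if
  end program coindex_demo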

Use Coarray Program Syntax

The additional syntax required by Fortran 2008 coarrays includes the following (a short sketch combining several of these items appears after the list):

  • The CODIMENSION attribute and [cobounds] to declare an object a coarray (covariable)

  • The [coindices] notation to reference covariables on other images

  • The SYNC ALL, SYNC IMAGES, and SYNC MEMORY statements to provide points where images must communicate to synchronize shared data

  • The CRITICAL and END CRITICAL statements to form a block of code executed by one image at a time

  • The LOCK and UNLOCK statements to control objects called locks, used to synchronize actions on specific images

  • The ERROR STOP statement to end all images

  • The ALLOCATE and DEALLOCATE statements to specify coarrays

  • Intrinsic procedures IMAGE_INDEX, LCOBOUND, NUM_IMAGES, THIS_IMAGE, and UCOBOUND

  • Atomic subroutines ATOMIC_DEFINE and ATOMIC_REF to define and reference an atomic variable
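
The following sketch shows several of these elements together: the CODIMENSION attribute, an allocatable coarray, SYNC ALL, and a CRITICAL block (the program and variable names are illustrative only):

  program f2008_demo
    implicit none
    integer, codimension[*] :: total          ! scalar covariable via CODIMENSION
    integer, allocatable    :: work(:)[:]     ! allocatable coarray
    integer :: me

    me = this_image()
    allocate (work(100)[*])                   ! establish the coarray on all images
    work  = me
    total = 0
    sync all                                  ! all images reach this point first

    critical                                  ! executed by one image at a time
      total[1] = total[1] + sum(work)         ! accumulate into image 1's copy
    end critical
    sync all

    if (me == 1) print *, 'grand total =', total
    deallocate (work)
  end program f2008_demo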

The following Fortran 2018 coarray extensions are also supported (a sketch of teams and a collective appears after this list):

  • The CHANGE TEAM and END TEAM statements to change the current team on which the image is executing

  • The EVENT POST and EVENT WAIT statements to synchronize execution between two images

  • The FAIL IMAGE statement to simulate a failed image

  • The FORM TEAM statement to create one or more teams of images from the current team

  • The SYNC TEAM statement to synchronize a team of images

  • Intrinsic procedures COSHAPE, EVENT_QUERY, FAILED_IMAGES, GET_TEAM, IMAGE_STATUS, STOPPED_IMAGES, and TEAM_NUMBER

  • New forms of the IMAGE_INDEX, NUM_IMAGES, and THIS_IMAGE intrinsics with optional TEAM and/or TEAM_NUMBER arguments

  • Atomic subroutines ATOMIC_ADD, ATOMIC_AND, ATOMIC_CAS, ATOMIC_FETCH_ADD, ATOMIC_FETCH_AND, ATOMIC_FETCH_OR, ATOMIC_FETCH_XOR, ATOMIC_OR, and ATOMIC_XOR

  • Collective subroutines CO_BROADCAST, CO_MAX, CO_MIN, CO_REDUCE, and CO_SUM

  • Optional STAT= and ERRMSG= specifiers on a CRITICAL construct, optional STAT and ERRMSG arguments for the MOVE_ALLOC intrinsic, and an optional STAT argument to the ATOMIC_DEFINE and ATOMIC_REF subroutines

  • The derived type TEAM_TYPE, defined in the intrinsic module ISO_FORTRAN_ENV, which allows creation of team variables

  • The constants INITIAL_TEAM, CURRENT_TEAM, and PARENT_TEAM defined in the intrinsic module ISO_FORTRAN_ENV

  • An image selector that has a TEAM=, TEAM_NUMBER=, or STAT= specifier
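
As a brief sketch of the team and collective features listed above (the team split, names, and output are illustrative):

  program f2018_teams_demo
    use, intrinsic :: iso_fortran_env, only: team_type
    implicit none
    type(team_type) :: subset
    integer :: my_team, contribution

    my_team = mod(this_image(), 2) + 1       ! split images into team 1 and team 2
    form team (my_team, subset)              ! create teams from the current team

    change team (subset)                     ! execute within the new team
      contribution = this_image()            ! image index within the current team
      call co_sum(contribution, result_image=1)
      if (this_image() == 1) print *, 'team', team_number(), 'sum =', contribution
    end team
  end program f2018_teams_demo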

Use the Coarray Compiler Options

You must specify the -coarray (Linux) or /Qcoarray (Windows) compiler option (hereafter referred to as [Q]coarray) to enable the compiler to recognize coarray syntax. If you do not specify this compiler option, a program that uses coarray syntax or features produces a compile-time error.

Only one [Q]coarray option is valid on the command line. If multiple coarray compiler options are specified, the last one specified is used. An exception to this rule is the [Q]coarray compiler option using keyword single; if specified, this option takes precedence regardless of where it appears on the command line.

The following describes the option keywords:

  • Using [Q]coarray causes the underlying Intel® MPI Library parallelization to run on multiple cores.

  • Using [Q]coarray-config-file:file can extend the execution to other nodes in a distributed system.

  • Using [Q]coarray with keyword single creates an executable that will not be replicated, resulting in a single running image. This is in contrast to the self-replicating behavior that occurs when any other coarray keyword is specified. This option is useful for debugging purposes.

  • Using [Q]coarray-num-images allows you to specify the number of images that can be used to run a coarray executable.

No special procedure is necessary to run a program that uses coarrays. You simply run the executable file.
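
For example, on Linux a coarray program might be compiled and run as follows (the ifx driver invocation, source file name, and image count are illustrative):

  ifx -coarray -coarray-num-images=4 hello_images.f90 -o hello_images
  ./hello_images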

The underlying parallelization implementation uses the Intel® MPI Library. Installation of the compiler automatically installs the necessary runtime libraries to run on shared memory. Products supporting clusters will also install the necessary runtime libraries to run on distributed memory. Use of coarray applications with any other Intel® MPI Library implementation, or with OpenMP*, is not supported.

NOTE:
The conda package for the Intel® Fortran Compiler runtime no longer has a runtime dependency on the Intel® MPI Library, which is needed to enable coarrays. If you maintain a conda package that has a runtime dependency on the Intel Fortran Compiler runtime and your application uses the Intel® MPI Library, you need to explicitly add the impi_rt conda package for the Intel® MPI Library to the list of runtime dependencies in your project's meta.yaml file.
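
As a sketch, the run requirements in such a meta.yaml might look like the following (the compiler runtime package name is illustrative; only impi_rt is named by this note):

  requirements:
    run:
      - intel-fortran-rt     # Intel Fortran Compiler runtime (illustrative package name)
      - impi_rt              # Intel MPI Library runtime, needed for coarrays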

By default, the number of images created is equal to the number of execution units on the current system. You can override this by specifying a number using the [Q]coarray-num-images compiler option on the command line that compiles the main program. You can also specify the number of images at execution time in the environment variable FOR_COARRAY_NUM_IMAGES.
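
For example, on Linux the image count can be set at execution time as follows (the executable name is illustrative):

  export FOR_COARRAY_NUM_IMAGES=4
  ./hello_images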

Use a Configuration File

Specifying a configuration file with the [Q]coarray-config-file compiler option is useful if you want more control over image placement, especially on multi-node systems.

The main reason to use a configuration file is to take advantage of Intel® MPI Library features in the coarray environment. To do so, put the command-line segments that mpiexec -config filename would use into a file named filename, then pass that file name to the Intel® MPI Library using the [Q]coarray-config-file compiler option.

If the [Q]coarray-num-images compiler option also appears on the command line, it will be overridden by what is in the configuration file.

Rules for using an Intel® MPI Library configuration file are as follows:

  • The format of a configuration file is described in the Intel® MPI Library documentation. You need to add the Intel® MPI Library option -genv FOR_ICAF_STATUS launched to the configuration file for coarrays to work on multi-node (distributed memory) systems (see the sketch after this list).

  • You can also set the environment variable FOR_COARRAY_CONFIG_FILE to be the filename and path of the Intel® MPI Library configuration file you want to use at execution time.
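
As a rough sketch, a configuration file for a multi-node run might contain a single line such as the following (the machine file, image count, and program name are placeholders; see the Intel® MPI Library documentation for the authoritative format):

  -genv FOR_ICAF_STATUS launched -machinefile ./hosts -n 8 ./my_coarray_prog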

Use Configuration Environment Variables

Intel Fortran uses Intel® MPI Library as the transport layer for the coarray feature. Intel® MPI Library can be tuned to a particular usage pattern with environment variables.

Intel Fortran sets some Intel® MPI Library control variables to values that work well for most users and many patterns of coarray usage. You may want to experiment with other variables that Intel Fortran does not set; they are left unset because they can reduce performance with other usage patterns or cause errors with older versions of the Intel® MPI Library.

Applications running on shared memory with Intel® MPI Library version 2019 Update 5 or greater may benefit from setting the following variable to shm.

I_MPI_FABRICS

NOTE:
When these environment variables are set on Linux systems, hangs can occur on Red Hat 7.2 and Ubuntu because the settings increase the use of shared memory. You may need to increase the size of /dev/shm to avoid a Linux bus error (SIGBUS).
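
For example, on a system where /dev/shm is a tmpfs mount, its size might be increased temporarily with a command along these lines (the size shown is illustrative; consult your distribution's documentation):

  sudo mount -o remount,size=8G /dev/shm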

Applications which over-subscribe (have more coarray images running than there are actual processors in the machine) may benefit from setting the following variable to 1:

I_MPI_WAIT_MODE

Applications which over-subscribe a great deal (more than four images per processor) may benefit from setting the following variable to 3:

I_MPI_THREAD_YIELD
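
For example, on Linux all three variables could be set before launching a heavily over-subscribed shared-memory run (the executable name is illustrative; the values are those suggested above):

  export I_MPI_FABRICS=shm
  export I_MPI_WAIT_MODE=1
  export I_MPI_THREAD_YIELD=3
  ./my_coarray_prog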

This is just an introduction to using configuration environment variables. There are many other environment variables that you can use to tune the Intel® MPI Library. For more information, see the Intel® MPI Library documentation.

Examples

Linux

  • -coarray -coarray-num-images=8

    This runs a coarray program on a single node using 8 images.

  • -coarray -coarray-config-file=filename -coarray-num-images=8

    This runs a coarray program using the Intel® MPI Library configuration detailed in filename to customize the number of nodes and other options. Uses 8 images unless a different number is specified in filename.

Windows

  • /Qcoarray /Qcoarray-num-images:8

    This runs a coarray program on a single node using 8 images.

  • /Qcoarray /Qcoarray-config-file:filename /Qcoarray-num-images:8

    This runs a coarray program using the Intel® MPI Library configuration detailed in filename to customize the number of nodes and other options. Uses 8 images unless a different number is specified in filename.