Intel® Fortran Compiler Classic and Intel® Fortran Compiler Developer Guide and Reference

ID 767251
Date 3/31/2023
Public


Use Coarrays

Coarrays are not supported on macOS systems.

Coarrays, a data sharing concept standardized in Fortran 2008 and extended in Fortran 2018, enable parallel processing using multiple copies of a single program. Each copy, called an image, has ordinary local variables and also shared variables called coarrays or covariables.

A covariable, which can be either an array or a scalar, is a variable whose storage spans all the images of the team that was current when the covariable was established. In this Partitioned Global Address Space (PGAS) model, each image can access its own piece of a covariable as a local variable and can access those pieces that live on other images using coindices, which are enclosed in square brackets.
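As an illustrative sketch (the program and variable names are hypothetical), the following program declares a scalar covariable, lets each image define its own copy, and has image 1 read every image's copy through coindices:

```fortran
! Minimal PGAS sketch: each image stores its image number in a
! covariable; image 1 then reads the other images' copies.
program coindex_demo
  implicit none
  integer :: n[*]        ! scalar covariable; one copy per image
  integer :: i

  n = this_image()       ! each image defines its own local copy
  sync all               ! make all definitions visible before reading

  if (this_image() == 1) then
    do i = 1, num_images()
      print *, 'image', i, 'holds', n[i]   ! [i] selects image i's copy
    end do
  end if
end program coindex_demo
```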

Intel® Fortran supports coarray programs that run using shared memory on a multicore or multiprocessor system. In some products (see the Feature Requirements section), coarray programs can also be built to run using distributed memory across a Linux* or Windows* cluster.

For more details, see the product system requirements in the Release Notes.

NOTE:

Coarrays are only supported on 64-bit architectures.

For more information on how to write programs using coarrays, see books on the Fortran 2008 and Fortran 2018 languages or the ISO Fortran 2008 and Fortran 2018 standards.

Use Coarray Program Syntax

The additional syntax required by Fortran 2008 coarrays includes:

  • The CODIMENSION attribute and "[cobounds]" to declare an object a coarray (covariable)

  • The [coindices] notation to reference covariables on other images

  • The SYNC ALL, SYNC IMAGES, and SYNC MEMORY statements to provide points where images must communicate to synchronize shared data

  • The CRITICAL and END CRITICAL statements to form a block of code executed by one image at a time

  • The LOCK and UNLOCK statements to control objects called locks, used to synchronize actions on specific images

  • The ERROR STOP statement to end all images

  • The ALLOCATE and DEALLOCATE statements, which can allocate and deallocate coarrays

  • Intrinsic procedures IMAGE_INDEX, LCOBOUND, NUM_IMAGES, THIS_IMAGE, and UCOBOUND

  • Atomic subroutines ATOMIC_DEFINE and ATOMIC_REF to define and reference an atomic variable
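As a hedged sketch combining several of these elements (names are hypothetical), the program below uses a CRITICAL construct to serialize updates to a covariable on image 1 and SYNC ALL to order the phases:

```fortran
! Sketch: CRITICAL serializes the increments of a shared counter;
! SYNC ALL ensures all updates complete before the total is printed.
program critical_demo
  implicit none
  integer :: total[*]

  if (this_image() == 1) total = 0
  sync all

  critical                        ! one image at a time executes this block
    total[1] = total[1] + 1
  end critical

  sync all
  ! After the second SYNC ALL, total on image 1 equals num_images().
  if (this_image() == 1) print *, 'total =', total
end program critical_demo
```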

The following Fortran 2018 coarray extensions are also supported:

  • The CHANGE TEAM and END TEAM statements to change the current team on which the image is executing

  • The EVENT POST and EVENT WAIT statements to synchronize execution between two images

  • The FAIL IMAGE statement to simulate a failed image

  • The FORM TEAM statement to create one or more teams of images from the current team

  • The SYNC TEAM statement to synchronize a team of images

  • Intrinsic procedures COSHAPE, EVENT_QUERY, FAILED_IMAGES, GET_TEAM, IMAGE_STATUS, STOPPED_IMAGES, and TEAM_NUMBER

  • New forms of the IMAGE_INDEX, NUM_IMAGES, and THIS_IMAGE intrinsics with optional TEAM and/or TEAM_NUMBER arguments

  • Atomic subroutines ATOMIC_ADD, ATOMIC_AND, ATOMIC_CAS, ATOMIC_FETCH_ADD, ATOMIC_FETCH_AND, ATOMIC_FETCH_OR, ATOMIC_FETCH_XOR, ATOMIC_OR, and ATOMIC_XOR

  • Collective subroutines CO_BROADCAST, CO_MAX, CO_MIN, CO_REDUCE, and CO_SUM

  • Optional STAT= and ERRMSG= specifiers on a CRITICAL construct, optional STAT and ERRMSG arguments for the MOVE_ALLOC intrinsic, optional STAT=, TEAM=, and TEAM_NUMBER= specifiers on image selectors, and an optional STAT argument to the ATOMIC_DEFINE and ATOMIC_REF subroutines

  • The derived type TEAM_TYPE, defined in the intrinsic module ISO_FORTRAN_ENV, which allows creation of team variables

  • The constants INITIAL_TEAM, CURRENT_TEAM, and PARENT_TEAM defined in the intrinsic module ISO_FORTRAN_ENV

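The team statements above can be sketched as follows (program and variable names are hypothetical): the images are split into two teams, and each image then executes within its new team, where image numbering is relative to the team.

```fortran
! Sketch of FORM TEAM / CHANGE TEAM / SYNC TEAM from Fortran 2018.
program team_demo
  use, intrinsic :: iso_fortran_env, only: team_type
  implicit none
  type(team_type) :: t
  integer :: color

  ! First half of the images joins team 1, the rest team 2.
  color = merge(1, 2, this_image() <= num_images() / 2)
  form team (color, t)            ! create teams from the current team

  change team (t)                 ! inside, numbering is per-team
    print *, 'team', team_number(), 'image', this_image(), &
             'of', num_images()
    sync team (t)                 ! synchronize only this team's images
  end team
end program team_demo
```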

Use the Coarray Compiler Options

You must specify the -coarray (Linux) or /Qcoarray (Windows) compiler option (hereafter referred to as [Q]coarray) to enable the compiler to recognize coarray syntax. If you do not specify this option, a program that uses coarray syntax or features produces a compile-time error.

Only one [Q]coarray option is valid on the command line. If multiple coarray compiler options are specified, the last one specified is used. An exception to this rule is the [Q]coarray compiler option using keyword single; if specified, this option takes precedence regardless of where it appears on the command line.

The following describes the option keywords:

  • Using [Q]coarray with no keyword is equivalent to running on one node (shared memory).

  • Using [Q]coarray with keyword shared causes the underlying Intel® MPI Library parallelization to run on one node with multiple cores or processors with shared memory.

  • Using [Q]coarray with keyword distributed requires a special license to be installed (see the Feature Requirements section) and causes the underlying Intel® MPI Library parallelization to run in a multi-node environment (multiple CPUs with distributed memory).

  • Using [Q]coarray with keyword single creates an executable that will not be replicated, resulting in a single running image. This is in contrast to the self-replicating behavior that occurs when any other coarray keyword is specified. This option is useful for debugging purposes.

No special procedure is necessary to run a program that uses coarrays. You simply run the executable file.
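As a sketch of a Linux build-and-run cycle (the source and executable names are placeholders), no mpiexec invocation is needed:

```shell
# "hello.f90" is a hypothetical source file containing a coarray program.
ifort -coarray=shared hello.f90 -o hello
./hello    # the executable starts all of its images itself
```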

The underlying parallelization implementation uses the Intel® MPI Library. Installation of the compiler automatically installs the necessary runtime libraries to run on shared memory. Products supporting clusters will also install the necessary runtime libraries to run on distributed memory. Use of coarray applications with any other Intel® MPI Library implementation, or with OpenMP*, is not supported.

By default, the number of images created is equal to the number of execution units on the current system. You can override this by specifying a number using the [Q]coarray-num-images compiler option on the ifort command line that compiles the main program. You can also specify the number of images at execution time in the environment variable FOR_COARRAY_NUM_IMAGES.
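For example (the executable name is a placeholder), the environment variable can override the image count at launch time without recompiling:

```shell
# Request 4 images for the next run of the program.
export FOR_COARRAY_NUM_IMAGES=4
# ./myprog would now start 4 images ("myprog" is hypothetical).
```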

Use a Configuration File

Using a configuration file, specified with the [Q]coarray-config-file compiler option, is appropriate in only a limited number of cases.

The main reason to use one is to take advantage of Intel® MPI Library features in the coarray environment. To do so, place the command-line segments that would follow "mpiexec -config filename" in a file, then pass that file's name to the Intel® MPI Library using the [Q]coarray-config-file compiler option.

If the [Q]coarray-num-images compiler option also appears on the command line, it will be overridden by what is in the configuration file.

Rules for using an Intel® MPI Library configuration file are as follows:

  • The format of a configuration file is described in the Intel® MPI Library documentation. You will need to add the Intel® MPI Library option "-genv FOR_ICAF_STATUS launched" in the configuration file in order for coarrays to work on multi-node (distributed memory) systems.

  • You can also set the environment variable FOR_COARRAY_CONFIG_FILE to be the filename and path of the Intel® MPI Library configuration file you want to use at execution time.
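A hypothetical configuration file for a 16-image distributed run might contain a single line of mpiexec arguments like the following (the image count and executable name are assumptions; consult the Intel® MPI Library documentation for the exact file format):

```text
-genv FOR_ICAF_STATUS launched -n 16 ./myprog
```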

Use Configuration Environment Variables

Intel Fortran uses Intel® MPI Library as the transport layer for the coarray feature. Intel® MPI Library can be tuned to a particular usage pattern with environment variables.

Intel Fortran has chosen to set some Intel® MPI Library control variables to values that are good for most users and many patterns of coarray usage. However, you may want to experiment with other variables that Intel® Fortran does not set. They are not set by Intel Fortran because they may reduce performance with other usage patterns or because they may cause errors when used with older versions of Intel® MPI Library.

Applications running on shared memory with Intel® MPI Library version 2019 Update 5 or later may benefit from setting the following two variables to "shm":

I_MPI_FABRICS

I_MPI_DEVICE
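For example, on Linux these could be exported in the shell before launching the program:

```shell
# Select the shared-memory transport (Intel MPI 2019 Update 5 or later).
export I_MPI_FABRICS=shm
export I_MPI_DEVICE=shm
```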

NOTE:

Setting these environment variables increases the use of shared memory, which can cause hangs on Linux systems such as Red Hat 7.2 and Ubuntu. You may need to increase the size of /dev/shm to avoid a Linux bus error (SIGBUS).

Applications which over-subscribe (have more coarray images running than there are actual processors in the machine) may benefit from setting the following variable to 1:

I_MPI_WAIT_MODE

Applications which over-subscribe a great deal (more than four images per processor) may benefit from setting the following variable to 3:

I_MPI_THREAD_YIELD
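As a sketch, both oversubscription settings can be exported before the run:

```shell
# Wait mode helps when images outnumber processors; thread yield helps
# when oversubscription exceeds about four images per processor.
export I_MPI_WAIT_MODE=1
export I_MPI_THREAD_YIELD=3
```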

This is just an introduction to using configuration environment variables. There are many other environment variables that you can use to tune Intel® MPI Library. For more information, see the topic Other Environment Variables in the Intel® MPI Library Developer Reference.

Examples

Linux

  • -coarray=shared -coarray-num-images=8

    The above builds a coarray program that runs on shared memory using 8 images.

  • -coarray=distributed -coarray-num-images=8

    The above builds a coarray program that runs on distributed memory across 8 images.

Windows

  • /Qcoarray:shared /Qcoarray-num-images:8

    The above builds a coarray program that runs on shared memory using 8 images.

  • /Qcoarray:shared /Qcoarray-config-file:filename

    The above builds a coarray program that runs on shared memory using the Intel® MPI Library configuration detailed in filename.

  • /Qcoarray:distributed /Qcoarray-config-file:filename

    The above builds a coarray program that runs on distributed memory using the Intel® MPI Library configuration detailed in filename.