Intel® Fortran Compiler Classic and Intel® Fortran Compiler Developer Guide and Reference

ID 767251
Date 6/24/2024
Public


Automatic Parallelization

The auto-parallelization feature of the Intel® Fortran Compiler automatically translates serial portions of the input program into equivalent multithreaded code. Automatic parallelization determines which loops are good worksharing candidates, performs the dataflow analysis to verify correct parallel execution, and partitions the data for threaded code generation as is needed in programming with OpenMP directives. The OpenMP and auto-parallelization functionality provides performance gains from shared memory on multiprocessor and multi-core systems.

The auto-parallelizer analyzes the dataflow of the loops in the application source code and generates multithreaded code for those loops which can safely and efficiently be executed in parallel.

This behavior enables the potential exploitation of the parallel architecture found in symmetric multiprocessor (SMP) systems.

Automatic parallelization frees developers from having to:

  • Find loops that are good worksharing candidates.
  • Perform the dataflow analysis to verify correct parallel execution.
  • Partition the data for threaded code generation as is needed in programming with OpenMP directives.

Although OpenMP directives enable you to transform serial applications into parallel applications quickly, you must explicitly identify the specific portions of your application code that contain parallelism and add the appropriate compiler directives. With auto-parallelization, the compiler automatically attempts, during compilation, to decompose the code sequences into separate threads for parallel processing. No other effort is needed.

NOTE:

Using the [Q]parallel option enables parallelization for both Intel® microprocessors and non-Intel microprocessors. The resulting executable may get a greater performance gain on Intel® microprocessors than on non-Intel microprocessors. The parallelization can also be affected by certain options, such as /arch (Windows), -m (Linux), or [Q]x.

Serial code can be divided so that the code can execute concurrently on multiple threads. For example, consider the following serial code example:

subroutine ser(a, b, c)
  integer, dimension(100) :: a, b, c
  do i=1,100
    a(i) = a(i) + b(i) * c(i)
  enddo 
end subroutine ser

The following example illustrates one way the loop iteration space shown in the previous example might be divided to execute on two threads:

subroutine par(a, b, c)
  integer, dimension(100) :: a, b, c
  ! Thread 1
  do i=1,50
    a(i) = a(i) + b(i) * c(i)
  enddo
  ! Thread 2
  do i=51,100
    a(i) = a(i) + b(i) * c(i)
  enddo 
end subroutine par
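
To have the compiler attempt this partitioning automatically, compile with the auto-parallelization option; for example, commands similar to the following might be used (ser.f90 is a placeholder file name for a source file containing the serial subroutine above):

Linux

ifx -c -parallel ser.f90

Windows

ifx ser.f90 /c /Qparallel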

Auto-Vectorization and Parallelization

Auto-vectorization detects low-level operations in the program that can be done in parallel, and then converts the sequential program to process 2, 4, 8, or (up to) 16 elements in one operation, depending on the data type. In some cases, auto-parallelization and vectorization can be combined for better performance results. For example, in the code below, thread-level parallelism can be exploited in the outermost loop, while instruction-level parallelism can be exploited in the innermost loop:

DO I = 1, 100     ! Execute groups of iterations in different threads (TLP)
  DO J = 1, 32    ! Execute in SIMD style with multimedia extension (ILP)
     A(J,I) = A(J,I) + 1
  ENDDO 
ENDDO

With the relatively small effort of adding OpenMP directives to existing code, you can transform a sequential program into a parallel program. The [Q]openmp option must be specified to enable the OpenMP directives. The following example shows OpenMP directives within the code:

!$OMP PARALLEL DO PRIVATE(NUM) SHARED(X, A, B, C)
! PARALLEL DO defines a parallel region that
! implicitly contains a single DO worksharing directive
DO I = 1, 1000
  NUM = FOO(B(I), C(I))
  X(I) = BAR(A(I), NUM)
  ! Assume FOO and BAR have no side effects
ENDDO
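
To compile code that contains OpenMP directives, specify the [Q]openmp option; for example, a command similar to the following might be used (omp_sample.f90 is a placeholder file name):

Linux

ifx -qopenmp omp_sample.f90

Windows

ifx /Qopenmp omp_sample.f90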

NOTE:

Options that use OpenMP are available for both Intel® and non-Intel microprocessors, but these options may perform additional optimizations on Intel® microprocessors that they do not perform on non-Intel microprocessors. The list of major, user-visible OpenMP constructs and features that may perform differently on Intel® microprocessors than on non-Intel microprocessors includes: locks (internal and user visible), the SINGLE construct, barriers (explicit and implicit), parallel loop scheduling, reductions, memory allocation, and thread affinity and binding.

Using Parallelism Reports

To generate a parallelism report, use the -qopt-report-phase=par (Linux) or /Qopt-report-phase:par (Windows) option along with the -qopt-report=n (Linux) or /Qopt-report:n (Windows) option. By default, the auto-parallelism report generates a medium level of detail, where n=2. You can use the [q or Q]opt-report option along with the [q or Q]opt-report-phase option if you want a greater or lesser level of detail. To generate the maximum diagnostic details, specify n=5 for ifort or n=3 for ifx.

Run the report by entering commands similar to the following:

Linux

ifx -c -parallel -qopt-report=3 sample.f90

Windows

ifx sample.f90 /c /Qparallel /Qopt-report:3

NOTE:

The compiler option -c (Linux) or /c (Windows) prevents linking and instructs the compiler to stop compilation after the object file is generated. The example is compiled without generating an executable.
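
For the ifort compiler, the maximum level of report detail noted above is 5. On Linux, a command similar to the following might be used to request it (sample.f90 is again a placeholder file name):

ifort -c -parallel -qopt-report=5 -qopt-report-phase=par sample.f90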

By default, the report is written to a file with the same name as the object file and a .yaml extension, placed in the same directory as the object file. Using the above command-line entries, you obtain an output file called sample.yaml. Use the [q or Q]opt-report-file option to specify a different name for the output file that captures the report results.
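
For example, to direct the report to a file named par_report.yaml instead (the file name is a placeholder, and the option spellings shown assume the same = and : forms used by the other report options):

Linux

ifx -c -parallel -qopt-report=3 -qopt-report-file=par_report.yaml sample.f90

Windows

ifx sample.f90 /c /Qparallel /Qopt-report:3 /Qopt-report-file:par_report.yaml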

For more information on options to generate reports, see Optimization Report Options.