
Developer Guide for Intel® oneAPI Math Kernel Library Windows*

ID 766692
Date 6/24/2024
Public



Reproducibility Conditions

To get reproducible results from run to run, ensure that the number of threads remains constant. Specifically:

  • If you are running your program with OpenMP* parallelization on different processors, explicitly specify the number of threads.
  • To ensure that your application behaves deterministically with OpenMP* parallelization and does not adjust the number of threads dynamically at run time, set MKL_DYNAMIC and OMP_DYNAMIC to FALSE. This is especially important if you run your program on different systems (see the sketch after this list).
  • If you are running your program with Intel® Threading Building Blocks parallelization, numerical reproducibility is not guaranteed.
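For illustration, here is a minimal sketch in C of one way to satisfy these conditions from inside the application rather than through environment variables; the thread count of 4 is an arbitrary example, not a recommendation:

    #include <mkl.h>
    #include <omp.h>

    int main(void)
    {
        omp_set_dynamic(0);      /* same effect as OMP_DYNAMIC=FALSE */
        mkl_set_dynamic(0);      /* same effect as MKL_DYNAMIC=FALSE */
        mkl_set_num_threads(4);  /* explicit, constant thread count (4 is arbitrary) */

        /* ... call oneAPI Math Kernel Library routines here; with a constant
           thread count, results are reproducible from run to run. */
        return 0;
    }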

Strict CNR Mode

In strict CNR mode, oneAPI Math Kernel Library provides bitwise reproducible results for a limited set of functions and code branches even when the number of threads changes. These routines and branches support strict CNR mode (64-bit libraries only):

  • ?gemm, ?symm, ?hemm, ?trsm, and their CBLAS equivalents (cblas_?gemm, cblas_?symm, cblas_?hemm, and cblas_?trsm).
  • The Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) code branches.

When using other routines or CNR branches, oneAPI Math Kernel Library operates in standard (non-strict) CNR mode, subject to the restrictions described above. Enabling strict CNR mode can reduce performance.
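As a sketch, assuming the Intel AVX-512 code branch, strict CNR mode can be requested through the CNR support function before any other oneAPI Math Kernel Library call; the equivalent environment setting is MKL_CBWR=AVX512,STRICT:

    #include <stdio.h>
    #include <mkl.h>

    int main(void)
    {
        /* Combine the chosen code branch with the STRICT flag; this call
           must precede any other oneMKL function call. */
        if (mkl_cbwr_set(MKL_CBWR_AVX512 | MKL_CBWR_STRICT) != MKL_CBWR_SUCCESS) {
            printf("Strict CNR mode is not available for this branch or library.\n");
            return 1;
        }

        /* ... ?gemm, ?symm, ?hemm, and ?trsm calls made after this point
           return bitwise-identical results even if the number of threads changes. */
        return 0;
    }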

Reproducibility Conditions for Intel® GPUs

When CNR mode is enabled (any code branch other than OFF), oneMKL provides bitwise-reproducible results for a limited set of functions on Intel® GPUs:

  • BLAS level-3 routines (gemm, symm, hemm, syrk, herk, syr2k, her2k, trmm, trsm), all precisions
  • BLAS level-3 extensions (batched versions of the above, and gemmt), all precisions

Both the OpenMP* offload APIs and the DPC++ APIs support GPU CNR mode.

Reproducibility is guaranteed when the code runs on the same GPU, or across two GPUs with identical product names (for example, Intel® Arc™ A770).
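The following is a brief sketch, under the assumption that the same CNR controls used on the CPU (the MKL_CBWR environment variable or the mkl_cbwr_set support function) also govern GPU execution; selecting any code branch other than OFF would then enable GPU CNR mode for the BLAS level-3 routines listed above:

    #include <mkl.h>

    int main(void)
    {
        /* Assumption: choosing any CNR branch other than OFF (AUTO here)
           before the first oneMKL call also enables CNR mode for BLAS
           level-3 routines executed on an Intel GPU. */
        mkl_cbwr_set(MKL_CBWR_AUTO);

        /* ... OpenMP* offload or DPC++ BLAS level-3 calls here ... */
        return 0;
    }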

NOTE:
  • As usual, align your data, even in CNR mode, to obtain the best possible performance. While CNR mode fully supports unaligned input and output data, using unaligned data might reduce the performance of some oneAPI Math Kernel Library functions on earlier Intel processors. For more details, refer to the coding techniques that improve performance.

  • Conditional Numerical Reproducibility does not ensure that bitwise-identical NaN values are generated when the input data contains NaN values.

  • If dynamic memory allocation fails on one run but succeeds on another run, you might not get reproducible results between those two runs.

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Notice revision #20201201