
Developer Guide for Intel® oneAPI Math Kernel Library Windows*

ID 766692
Date 10/31/2024

What You Need to Know Before You Begin Using the Intel® oneAPI Math Kernel Library

Mathematical problem

Identify all Intel® oneAPI Math Kernel Library (oneMKL) function domains that you require:

  • BLAS
  • Sparse BLAS
  • LAPACK
  • PBLAS
  • ScaLAPACK
  • Sparse Solver routines
  • Parallel Direct Sparse Solvers for Clusters
  • Vector Mathematics functions (VM)
  • Vector Statistics functions (VS)
  • Fourier Transform functions (FFT)
  • Cluster FFT
  • Trigonometric Transform routines
  • Poisson, Laplace, and Helmholtz Solver routines
  • Optimization (Trust-Region) Solver routines
  • Data Fitting Functions
  • Extended Eigensolver Functions

Reason: The function domain you intend to use narrows the search in the Intel® oneAPI Math Kernel Library (oneMKL) Developer Reference for the specific routines you need. Additionally, if you are using the Intel® oneAPI Math Kernel Library (oneMKL) cluster software, your link line is function-domain specific (see Working with the Intel® oneAPI Math Kernel Library Cluster Software). Coding tips may also depend on the function domain (see Other Tips and Techniques to Improve Performance).
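
For example, a routine from the BLAS domain can be called through the CBLAS interface declared in mkl.h. The following minimal C sketch (the matrix sizes and data are purely illustrative) computes a small matrix product with cblas_dgemm:

    #include <stdio.h>
    #include "mkl.h"

    int main(void) {
        /* BLAS domain: C = alpha*A*B + beta*C for small 2x2 matrices */
        double A[4] = {1.0, 2.0, 3.0, 4.0};
        double B[4] = {5.0, 6.0, 7.0, 8.0};
        double C[4] = {0.0, 0.0, 0.0, 0.0};

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

        printf("C[0][0] = %f\n", C[0]);
        return 0;
    }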

Programming language

Intel® oneAPI Math Kernel Library (oneMKL) provides support for both Fortran and C/C++ programming. Identify the language interfaces that your function domains support (see Appendix A: Intel® oneAPI Math Kernel Library Language Interfaces Support).

Reason: Intel® oneAPI Math Kernel Library (oneMKL) provides language-specific include files for each function domain to simplify program development (see Language Interfaces Support by Function Domain).

For a list of language-specific interface libraries and modules and an example of how to generate them, see also Using Language-Specific Interfaces with Intel® oneAPI Math Kernel Library.
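
As an illustration, a C program can access any function domain through the single include file mkl.h, while Fortran programs use mkl.fi or the supplied *.f90 interface modules. The sketch below (vector length and data are illustrative) calls into the Vector Mathematics (VM) domain:

    #include <stdio.h>
    #include "mkl.h"

    int main(void) {
        /* VM domain: element-wise addition r[i] = a[i] + b[i] */
        double a[3] = {1.0, 2.0, 3.0};
        double b[3] = {10.0, 20.0, 30.0};
        double r[3];

        vdAdd(3, a, b, r);
        printf("r = %f %f %f\n", r[0], r[1], r[2]);
        return 0;
    }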

Range of integer data

If your system is based on the Intel 64 architecture, identify whether your application performs calculations with large data arrays (of more than 2³¹-1 elements).

Reason: To operate on large data arrays, you need to select the ILP64 interface, where integers are 64-bit; otherwise, use the default, LP64, interface, where integers are 32-bit (see Using the ILP64 Interface vs. LP64 Interface).
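
One way to keep source code portable between the two interfaces is to declare every integer passed to oneMKL as MKL_INT, which is 32-bit under LP64 and 64-bit under ILP64. A minimal sketch, assuming the ILP64 build is selected by defining MKL_ILP64 at compile time and linking the ILP64 libraries:

    #include "mkl.h"

    void scale_vector(double *x, MKL_INT n)
    {
        /* MKL_INT matches the selected interface:
           32-bit with LP64 (the default), 64-bit with ILP64 */
        MKL_INT incx = 1;
        cblas_dscal(n, 2.0, x, incx);   /* x = 2.0 * x */
    }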

Threading model

Identify whether and how your application is threaded:

  • Threaded with the Intel compiler
  • Threaded with a third-party compiler
  • Not threaded

Reason: The compiler you use to thread your application determines which threading library you should link with your application. For applications threaded with a third-party compiler, you may need to use Intel® oneAPI Math Kernel Library (oneMKL) in the sequential mode (for more information, see Linking with Threading Libraries).
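
If you link with the Single Dynamic Library (mkl_rt), the threading layer can also be selected at run time with mkl_set_threading_layer or the MKL_THREADING_LAYER environment variable. A minimal sketch, assuming the application is linked against mkl_rt:

    #include "mkl.h"

    int main(void) {
        /* Applies only to the Single Dynamic Library (mkl_rt):
           select the sequential layer, for example when the application
           is already threaded with a third-party compiler. */
        mkl_set_threading_layer(MKL_THREADING_SEQUENTIAL);

        /* ... oneMKL calls here run sequentially ... */
        return 0;
    }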

Number of threads

If your application uses an OpenMP* threading run-time library, determine the number of threads you want Intel® oneAPI Math Kernel Library (oneMKL) to use.

Reason: By default, the OpenMP* run-time library sets the number of threads for Intel® oneAPI Math Kernel Library (oneMKL). If you need a different number, you have to set it yourself using one of the available mechanisms. For more information, see Improving Performance with Threading.
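
For example, the number of threads can be requested programmatically with mkl_set_num_threads or through the MKL_NUM_THREADS environment variable; the value 4 below is purely illustrative:

    #include "mkl.h"

    int main(void) {
        /* Request four OpenMP threads for subsequent oneMKL calls;
           equivalent to setting MKL_NUM_THREADS=4 before the run. */
        mkl_set_num_threads(4);

        /* ... threaded oneMKL calls here ... */
        return 0;
    }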

Linking model

Decide which linking model is appropriate for linking your application with Intel® oneAPI Math Kernel Library (oneMKL) libraries:

  • Static
  • Dynamic

Reason: The link libraries for static and dynamic linking are different. For the list of link libraries for static and dynamic models, linking examples, and other relevant topics, like how to save disk space by creating a custom dynamic library, see Linking Your Application with the Intel® oneAPI Math Kernel Library.

MPI used

Decide which MPI implementation you will use with the Intel® oneAPI Math Kernel Library (oneMKL) cluster software. You are strongly encouraged to use the latest available version of Intel® MPI.

Reason: To link your application with ScaLAPACK and/or Cluster FFT, the libraries corresponding to your particular MPI should be listed on the link line (see Working with the Intel® oneAPI Math Kernel Library Cluster Software).
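
The MPI dependency enters through the BLACS layer, which is why the BLACS library matching your MPI must appear on the link line. A minimal C sketch of the usual ScaLAPACK setup, assuming the BLACS declarations from mkl_blacs.h and an illustrative 1 x N process grid:

    #include "mkl.h"
    #include "mkl_blacs.h"

    int main(void) {
        MKL_INT mype, npes, ictxt, nprow, npcol;
        MKL_INT negone = -1, zero = 0;

        blacs_pinfo_(&mype, &npes);          /* process rank and count */
        nprow = 1; npcol = npes;             /* illustrative process grid */
        blacs_get_(&negone, &zero, &ictxt);  /* default system context */
        blacs_gridinit_(&ictxt, "C", &nprow, &npcol);

        /* ... ScaLAPACK or Cluster FFT calls here ... */

        blacs_gridexit_(&ictxt);
        blacs_exit_(&zero);                  /* 0: also finalize MPI */
        return 0;
    }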

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Notice revision #20201201