Examples of Linking for Clusters
This section provides examples of linking with ScaLAPACK, Cluster FFT, and Cluster Sparse Solver.
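For illustration, a link line for a C program that calls ScaLAPACK might look like the sketch below. It assumes the Intel MPI compiler driver mpiicc, static linking, the LP64 interface, and OpenMP threading; myprog.c is a placeholder for your own source file. The exact set of libraries depends on your operating system, MPI library, interface, and threading layer, so use the Link-line Advisor to generate the line for your configuration:

   mpiicc myprog.c mkl_scalapack_lp64.lib mkl_blacs_intelmpi_lp64.lib mkl_intel_lp64.lib mkl_intel_thread.lib mkl_core.lib libiomp5md.lib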
Note that a binary linked with the Intel® oneAPI Math Kernel Library (oneMKL) cluster function domains runs the same way as any other MPI application (refer to the documentation that comes with your MPI implementation).
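For example, assuming the Intel MPI Library and an executable named myprog.exe (a placeholder name), you would typically launch the linked binary through the MPI process manager just as you would any other MPI program:

   mpiexec -n 4 myprog.exe

Here -n 4 starts four MPI processes; adjust the process count and any host or machine-file options as described in your MPI documentation.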
For further linking examples, see the support website for Intel products at https://www.intel.com/content/www/us/en/developer/get-help/overview.html.