Instruction Set Specific Dispatching on Intel® Architectures
By default, Intel® oneAPI Math Kernel Library queries your Intel® processor and dispatches to the code path for the optimal instruction set architecture (ISA) that the processor supports. The MKL_ENABLE_INSTRUCTIONS environment variable or the mkl_enable_instructions support function enables you to dispatch to an ISA-specific code path of your choice. For example, you can run the Intel® Advanced Vector Extensions (Intel® AVX) code path on an Intel processor based on Intel® Advanced Vector Extensions 2 (Intel® AVX2), or you can run the Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2) code path on an Intel AVX-enabled Intel processor. This feature is not available on non-Intel processors.
In some cases Intel® oneAPI Math Kernel Library also provides support for upcoming architectures ahead of hardware availability, but the library does not automatically dispatch the code path specific to an upcoming ISA by default. If for your exploratory work you need to enable an ISA for an Intel processor that is not yet released, or if you are working in a simulated environment, you can use the MKL_ENABLE_INSTRUCTIONS environment variable or the mkl_enable_instructions support function.
The following table lists possible values of MKL_ENABLE_INSTRUCTIONS alongside the corresponding ISA supported by a given processor. If the requested ISA is not supported on the particular Intel processor, MKL_ENABLE_INSTRUCTIONS dispatches to the default ISA. For example, if you request to run the Intel AVX-512 code path on a processor based on Intel AVX2, Intel® oneAPI Math Kernel Library runs the Intel AVX2 code path. The table also indicates whether the ISA is dispatched by default on a processor that supports it.
| Value of MKL_ENABLE_INSTRUCTIONS | ISA | Dispatched by Default |
|---|---|---|
| AVX512 | Intel® Advanced Vector Extensions 512 (Intel® AVX-512) for systems based on Intel® Xeon® processors | Yes |
| AVX512_E1 | Intel® Advanced Vector Extensions 512 (Intel® AVX-512) with support for Vector Neural Network Instructions | Yes |
| AVX512_E2 | Intel® Advanced Vector Extensions 512 (Intel® AVX-512) enabled processors | Yes |
| AVX512_E3 | Intel® Advanced Vector Extensions 512 (Intel® AVX-512) with support for Vector Neural Network Instructions with bfloat16 (BF16) | Yes |
| AVX512_E4 | Intel® Advanced Vector Extensions 512 (Intel® AVX-512) with Intel® Deep Learning Boost (Intel® DL Boost) and bfloat16 support, and Intel® Advanced Matrix Extensions (Intel® AMX) with bfloat16 and 8-bit integer support | Yes |
| AVX2 | Intel® Advanced Vector Extensions 2 (Intel® AVX2) | Yes |
| AVX2_E1 | Intel® Advanced Vector Extensions 2 (Intel® AVX2) with support for Intel® Deep Learning Boost (Intel® DL Boost) | Yes |
| AVX | Intel® Advanced Vector Extensions (Intel® AVX) | Yes |
| SSE4_2 | Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2) | Yes |
For more details about the mkl_enable_instructions function, including the argument values, see the Intel® oneAPI Math Kernel Library Developer Reference.
For example:
- To turn on automatic CPU-based dispatching of Intel AVX-512 with support of Intel DL Boost, bfloat16, Intel AMX with bfloat16 and 8-bit integer, and FP16 instructions, do one of the following:
  - Call mkl_enable_instructions(MKL_ENABLE_AVX512_E4)
  - Set the environment variable:
    - For the bash shell: export MKL_ENABLE_INSTRUCTIONS=AVX512_E4
    - For a C shell (csh or tcsh): setenv MKL_ENABLE_INSTRUCTIONS AVX512_E4
- To configure the library not to dispatch architectures more recent than Intel AVX2, do one of the following (a complete call sequence is sketched after this list):
  - Call mkl_enable_instructions(MKL_ENABLE_AVX2)
  - Set the environment variable:
    - For the bash shell: export MKL_ENABLE_INSTRUCTIONS=AVX2
    - For a C shell (csh or tcsh): setenv MKL_ENABLE_INSTRUCTIONS AVX2
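The following is a minimal C sketch of the function-call approach. It assumes the standard mkl.h header and linking against the library; consult the Developer Reference for the exact behavior of mkl_enable_instructions (it must be called before any other Intel® oneAPI Math Kernel Library function, and it returns 1 if the request is accepted and 0 otherwise).

```c
#include <stdio.h>
#include <mkl.h>

int main(void) {
    /* Request a code path no more recent than Intel AVX2.
       This must be the first Intel oneAPI Math Kernel Library call;
       the return value indicates whether the request was accepted. */
    int accepted = mkl_enable_instructions(MKL_ENABLE_AVX2);
    printf("Intel AVX2 code path request %s\n",
           accepted ? "accepted" : "not accepted");

    /* Subsequent library calls, such as this BLAS dot product,
       dispatch according to the request (or to the default ISA
       if the request could not be honored). */
    double x[] = {1.0, 2.0, 3.0, 4.0};
    double y[] = {4.0, 3.0, 2.0, 1.0};
    printf("ddot = %f\n", cblas_ddot(4, x, 1, y, 1));
    return 0;
}
```

Compile and link this program against Intel® oneAPI Math Kernel Library as you would any other application that calls BLAS routines.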
Settings specified by the mkl_enable_instructions function take precedence over those specified by the MKL_ENABLE_INSTRUCTIONS environment variable.
Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Notice revision #20201201