Developer Guide

Developer Guide for Intel® oneAPI Math Kernel Library Linux*

ID 766690
Date 6/24/2024
Public

Building the Intel® Distribution for LINPACK* Benchmark and the Intel® Optimized HPL-AI* Benchmark for a Customized MPI Implementation

To build the binary, follow these steps:

  1. Specify the location of the Intel® oneAPI Math Kernel Library (oneMKL) installation to use by setting MKLROOT.

  2. Set up your MPI environment.

  3. Run the following commands:

    $> export MKL_DIRS=${MKLROOT}/lib
    $> export MKL_LIBS="-L${MKL_DIRS} -Wl,-Bstatic -Wl,--start-group \
       -lmkl_intel_lp64 -lmkl_sequential \
       -lmkl_core -Wl,--end-group -Wl,-Bdynamic"
    $> mpicc -o xhpl -O2 -I${MKLROOT}/include HPL_main.c \
       ${MKLROOT}/share/mkl/interfaces/mklmpi/mklmpi-impl.c \
       libhpl_intel64.a ${MKL_LIBS} -ldl -lpthread -lm
    $> mpicc -o xhpl_gpu -O2 -I${MKLROOT}/include HPL_main.c \
       ${MKLROOT}/share/mkl/interfaces/mklmpi/mklmpi-impl.c \
       libhpl_intel64_gpu.a ${MKL_LIBS} -ldl -lpthread -lm
    $> mpicc -o xhpl-ai -O2 -I${MKLROOT}/include HPL_main.c \
       ${MKLROOT}/share/mkl/interfaces/mklmpi/mklmpi-impl.c \
       libhpl-ai_intel64.a ${MKL_LIBS} -ldl -lpthread -lm
    $> mpicc -o xhpl-ai_gpu -O2 -I${MKLROOT}/include HPL_main.c \
       ${MKLROOT}/share/mkl/interfaces/mklmpi/mklmpi-impl.c \
       libhpl-ai_intel64_gpu.a ${MKL_LIBS} -ldl -lpthread -lm
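
Steps 1 and 2 above depend on your local installation. The following is a minimal sketch of that preparation; the oneAPI and MPI paths shown are assumptions for illustration and must be adjusted to your system. Note that the static oneMKL libraries are wrapped in -Wl,--start-group/-Wl,--end-group so the linker can resolve their mutual dependencies regardless of library order.

```shell
# Step 1: point MKLROOT at the oneMKL installation to use.
# The path below is an assumption; adjust it to your install location.
export MKLROOT=/opt/intel/oneapi/mkl/latest

# Step 2: make your MPI compiler wrapper (mpicc) available, typically by
# sourcing your MPI vendor's environment script (path is an assumption):
#   source /opt/mpi/env.sh
command -v mpicc >/dev/null || echo "mpicc not found: set up your MPI environment first"

# Step 3 (link-line setup): group the static oneMKL libraries so the
# linker resolves their circular dependencies, then return to dynamic linking.
export MKL_DIRS=${MKLROOT}/lib
export MKL_LIBS="-L${MKL_DIRS} -Wl,-Bstatic -Wl,--start-group \
 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -Wl,-Bdynamic"
echo "${MKL_LIBS}"
```

With the environment prepared this way, the mpicc commands above can be run unchanged to produce the xhpl, xhpl_gpu, xhpl-ai, and xhpl-ai_gpu binaries.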