Get Started with Intel® oneAPI Deep Neural Network Library

ID 767262
Date 3/31/2025
Public


The Intel® oneAPI Deep Neural Network Library (oneDNN) is a performance library for deep learning applications. Deep learning application and framework developers can use oneDNN to improve application performance on Intel CPUs and GPUs.

The library includes basic building blocks for neural networks optimized for Intel® Architecture Processors and Intel® Processor Graphics.

The oneDNN library provides a SYCL* API for Intel CPUs and GPUs.
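As a quick sanity check that oneDNN can see your devices, the following minimal C++ program queries how many CPU and GPU engines the library detects. This is a sketch that assumes the oneDNN headers and library are on your include and link paths (for example, after sourcing setvars.sh):

```cpp
#include <iostream>
#include "dnnl.hpp"  // oneDNN C++ API

int main() {
    // Count the engines of each kind that oneDNN can use on this system.
    // A GPU count of 0 means no supported Intel GPU was detected.
    auto cpu_count = dnnl::engine::get_count(dnnl::engine::kind::cpu);
    auto gpu_count = dnnl::engine::get_count(dnnl::engine::kind::gpu);
    std::cout << "CPU engines: " << cpu_count << "\n"
              << "GPU engines: " << gpu_count << "\n";
    return 0;
}
```

Compile it the same way as the getting_started sample below, for example with icpx -fsycl and -ldnnl on Linux.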

Before You Begin

Before you get started with the library, make sure your system meets the oneDNN system requirements and that the Intel oneAPI Base Toolkit, which includes oneDNN, is installed.

Build a Sample Application

Use the following sample projects to become familiar with the Intel® oneAPI Deep Neural Network Library:

Sample Name | Description
getting_started | This C++ API example demonstrates the basics of the oneDNN programming model.
sycl_interop_buffer and sycl_interop_usm | These C++ API examples demonstrate programming with the SYCL API in oneDNN.

To understand the basics of the oneDNN programming model, you can quickly build the getting_started sample using the Intel oneAPI DPC++/C++ compiler.

Linux

  1. Set up the environment for Intel oneAPI development:
    source /opt/intel/oneapi/setvars.sh

    If you installed oneAPI in a non-default location, use the following command:

    source ${ONEAPI_ROOT}/setvars.sh

    where ${ONEAPI_ROOT} points to your installation location.

  2. Create a working directory.
  3. Copy oneDNN example programs from the oneAPI installation folder to the current working directory:
    cp -r $DNNLROOT/share/doc/dnnl/examples .

    where $DNNLROOT points to the oneDNN subdirectory of the oneAPI installation folder (the variable is set by setvars.sh).

  4. Navigate to the examples directory:
    cd examples
  5. Compile the getting_started.cpp file using the Intel oneAPI compiler and link the getting_started.cpp file with the oneDNN library:
    icpx -fsycl getting_started.cpp -o getting_started -ldnnl

    where

    • -fsycl: Enables SYCL support in the compiler.

    • -o getting_started: Specifies the output executable name.

    • -ldnnl: Links the program with the oneDNN library.

  6. Run the compiled program targeting CPU as the execution device.

    ./getting_started cpu

  7. Run the compiled program targeting GPU as the execution device.
    NOTE:
    Your system must include an Intel GPU and must be configured for GPU computation as specified in the oneAPI getting started guide.

    ./getting_started gpu

Windows

  1. Set up the environment for Intel oneAPI development:

    C:\Program Files (x86)\Intel\oneAPI\setvars.bat

    If you installed oneAPI in a non-default location, use the following command:

    %ONEAPI_ROOT%\setvars.bat

    where ONEAPI_ROOT is your installation folder.

  2. Create a working directory.
  3. Copy the example programs from the oneAPI installation directory to your current working directory:

    xcopy /E "%DNNLROOT%\share\doc\dnnl\examples" examples

    where %DNNLROOT% points to the oneDNN subdirectory of the oneAPI installation folder (the variable is set by setvars.bat).

  4. Navigate to the examples folder inside your current working directory:

    cd examples

  5. Compile the getting_started.cpp file using the Intel oneAPI compiler and link the getting_started.cpp file with the oneDNN library:

    icx /EHa -fsycl getting_started.cpp dnnl.lib

    where

    • /EHa: Enables C++ exception handling in the compiler.

    • -fsycl: Enables SYCL support in the compiler.

    • dnnl.lib: Links the program with the oneDNN import library.

  6. Run the compiled program targeting CPU as the execution device:
    getting_started.exe cpu
  7. Run the compiled program targeting GPU as the execution device:
    NOTE:
    Your system must include an Intel GPU and must be configured for GPU computation as specified in the oneAPI getting started guide.
    getting_started.exe gpu

See Programming Model to learn the typical workflow of the oneDNN library, including primitives, engines, streams, and memory objects.
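That workflow can be sketched in a few lines. The example below is a minimal sketch against the oneDNN v3.x C++ API (it assumes dnnl.hpp is on your include path and that you link with -ldnnl): it creates an engine and a stream, allocates a memory object, and runs a ReLU primitive in place:

```cpp
#include "dnnl.hpp"  // oneDNN C++ API

using namespace dnnl;

int main() {
    // Engine: abstraction of a compute device (CPU with index 0 here;
    // use engine::kind::gpu to target an Intel GPU instead).
    engine eng(engine::kind::cpu, 0);

    // Stream: an execution queue attached to the engine.
    stream s(eng);

    // Memory object: metadata (shape, data type, layout) plus a buffer
    // for a 1x3x4x4 f32 tensor in NCHW layout.
    memory::desc md({1, 3, 4, 4}, memory::data_type::f32,
                    memory::format_tag::nchw);
    memory mem(md, eng);

    // Primitive: a ReLU operation created from a primitive descriptor.
    eltwise_forward::primitive_desc pd(
        eng, prop_kind::forward_inference, algorithm::eltwise_relu,
        md, md, /*alpha=*/0.f);
    eltwise_forward relu(pd);

    // Execute the primitive in place on the stream and wait for it.
    relu.execute(s, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
    s.wait();
    return 0;
}
```

The same code runs on a GPU by changing only the engine kind; the stream, memory objects, and primitives follow the engine they were created with.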

NOTE:
You may also compile and link with the pkg-config tool.
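For example, on Linux the compile step can be written with pkg-config instead of hard-coded flags. This sketch assumes oneDNN installs a dnnl.pc module and that setvars.sh has put it on PKG_CONFIG_PATH:

```shell
# Let pkg-config supply the oneDNN include and link flags (module "dnnl").
icpx -fsycl getting_started.cpp -o getting_started \
    $(pkg-config --cflags --libs dnnl)
```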

Notices and Disclaimers

Intel technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.