Get Started with Intel® oneAPI Deep Neural Network Library
The Intel® oneAPI Deep Neural Network Library (oneDNN) is a performance library for deep learning applications. Deep learning application and framework developers can use oneDNN to improve application performance on Intel CPUs and GPUs.
The library includes basic building blocks for neural networks optimized for Intel® Architecture Processors and Intel® Processor Graphics.
The oneDNN library provides a SYCL* API for Intel CPUs and GPUs.
Before You Begin
Before you get started with the library, see the following:
- Refer to the oneDNN System Requirements to make sure you have the necessary hardware and software components.
- If you installed the oneDNN standalone package, make sure the Intel® oneAPI DPC++/C++ Compiler is also installed. See Get Started with the Intel® oneAPI DPC++/C++ Compiler for compiler requirements.
Build a Sample Application
Use the following sample projects to become familiar with the Intel® oneAPI Deep Neural Network Library:
| Sample Name | Description |
| --- | --- |
| getting_started | This C++ API example demonstrates the basics of the oneDNN programming model. |
| sycl_interop_buffer and sycl_interop_usm | These C++ API examples demonstrate programming with the SYCL API in oneDNN, using SYCL buffers and unified shared memory (USM), respectively. |
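As a hedged sketch of the interoperability the sycl_interop samples cover (not the samples' verbatim code), oneDNN can adopt an application-owned SYCL queue instead of creating its own, via the dnnl::sycl_interop helpers:

```cpp
// Sketch only: requires a SYCL-enabled oneDNN build and the interop header.
// Compile with, e.g.: icpx -fsycl sycl_interop_sketch.cpp -ldnnl
#include <sycl/sycl.hpp>
#include <dnnl.hpp>
#include <dnnl_sycl.hpp>

int main() {
    // An application-owned SYCL queue; here targeting the CPU.
    sycl::queue q{sycl::cpu_selector_v};

    // Wrap the queue's device and context in a oneDNN engine, then share the
    // queue itself as the oneDNN stream so both APIs submit work to it.
    auto eng  = dnnl::sycl_interop::make_engine(q.get_device(), q.get_context());
    auto strm = dnnl::sycl_interop::make_stream(eng, q);

    strm.wait(); // synchronizes like any other oneDNN stream
    return 0;
}
```

Sharing one queue this way lets oneDNN primitives and the application's own SYCL kernels execute in the same submission order without extra synchronization.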
To understand the basics of the oneDNN programming model, you can quickly build the getting_started sample using the Intel oneAPI DPC++/C++ compiler.
Linux
- Set up the environment for Intel oneAPI development:
source /opt/intel/oneapi/setvars.sh
If you installed oneAPI in a non-default location, use the following command:
source ${ONEAPI_ROOT}/setvars.sh
where ${ONEAPI_ROOT} points to your installation location.
- Create a working directory.
- Copy oneDNN example programs from the oneAPI installation folder to the current working directory:
cp -r $DNNLROOT/share/doc/dnnl/examples .
where $DNNLROOT points to the oneDNN subdirectory under your oneAPI installation folder; it is set automatically by setvars.sh.
- Navigate to the examples directory:
cd examples
- Compile the getting_started.cpp file using the Intel oneAPI compiler and link the getting_started.cpp file with the oneDNN library:
icpx -fsycl getting_started.cpp -o getting_started -ldnnl
where
-fsycl: Enables SYCL support in the compiler.
-o getting_started: Specifies the output executable name.
-ldnnl: Links the executable against the oneDNN library.
- Run the compiled program targeting CPU as the execution device.
./getting_started cpu
- Run the compiled program targeting GPU as the execution device.
NOTE: Your system must include an Intel GPU and must be configured for GPU computation as specified in the oneAPI getting started guide.
./getting_started gpu
Windows
- Set up the environment for Intel oneAPI development:
C:\Program Files (x86)\Intel\oneAPI\setvars.bat
If you installed oneAPI in a non-default location, use the following command:
%ONEAPI_ROOT%\setvars.bat
where ONEAPI_ROOT is your installation folder.
- Create a working directory.
- Copy the example programs from the oneAPI installation directory to your current working directory:
xcopy /E "%DNNLROOT%\share\doc\dnnl\examples" examples
where %DNNLROOT% points to the oneDNN subdirectory under your oneAPI installation folder; it is set automatically by setvars.bat.
- Navigate to the examples folder inside your current working directory:
cd examples
- Compile the getting_started.cpp file using the Intel oneAPI compiler and link the getting_started.cpp file with the oneDNN library:
icx /EHa -fsycl getting_started.cpp dnnl.lib
where
/EHa: Enables C++ exception handling (including structured exceptions).
-fsycl: Enables SYCL support in the compiler.
dnnl.lib: Links the executable against the oneDNN import library.
- Run the compiled program targeting CPU as the execution device:
getting_started.exe cpu
- Run the compiled program targeting GPU as the execution device:
NOTE: Your system must include an Intel GPU and must be configured for GPU computation as specified in the oneAPI getting started guide.
getting_started.exe gpu
See Programming Model to learn the typical workflow of the oneDNN library including Primitives, Engines, Streams, and Memory Objects.
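The getting_started flow touches each of those concepts in order. A minimal hedged sketch (assuming the oneDNN v3.x C++ API, not the sample's exact code) looks like:

```cpp
// Sketch of the oneDNN programming model: engine -> stream -> memory object
// -> primitive. Requires the oneDNN library; compile with, e.g.:
//   icpx -fsycl model_sketch.cpp -ldnnl
#include <dnnl.hpp>
#include <vector>

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0); // Engine: a computational device
    stream strm(eng);                 // Stream: an execution queue on it

    // Memory object: a 2x3 f32 tensor in row-major ("ab") layout,
    // backed by application-owned data.
    memory::desc md({2, 3}, memory::data_type::f32, memory::format_tag::ab);
    std::vector<float> data = {-1.f, 2.f, -3.f, 4.f, -5.f, 6.f};
    memory mem(md, eng, data.data());

    // Primitive: an in-place forward ReLU over that memory object.
    eltwise_forward::primitive_desc pd(eng, prop_kind::forward_inference,
                                       algorithm::eltwise_relu, md, md, 0.f);
    eltwise_forward relu(pd);
    relu.execute(strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
    strm.wait(); // negatives in data are now clamped to zero
    return 0;
}
```

Swapping engine::kind::cpu for engine::kind::gpu is all that is needed to retarget the same code to an Intel GPU, which is the portability the cpu/gpu command-line argument of the samples demonstrates.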
Additional Information
Notices and Disclaimers
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.