PyTorch* Prerequisites for Intel® GPUs

ID 827139
Updated 6/25/2025

Overview

This guide provides instructions for installing the prerequisites needed to run and build PyTorch* 2.8 on Intel® GPUs.

If you are compiling and using PyTorch 2.7, see the prerequisite instructions specific to PyTorch 2.7.

Developers who want to run PyTorch deep learning workloads need only install the drivers and pip install the PyTorch wheel binaries. The Intel® Deep Learning Essentials runtime packages are installed automatically during the pip installation of the PyTorch wheel binaries.
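
For reference, a typical pip installation of the XPU-enabled wheels looks like the following. This is a sketch: the index URL shown here is the PyTorch XPU wheel index, but the exact package set and URL for your PyTorch 2.8 release may differ, so confirm them with the official PyTorch installation selector.

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu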

Developers building PyTorch from source code need to install both the driver and Intel Deep Learning Essentials.

If you have access to an Intel GPU, use the instructions in the sections below to choose the appropriate installation method.

You can also access the Intel® Data Center GPU Max Series through Intel® Tiber™ AI Cloud:

  1. Register and sign in to the Intel Tiber AI Cloud.
  2. From the Learning section, select Notebook, and then select the AI with Max Series GPU filter.
  3. Launch the PyTorch on Intel GPU notebook to open it.
  4. Select the latest PyTorch kernel for the notebook.
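
Once the notebook is running (or on any machine where the wheels are installed), a minimal sanity check, assuming a PyTorch build with XPU support, is to confirm from a terminal or notebook cell that the Intel GPU backend is detected:

# prints the GPU name if the XPU backend is usable, otherwise False
python -c "import torch; print(torch.xpu.is_available() and torch.xpu.get_device_name(0))"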

Intel GPU Driver Installation

Intel® Data Center GPUs

The following operating systems are verified for the Intel® Data Center GPU Max Series (Formerly Code Named Ponte Vecchio):

  • Red Hat* Enterprise Linux* 9.2
  • SUSE Linux Enterprise Server* 15 SP5
  • Ubuntu* Server 22.04 (kernel 5.15 LTS or later)

The Intel Data Center GPU Installation Instructions describe how to install software for Intel Data Center GPU Max Series systems, as well as compute and media runtimes and development packages.

Optionally, follow these instructions to verify expected Intel GPU hardware is working.
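
For example, one quick check on a Data Center GPU Max Series system, assuming the xpu-smi tool was installed alongside the driver packages, is to list the devices the driver detects:

# should list each installed Intel Data Center GPU
xpu-smi discovery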

Driver Installation for Client GPUs from Intel

We recommend installing and using the latest drivers to ensure optimal performance and compatibility for your hardware.

Hardware Verified with Windows* 11 and Ubuntu 24.10

  • Intel® Arc™ A-Series Graphics (Formerly Code Named Alchemist)
  • Intel Arc B-Series Graphics (Formerly Code Named Battlemage)
  • Intel® Core™ Ultra Processors with Intel Arc Graphics (Formerly Code Named Meteor Lake - H)
  • Intel Core Ultra Desktop Processors (Series 2) with Intel Arc Graphics (Formerly Code Named Lunar Lake)
  • Intel Core Ultra Mobile Processors (Series 2) with Intel Arc Graphics (Formerly Code Named Arrow Lake - H)

Hardware Verified with Ubuntu 24.04

  • Intel Arc A-Series Graphics (Formerly Code Named Alchemist)
  • Intel Core Ultra Processors with Intel Arc Graphics (Formerly Code Named Meteor Lake - H)
  • Intel Core Ultra Desktop Processors (Series 2) with Intel Arc Graphics (Formerly Code Named Lunar Lake)
  • Intel Core Ultra Processors Series 2 with Intel Arc Graphics (Formerly Code Named Arrow Lake - H)

Note Intel Arc B-Series graphics are not supported with Ubuntu 24.04.

Refer to the client GPU installation instructions for installing the Intel GPU drivers with specific guidance for Ubuntu 24.04 & Ubuntu 24.10. Be sure to follow all the instructions, including selecting the right release stream and adding your user to the render node group.

Optionally, follow these instructions to verify expected Intel GPU hardware is working.
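
For example, one quick check on Ubuntu, assuming the clinfo utility is installed (sudo apt install clinfo), is to confirm that the compute runtime exposes the GPU:

# the output should include an Intel Graphics device
clinfo | grep "Device Name"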

Note Windows support for Intel Arc B-Series graphics is experimental.

To update your WHQL-certified graphics driver to version 32.0.101.6739 or later, download and run the installer by following the instructions in the Intel Iris Xᵉ Graphics for Windows documentation. Include the Level Zero SDK in the installation package to enable torch.compile on Windows and Kineto* when building from source.
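
After the driver and Level Zero SDK are in place, a minimal torch.compile smoke test on Windows might look like the following; this is a sketch that assumes a PyTorch 2.8 XPU wheel and a supported host C++ compiler are already installed:

# compiles a tiny model with torch.compile and runs it on the Intel GPU
python -c "import torch; m = torch.compile(torch.nn.Linear(8, 8).to('xpu')); print(m(torch.randn(4, 8, device='xpu')).shape)"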

Intel® Deep Learning Essentials Installation

To build PyTorch, you need to install Intel Deep Learning Essentials. Confirm that your system meets the requirements and choose the appropriate installation method from the following instructions.

Intel Deep Learning Essentials Installation for Intel Data Center GPUs

Instead of using a package manager, you can install Intel Deep Learning Essentials with an offline installation script. Each script is a self-extracting file that bundles all required components and installs the development package.

Important Use sudo to install files in system directories so they're available globally. Without sudo, files are installed in the current user's home directory.

  1. Make sure the necessary tools are available:
    sudo apt update
    sudo apt install -y wget

  2. Download the Intel Deep Learning Essentials offline installation script and install:
    wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/3435dc45-055e-4f7a-86b1-779931772404/intel-deep-learning-essentials-2025.1.3.7_offline.sh
    sudo sh ./intel-deep-learning-essentials-2025.1.3.7_offline.sh -a --silent --eula accept
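
If the script was run with sudo as shown above, the components are installed under the default system location; a quick way to confirm, assuming the default install root was not changed, is to list it:

# should show component directories such as compiler, ccl, mpi, pti, and umf
ls /opt/intel/oneapi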

Configuration for Intel Deep Learning Essentials

Use these commands to configure environment variables, important folders, and command settings:

source /opt/intel/oneapi/compiler/latest/env/vars.sh
source /opt/intel/oneapi/umf/latest/env/vars.sh
source /opt/intel/oneapi/pti/latest/env/vars.sh
source /opt/intel/oneapi/ccl/latest/env/vars.sh
source /opt/intel/oneapi/mpi/latest/env/vars.sh

Consider adding these commands to your ~/.bashrc file so they run every time you sign in or create a new shell session.
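
After sourcing these scripts, you can optionally confirm that the environment exposes the GPU to SYCL-based tools; for example, sycl-ls, which ships with the compiler component, should list an Intel GPU device:

# the output should include a Level Zero or OpenCL entry for the Intel GPU
sycl-ls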

Intel Deep Learning Essentials Installation for Client GPUs from Intel

 

Note If you are using Ubuntu 24.10, the default GNU Compiler Collection (GCC)*/G++ version 14.2 is too new to compile PyTorch. For a successful compilation, downgrade the GCC/G++ version to 13, as shown in the sketch below.
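
One common way to do this, sketched here on the assumption that the gcc-13 and g++-13 packages are available from the Ubuntu 24.10 repositories, is to install GCC 13 and select it as the default through update-alternatives:

# install GCC/G++ 13 alongside the default toolchain
sudo apt install -y gcc-13 g++-13
# register GCC 13 as an alternative (with g++ following gcc) and make it the default
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 130 --slave /usr/bin/g++ g++ /usr/bin/g++-13
sudo update-alternatives --set gcc /usr/bin/gcc-13
# confirm the active version
gcc --version

Alternatively, you can leave the system default alone and export CC=gcc-13 and CXX=g++-13 in the shell used for the PyTorch build.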

  1. Make sure the necessary tools to add repository access are available:
    sudo apt update
    sudo apt install -y gpg-agent wget gnupg

  2. Download the APT repository’s public key and put it into the /usr/share/keyrings directory:
    # download the key to system keyring
    wget -qO- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | sudo gpg --dearmor -o /usr/share/keyrings/oneapi-archive-keyring.gpg
    # add signed entry to apt sources and configure the APT client to use the Intel repository
    echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" | sudo tee /etc/apt/sources.list.d/oneAPI.list

  3. Update the APT package list and repository index:
    sudo apt update

  4. Use APT to install Intel Deep Learning Essentials:
    sudo apt install intel-deep-learning-essentials-2025.1

Configure Intel Deep Learning Essentials Environment Variables

Note If pip was used to install the PyTorch wheel binaries, the Intel Deep Learning Essentials environment variables are already properly configured. You can skip this configuration step.

Use these commands to configure environment variables, important folders, and command settings:

source /opt/intel/oneapi/compiler/latest/env/vars.sh
source /opt/intel/oneapi/pti/latest/env/vars.sh
source /opt/intel/oneapi/umf/latest/env/vars.sh
source /opt/intel/oneapi/ccl/latest/env/vars.sh
source /opt/intel/oneapi/mpi/latest/env/vars.sh

Consider adding these commands to your ~/.bashrc file so they run every time you sign in or create a new shell session.

 
