Developer Tools and Software for Intel® Data Center GPU Max Series

Drive breakthrough acceleration for HPC and AI workloads with the combined power of Intel® Data Center GPU Max Series and Intel® Xeon® Scalable processors—powered by oneAPI and Intel® AI developer tools.

  • Tools and Libraries
  • AI Workflows
  • HPC
  • Success Stories

Unleash the Power of Intel Data Center GPU Max Series through Software

Intel Data Center GPU Max Series combined with oneAPI helps developers deliver high-performance, cross-architecture applications and solutions. Intel toolkits provide tools, compilers, libraries, and AI middleware to unleash hardware performance while freeing developers from proprietary environments.

Convenient Software Suites for AI and HPC

Accelerate AI and HPC innovation with Intel's portfolio of compilers, libraries, and tools. Intel provides the software you need to solve the world's most demanding technical challenges.

The Intel® oneAPI Base Toolkit is a starting point for heterogeneous development across CPUs, GPUs, and FPGAs. It is open source, based on open standards, and features an industry-leading C++ compiler that implements SYCL*, an evolution of C++ for heterogeneous computing. A range of performance libraries provide portable acceleration. Enhanced profiling, design assistance, and debug tools are also included.
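As a hedged illustration of what SYCL code looks like (not taken from Intel's samples), here is a minimal vector-add sketch; it compiles with the toolkit's icpx compiler and runs on whatever device the runtime selects, including a Data Center GPU Max device when one is present.

```cpp
// vector_add.cpp -- minimal SYCL sketch; suggested build: icpx -fsycl vector_add.cpp
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    constexpr size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // Let the runtime pick the default device (a GPU when one is available).
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Buffers hand the data to the SYCL runtime for the duration of this scope.
        sycl::buffer<float> buf_a(a), buf_b(b), buf_c(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(buf_a, h, sycl::read_only);
            sycl::accessor B(buf_b, h, sycl::read_only);
            sycl::accessor C(buf_c, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // Buffer destruction copies results back into the host vectors.

    std::cout << "c[0] = " << c[0] << "\n"; // expect 3
    return 0;
}
```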

AI Tools add components for data scientists and AI developers, with optimizations that let popular AI frameworks run training and inference on Intel Data Center GPU Max Series.

The Intel® oneAPI HPC Toolkit delivers what developers need to build, analyze, optimize, and scale HPC applications with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization. Use it to build code with Intel C++ and Fortran compilers, scale with Intel® MPI library, and analyze MPI application behavior.

Get Started

GPU drivers must be installed before the toolkits can be used on Intel Data Center GPU Max Series (a quick device-query check is sketched after this list):

  • Linux
  • Windows: not supported
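Once a driver and toolkit are installed, a quick device query confirms that the runtime can see the GPU. This is a minimal sketch (the sycl-ls utility shipped with the DPC++ compiler reports similar information):

```cpp
// list_gpus.cpp -- sketch that lists GPUs visible to the SYCL runtime.
// Suggested build: icpx -fsycl list_gpus.cpp -o list_gpus
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    auto gpus = sycl::device::get_devices(sycl::info::device_type::gpu);
    if (gpus.empty()) {
        std::cout << "No GPU visible: check the driver installation.\n";
        return 1;
    }
    for (const auto& d : gpus) {
        std::cout << d.get_info<sycl::info::device::name>() << " ("
                  << d.get_info<sycl::info::device::max_compute_units>()
                  << " compute units)\n";
    }
    return 0;
}
```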
Intel® oneAPI Base Toolkit

Use this core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures.

Download the Intel oneAPI Base Toolkit
Accelerate HPC with Intel® oneAPI HPC Toolkit
  • Optimize code and tune performance with Intel Fortran, C++, and SYCL compilers, as well as oneAPI libraries, analysis, and porting tools.
  • oneAPI compilers activate Intel® Xe Matrix Extensions (Intel® XMX) for acceleration.
  • Intel® MPI Library activates Intel® Xe Link for faster direct GPU-to-GPU communications.
Download the Intel oneAPI HPC Toolkit
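As a sketch of the multi-node pattern these tools target, the example below pairs standard MPI calls with SYCL, giving each rank one of the node's visible GPUs; it shows the usual rank-per-device layout rather than Xe Link specifics. Assumed build command: mpiicpx -fsycl (wrapper names vary by Intel MPI release).

```cpp
// mpi_sycl_ranks.cpp -- one MPI rank per GPU; each rank runs a small SYCL reduction.
#include <mpi.h>
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Round-robin ranks onto the GPUs visible on this node.
    auto gpus = sycl::device::get_devices(sycl::info::device_type::gpu);
    if (gpus.empty()) MPI_Abort(MPI_COMM_WORLD, 1);
    sycl::queue q{gpus.at(rank % gpus.size())};

    // Each rank computes a partial sum on its GPU ...
    constexpr size_t n = 1 << 20;
    std::vector<float> data(n, 1.0f);
    float local = 0.0f;
    {
        sycl::buffer<float> buf(data);
        sycl::buffer<float> out(&local, sycl::range<1>(1));
        q.submit([&](sycl::handler& h) {
            sycl::accessor in(buf, h, sycl::read_only);
            auto red = sycl::reduction(out, h, sycl::plus<float>());
            h.parallel_for(sycl::range<1>(n), red,
                           [=](sycl::item<1> it, auto& sum) { sum += in[it.get_id()]; });
        });
    }

    // ... and MPI combines the per-rank results on rank 0.
    float total = 0.0f;
    MPI_Reduce(&local, &total, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("total = %.0f across %d ranks\n", total, size);

    MPI_Finalize();
    return 0;
}
```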

                                                         

Boost Deep Learning Training and Inference with AI Tools
  • Intel® oneAPI Deep Neural Network Library (oneDNN) in the Intel oneAPI Base Toolkit uses Intel XMX to accelerate AI training and inference.
  • Streamline AI visual inferencing and deploy quickly using the Intel® Distribution of OpenVINO™ toolkit.
  • Intel® Extension for PyTorch* and Intel® Extension for TensorFlow* accelerate the use of popular deep learning frameworks for Intel CPUs and GPUs.
Download AI Tools
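To make the OpenVINO item above concrete, here is a hedged sketch of the C++ inference flow targeting the GPU plugin; the model file name and input shape are placeholders for your own network.

```cpp
// infer_gpu.cpp -- minimal OpenVINO inference sketch targeting the GPU device.
#include <openvino/openvino.hpp>
#include <algorithm>
#include <iostream>
#include <memory>

int main() {
    ov::Core core;

    // "model.xml" is a placeholder for an OpenVINO IR (or ONNX) model file.
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");

    // "GPU" selects the Intel GPU plugin; use "CPU" to fall back.
    ov::CompiledModel compiled = core.compile_model(model, "GPU");
    ov::InferRequest request = compiled.create_infer_request();

    // Placeholder input shape for an image classification network.
    ov::Tensor input(ov::element::f32, ov::Shape{1, 3, 224, 224});
    std::fill_n(input.data<float>(), input.get_size(), 0.5f);

    request.set_input_tensor(input);
    request.infer();

    ov::Tensor output = request.get_output_tensor();
    std::cout << "output elements: " << output.get_size() << "\n";
    return 0;
}
```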

                                                         

Create Multiarchitecture Code Efficiently with Code Migration Tools

Migrate CUDA* code to C++ with SYCL for easy portability across multiple vendors’ architectures, including Intel® Data Center GPUs. The Intel® DPC++ Compatibility Tool, based on open source SYCLomatic, automates most of the process. 
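The snippet below is a hand-written sketch of the kind of mapping the tool automates (actual generated code differs in detail and uses the tool's helper headers): a simple CUDA SAXPY kernel, shown in comments, next to an equivalent SYCL version.

```cpp
// saxpy_sycl.cpp -- hand-written SYCL counterpart of a simple CUDA kernel.
//
// Original CUDA, for comparison:
//   __global__ void saxpy(int n, float a, const float* x, float* y) {
//       int i = blockIdx.x * blockDim.x + threadIdx.x;
//       if (i < n) y[i] = a * x[i] + y[i];
//   }
//   // launched as: saxpy<<<blocks, 256>>>(n, a, d_x, d_y);
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    constexpr int n = 1 << 20;
    const float a = 2.0f;
    std::vector<float> x(n, 1.0f), y(n, 1.0f);

    sycl::queue q{sycl::default_selector_v};

    // Unified shared memory plays the role of cudaMalloc/cudaMemcpy.
    float* d_x = sycl::malloc_device<float>(n, q);
    float* d_y = sycl::malloc_device<float>(n, q);
    q.memcpy(d_x, x.data(), n * sizeof(float)).wait();
    q.memcpy(d_y, y.data(), n * sizeof(float)).wait();

    // nd_range mirrors the CUDA grid/block launch configuration.
    const size_t block = 256;
    const size_t grid = (n + block - 1) / block * block;
    q.parallel_for(sycl::nd_range<1>{sycl::range<1>(grid), sycl::range<1>(block)},
                   [=](sycl::nd_item<1> it) {
        int i = it.get_global_id(0);
        if (i < n) d_y[i] = a * d_x[i] + d_y[i];
    }).wait();

    q.memcpy(y.data(), d_y, n * sizeof(float)).wait();
    sycl::free(d_x, q);
    sycl::free(d_y, q);
    return 0; // y[i] == 3.0f for all i
}
```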

Get the Intel® DPC++ Compatibility Tool

   

Open Source SYCLomatic

More Resources

Get Started Guides & Articles

oneAPI GPU Optimization Guide

Compare CPUs, GPUs, and FPGAs for oneAPI Compute Workloads

Intel® VTune™ Profiler

Intel Distribution of OpenVINO Toolkit Get Started Guide

Training, Webinars & Tutorials

Intel oneAPI 2023 Release: Preview the Tools

Tune Applications on CPUs & GPUs with an LLVM*-Based Compiler from Intel

Profile Heterogeneous Computing Performance with Intel VTune Profiler

Migrate CUDA* Code to SYCL

SYCL Origins: A True Standard with a Growing Ecosystem

Quickly Migrate Existing CUDA Code to SYCL

Intel DPC++ Compatibility Tool Get Started Guide

Migrating the MonteCarloMultiGPU Sample from CUDA to SYCL


AI Inference and Training Workflows

Intel Data Center GPU Max Series is ideal for AI inference and training workflows. AI Tools provide optimized extensions for AI frameworks such as TensorFlow* and PyTorch*. Optimize and deploy AI inference with the Intel® Distribution of OpenVINO™ toolkit.  

Get Started

The following Linux containers are part of the Intel® AI Reference Models project. They let you quickly replicate the complete software environment that demonstrates the best-known performance of each target model and dataset combination.

Intel® AI Reference Models

 

PyTorch Model Containers

  • ResNet* 50 Version 1.5 int8 Inference (ImageNet 2012 dataset)
  • ResNet 50 Version 1.5 bfloat16 Training (ImageNet 2012 dataset)
  • BERT Large FP16 Inference (Stanford Question Answering [SQuAD] dataset)
  • BERT Large FP16 Training (MLCommons dataset)

TensorFlow Model Containers

  • ResNet 50 Version 1.5 int8, FP16, and FP32 Inference (ImageNet 2012 dataset)
  • ResNet 50 Version 1.5 bfloat16 Training (ImageNet 2012 dataset)
  • BERT Large FP16, bfloat16, and FP32 Inference (SQuAD dataset)

Additional Video and Coding Tutorials (Not Containerized)

Introduction to Intel Extension for PyTorch*

Intel® Extension for PyTorch* Getting Started Sample

PyTorch GPU Tutorial

Large Language Models (LLM)

Llama v2 Launch with Meta* AI

Intel® Extension for PyTorch* LLM Feature Get Started

Generative AI

Accelerate Stable Diffusion on Intel GPUs with Intel® Extension for OpenXLA*

Intel® Extension for OpenXLA (GitHub*)

Hugging Face Transformers

A broad set of more than 85 Hugging Face transformer training and inference models

Hugging Face Transformer Models

Install and Build Intel XPU Back End for NVIDIA Triton* Inference Server

Run Hugging Face Inductor Triton Benchmarks


High-Performance Computing

Intel Data Center GPU Max Series is built for high-performance computing. The Intel® oneAPI HPC Toolkit is an add-on to the Intel® oneAPI Base Toolkit. Together they deliver what developers need to build, analyze, optimize, and scale HPC applications with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization.
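The toolkit's compilers also support OpenMP offload to Intel GPUs. The sketch below is a hand-written C++ analogue of the Fortran offload tutorials listed further down, using a triad-style loop like the Stream Triad system test; the suggested build flags are typical for icpx but may vary by release.

```cpp
// omp_offload.cpp -- OpenMP target offload sketch of a stream-triad loop.
// Suggested build: icpx -fiopenmp -fopenmp-targets=spir64 omp_offload.cpp
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    constexpr int n = 1 << 22;
    const float scalar = 3.0f;
    std::vector<float> a(n, 0.0f), b(n, 1.0f), c(n, 2.0f);
    float* pa = a.data();
    float* pb = b.data();
    float* pc = c.data();

    // map() clauses move the arrays to the device and bring 'a' back.
    #pragma omp target teams distribute parallel for \
        map(to: pb[0:n], pc[0:n]) map(from: pa[0:n])
    for (int i = 0; i < n; ++i)
        pa[i] = pb[i] + scalar * pc[i];

    std::printf("a[0] = %.1f (expected 7.0), offload devices: %d\n",
                a[0], omp_get_num_devices());
    return 0;
}
```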

Get Started

A wide variety of HPC applications and open source projects are tested on Intel Data Center GPU Max Series. Many are already optimized, and more optimizations are becoming available. Intel's combination of compilers, optimized libraries, porting tools, and contributions to open source projects helps you get started quickly on your scientific discoveries.

The following recipes are a subset of HPC workloads enabled for Intel® Data Center GPU Max Series.

 

System Test
  • Stream Triad (BabelSTREAM)
  • DGEMM
Life Sciences
  • LAMMPS
  • AutoDock-GPU
Financial Services Industry
  • Binomial Options
  • Black-Scholes
  • Monte Carlo
Physics
  • DPEcho
Additional Video and Coding Tutorials
  • Quickly Migrate Existing CUDA Code to SYCL
  • Migrating the MonteCarloMultiGPU Sample from CUDA to SYCL
  • Port Thermal Solver Code
  • Offload Fortran Workloads
  • Offload Fortran Workloads to Intel® GPUs Using OpenMP*
  • Accelerating Lower-Upper (LU) Factorization on Intel GPUs Using Fortran, Intel® oneAPI Math Kernel Library & OpenMP

Success Stories

Intel® oneAPI Tools Help Prepare Code for Aurora


The Aurora supercomputer at Argonne National Laboratory, built on Intel® architecture and the HPE Cray supercomputer platform, will be one of the first exascale systems in the US.

Convergence of HPC, AI & Big Data Analytics in the Exascale Era


"We're seeing encouraging early application performance results on our development systems using Intel Max Series GPU accelerators – applications built with Intel's oneAPI compilers and libraries. For leadership-class computational science, we value the benefits of code portability from multivendor, multiarchitecture programming standards such as SYCL and Python AI frameworks such as PyTorch accelerated by Intel libraries. We look forward to the first exascale scientific discoveries from these technologies on the Aurora system next year."

— Timothy Williams, deputy director, Argonne Computational Science Division

Zuse Institute Berlin (ZIB) Ported easyWAVE Tsunami Simulation Application

Learn how porting from CUDA to oneAPI delivered performance on CPUs, GPUs, and FPGAs.

Chasing Exascale: TACC’s Frontera Uses oneAPI to Accelerate Scientific Insights

Dr. Dan Stanzione of Texas Advanced Computing Center (TACC) discusses advancing HPC to exascale with oneAPI and Intel multiarchitecture to scale workloads on the Frontera supercomputer.

Note: All information provided is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

Intel® Developer Cloud

Intel® Data Center GPU Max Series is Available Now 

Build and optimize oneAPI multiarchitecture applications using the latest optimized Intel oneAPI and AI Tools, and test your workloads across Intel CPUs and GPUs. No hardware installations, software downloads, or configuration necessary. Free for 120 days with extensions possible.

Try Intel Tiber AI Cloud Today
