How to Use Intel’s New Built-in AI Acceleration Engines
Overview
AI-augmented workloads can be demanding. To meet this challenge, Intel has engineered its latest CPUs and GPUs with built-in AI acceleration engines: Intel® Advanced Matrix Extensions (Intel® AMX) and Intel® Xe Matrix Extensions (Intel® XMX), respectively.
This session shows you what these engines are and how to take advantage of them to accelerate tensor programming and expedite data processing, training, and inference.
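To make the highest-level path concrete, here is a minimal sketch of a matrix multiply written against the oneDNN C++ API, which selects an implementation for the target engine at primitive-creation time. The sizes and the f32 data type are illustrative assumptions rather than details from the session; on supported hardware, bf16 or int8 tensors are what actually engage Intel AMX (CPU) or Intel XMX (GPU) kernels.

```cpp
// Minimal sketch: C = A x B through oneDNN, which dispatches to the best
// kernel available on the engine (including AMX/XMX paths where applicable).
#include <dnnl.hpp>
#include <vector>
#include <iostream>

int main() {
    using namespace dnnl;
    const memory::dim M = 128, K = 256, N = 64;  // illustrative sizes

    engine eng(engine::kind::cpu, 0);  // engine::kind::gpu targets Intel XMX
    stream strm(eng);

    // Row-major descriptors for A (MxK), B (KxN), C (MxN).
    memory::desc a_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc b_md({K, N}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc c_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

    std::vector<float> a(M * K, 1.0f), b(K * N, 0.5f), c(M * N, 0.0f);
    memory a_mem(a_md, eng, a.data());
    memory b_mem(b_md, eng, b.data());
    memory c_mem(c_md, eng, c.data());

    // No engine-specific code appears: oneDNN picks the kernel when the
    // primitive descriptor is created.
    matmul::primitive_desc pd(eng, a_md, b_md, c_md);
    matmul(pd).execute(strm, {{DNNL_ARG_SRC, a_mem},
                              {DNNL_ARG_WEIGHTS, b_mem},
                              {DNNL_ARG_DST, c_mem}});
    strm.wait();

    std::cout << "c[0] = " << c[0] << "\n";  // expect K * 1.0 * 0.5 = 128
    return 0;
}
```

The point of this level of programming is portability: swapping `engine::kind::cpu` for `engine::kind::gpu` retargets the same program from Intel AMX to Intel XMX without touching the math.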
What you will learn:
- How these AI acceleration engines boost tensor programming for applications that target the data center (CPU) as well as gaming, graphics, and video (GPU).
- How to invoke the Intel AMX and Intel XMX instruction sets at different levels of programming abstraction, including compiler intrinsics, the DPC++ joint matrix abstraction, and the Intel® oneAPI Math Kernel Library (oneMKL) and Intel® oneAPI Deep Neural Network Library (oneDNN) APIs (a sketch of the intrinsics level follows this list).
- How different types of developers will benefit from these instruction sets.
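At the opposite end of the spectrum from the library APIs sit the compiler intrinsics. The sketch below multiplies one pair of int8 tiles with the Intel AMX intrinsics from `<immintrin.h>`. The tile shapes, the test data, and the Linux permission request are illustrative assumptions rather than details from the session; it assumes a 4th Gen Intel® Xeon® (or later) CPU, Linux, and a compiler invoked with `-mamx-tile -mamx-int8`.

```cpp
// Minimal sketch: one int8 tile multiply-accumulate via Intel AMX intrinsics.
#include <immintrin.h>
#include <cstdint>
#include <cstring>
#include <cstdio>
#include <unistd.h>
#include <sys/syscall.h>

#define ARCH_REQ_XCOMP_PERM 0x1023  // Linux: request permission for AMX state
#define XFEATURE_XTILEDATA  18

// 64-byte tile configuration consumed by _tile_loadconfig().
struct __attribute__((packed)) tilecfg_t {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];  // bytes per row of each tile
    uint8_t  rows[16];   // rows of each tile
};

int main() {
    // Linux requires a one-time opt-in before AMX tile state may be used.
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
        perror("AMX not available");
        return 1;
    }

    // Configure three tiles: tmm0 = C (16x16 int32), tmm1 = A (16x64 int8),
    // tmm2 = B (16 rows x 64 bytes, i.e. K=64, N=16 in VNNI layout).
    tilecfg_t cfg;
    std::memset(&cfg, 0, sizeof(cfg));
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 16 * sizeof(int32_t);
    cfg.rows[1] = 16; cfg.colsb[1] = 64;
    cfg.rows[2] = 16; cfg.colsb[2] = 64;
    _tile_loadconfig(&cfg);

    int8_t  a[16][64], b[16][64];
    int32_t c[16][16];
    std::memset(a, 1, sizeof(a));  // all ones: layout-independent test data
    std::memset(b, 2, sizeof(b));  // real code must pre-pack B in VNNI layout
    std::memset(c, 0, sizeof(c));

    _tile_loadd(1, a, 64);                      // load A into tmm1
    _tile_loadd(2, b, 64);                      // load B into tmm2
    _tile_loadd(0, c, 16 * sizeof(int32_t));    // load C accumulator
    _tile_dpbssd(0, 1, 2);                      // C += A * B (signed int8)
    _tile_stored(0, c, 16 * sizeof(int32_t));   // write C back to memory
    _tile_release();                            // free the tile state

    std::printf("c[0][0] = %d\n", c[0][0]);     // 64 products of 1*2 = 128
    return 0;
}
```

A single `_tile_dpbssd` performs 16x16x64 int8 multiply-accumulates, which is why this level matters for developers squeezing the most out of inner loops; most developers will instead reach AMX and XMX through oneMKL, oneDNN, or the DPC++ joint matrix abstraction covered in the session.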
Skill level: Intermediate
Featured Software
This session features tools that are available as part of the Intel® oneAPI Base Toolkit or as stand-alone versions.