Intel® Extension for PyTorch*: New Features on CPUs and GPUs
Overview
Intel® Extension for PyTorch* is a plug-in to PyTorch that provides further optimizations and features when run on Intel hardware, including CPUs and GPUs. Few code changes are needed to take full advantage of the available optimizations.
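For example, a typical CPU inference flow adds only an import and a call to ipex.optimize(). The snippet below is a minimal sketch; the exact arguments (such as the dtype) depend on your hardware and the extension release.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # the extension plugs into stock PyTorch

model = models.resnet50(weights=None).eval()
data = torch.rand(1, 3, 224, 224)

# Apply the extension's operator fusion, memory-layout, and mixed-precision optimizations.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(data)
```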
This session focuses on new and experimental features of the latest release of Intel® Extension for PyTorch*, especially those supporting PyTorch 2.0 (released March 2023). These features optimize models and the runtime, letting developers take fuller advantage of Intel hardware capabilities.
Key topics covered:
- New features that provide additional optimizations using the extension’s back end for torch.compile(), codeless optimization, and Fast BERT (see the torch.compile() sketch after this list)
- Graph capture to automatically generate a graph model from a TorchScript trace or TorchDynamo
- HyperTune for quantizing models that have high accuracy loss when quantized using other methods
- New features in the extension’s launch script, including specifying NUMA nodes, using performance cores (P-cores) only, and distributed training options
- For GPUs, distributed training using distributed data parallel (DDP) and Horovod* (see the DDP sketch after this list)
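As a rough illustration of the first two items, the sketch below compiles a model with the extension registered as the torch.compile() back end. It assumes a release in which importing the extension registers an "ipex" back end and in which ipex.optimize() accepts an experimental graph_mode flag for graph capture.

```python
import torch
import intel_extension_for_pytorch as ipex  # importing registers the "ipex" compile back end

# Any nn.Module works; a tiny stand-in model keeps the sketch self-contained.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
example_input = torch.rand(8, 64)

# Optional graph capture: let the extension pick a graph path (TorchScript trace or TorchDynamo).
# model = ipex.optimize(model, graph_mode=True)
model = ipex.optimize(model)

# Route torch.compile() through the extension's back end instead of the default one.
compiled = torch.compile(model, backend="ipex")

with torch.no_grad():
    out = compiled(example_input)
```

For GPU distributed training with DDP, the usual pattern pairs PyTorch's DistributedDataParallel with Intel's oneCCL bindings. The sketch below assumes the oneccl_bindings_for_pytorch package, an "xpu" device, and rank/world-size environment variables set by your launcher; the variable names used here are illustrative.

```python
import os
import torch
import torch.distributed as dist
import intel_extension_for_pytorch as ipex   # enables the "xpu" device in PyTorch
import oneccl_bindings_for_pytorch           # registers the "ccl" communication backend

# The launcher (e.g. mpirun) normally provides rank and world size; defaults keep the sketch runnable.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
rank = int(os.environ.get("PMI_RANK", "0"))          # assumption: launcher exports PMI_RANK/PMI_SIZE
world_size = int(os.environ.get("PMI_SIZE", "1"))

dist.init_process_group(backend="ccl", rank=rank, world_size=world_size)

device = torch.device(f"xpu:{rank}")
model = torch.nn.Linear(64, 64).to(device)
model = torch.nn.parallel.DistributedDataParallel(model)  # device inferred from the module's parameters

# One toy training step to show gradient synchronization across ranks.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss = model(torch.rand(8, 64, device=device)).sum()
loss.backward()
optimizer.step()

dist.destroy_process_group()
```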
The session includes a demo.
Skill level: Intermediate
Featured Software
Intel is one of the largest contributors to PyTorch*, providing regular upstream optimizations to the PyTorch deep learning framework that deliver superior performance on Intel® architectures. AI Tools includes the latest binary version of PyTorch tested to work with the rest of the kit, along with Intel® Extension for PyTorch*, which adds the newest Intel optimizations and usability features.