Accelerate AI Workloads with Intel® Optimization for PyTorch*
Overview
Deep learning is enormously popular in scientific computing, and industries rely on deep learning algorithms to solve complex, computationally demanding problems in real time. Find out how tools optimized with Intel® oneAPI can boost the training and inference performance of large models.
This workshop introduces Intel® Extension for PyTorch* (part of Intel® Optimization for PyTorch*), which extends stock PyTorch with optimizations for an extra performance boost on Intel architecture.
While most of these optimizations will eventually be included in stock PyTorch*, the extension's purpose is to deliver timely, up-to-date features and optimizations for PyTorch on Intel hardware.
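As a minimal sketch of how the extension is typically applied (assuming it has already been installed, for example with `pip install intel_extension_for_pytorch`, and using a hypothetical `SimpleCNN` as a stand-in for your own network), inference only needs a couple of extra lines around an existing model:

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # Intel Extension for PyTorch

# A small stand-in CNN; in practice this would be your own model.
class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, 10)

    def forward(self, x):
        x = self.pool(self.relu(self.conv(x)))
        return self.fc(x.flatten(1))

model = SimpleCNN().eval()
data = torch.randn(1, 3, 224, 224)

# Apply Intel Extension for PyTorch optimizations to the model for inference.
model = ipex.optimize(model)

with torch.no_grad():
    output = model(data)
```

The same `ipex.optimize` call also accepts an optimizer argument for training workloads; the hands-on portion of the workshop walks through a comparable CNN example on Intel Developer Cloud.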
This workshop provides:
- An introduction to the framework, including the optimizations Intel contributes to PyTorch
- Hands-on practice on Intel® Developer Cloud with a demonstration of using Intel Extension for PyTorch to accelerate a convolutional neural network (CNN) model on Intel platforms
- Instructions for installing Intel Extension for PyTorch and applying special techniques to achieve an AI performance boost
Highlights
00:00 Introductions
1:53 Overview of the workshop
3:38 Intel Optimization for PyTorch
8:17 An overview of the AI Tools
14:21 The main ways to optimize PyTorch
15:55 An overview of PyTorch
21:06 What components does operator optimization include?
21:30 Vectorization
24:37 Parallelization
25:55 Memory layout
29:00 Low-precision optimization
30:12 How to fully take advantage of bfloat16 features (see the bfloat16 sketch after this list)
32:26 Low-precision optimization with int8
33:27 Quantization features to take advantage of
34:34 Quantization workflow and API (see the quantization sketch after this list)
36:54 Introduction to graph optimization
37:00 Operator fusion
38:08 FP32 and bfloat16 fusion patterns
38:56 Constant folding
40:00 Introduction to runtime extension and motivation
41:40 An example of PyTorch code that uses the runtime extension API
44:26 How a multistream module can reduce core traffic
45:03 Case study
45:23 Introduction to a BERT model
46:22 Performance boost results for Intel Optimization for PyTorch
47:52 How to install Intel Extension for PyTorch
52:00 Hands-on session
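To illustrate the bfloat16 topics highlighted above, here is a short sketch (assuming a CPU with bfloat16 support, such as Intel AVX-512 BF16 or Intel AMX, and reusing the stand-in `SimpleCNN` from the earlier sketch): `ipex.optimize` prepares the model for bfloat16, and PyTorch's CPU autocast runs supported operators in the lower precision:

```python
import torch
import intel_extension_for_pytorch as ipex

model = SimpleCNN().eval()           # stand-in model from the earlier sketch
data = torch.randn(1, 3, 224, 224)

# Prepare weights and operators for bfloat16 inference.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Autocast runs supported operators in bfloat16 and falls back to float32 elsewhere.
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(data)
```

The benefit of bfloat16 depends on hardware support for the relevant instruction sets, which the workshop covers in the low-precision segment.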
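For the int8 quantization workflow and the graph optimizations covered above, the extension documents a prepare, calibrate, convert flow. The following is a hedged sketch of that flow (the calibration data here is random placeholder data, and `SimpleCNN` is again the stand-in model from the first sketch); the final TorchScript tracing and freezing step is where graph optimizations such as operator fusion and constant folding are applied:

```python
import torch
import intel_extension_for_pytorch as ipex
from intel_extension_for_pytorch.quantization import prepare, convert

model = SimpleCNN().eval()                    # stand-in model from the earlier sketch
example_input = torch.randn(1, 3, 224, 224)
calibration_batches = [torch.randn(1, 3, 224, 224) for _ in range(10)]  # placeholder data

# 1. Attach a static quantization configuration and insert observers.
qconfig = ipex.quantization.default_static_qconfig
prepared_model = prepare(model, qconfig, example_inputs=example_input, inplace=False)

# 2. Calibrate the observers on representative data.
with torch.no_grad():
    for batch in calibration_batches:
        prepared_model(batch)

# 3. Convert to int8, then trace and freeze so that graph-level optimizations
#    such as operator fusion and constant folding can be applied.
quantized_model = convert(prepared_model)
with torch.no_grad():
    traced_model = torch.jit.trace(quantized_model, example_input)
    traced_model = torch.jit.freeze(traced_model)
    output = traced_model(example_input)
```

API details vary between releases of the extension, so treat this as a sketch of the workflow rather than a drop-in recipe.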