Introduction to Getting Faster PyTorch* Programs with TorchDynamo
Overview
Learn the principles and techniques for making your PyTorch* programs faster and more usable with the framework’s easy-to-use, just-in-time (JIT) compiler, TorchDynamo.
Introduced in PyTorch 2.0, TorchDynamo transforms a general Python* program into a computational graph. It works by hooking into CPython’s frame-evaluation API to capture sequences of PyTorch operations as graphs, which are then handed to a back-end compiler for optimization.
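As a minimal sketch of that flow, `torch.compile` is the user-facing entry point that drives TorchDynamo; the function below (`toy` is an illustrative name, not from the session) is captured as a graph even though it contains ordinary Python control flow:

```python
import torch

# A plain Python function with data-dependent control flow.
def toy(x):
    if x.sum() > 0:
        return torch.sin(x)
    return torch.cos(x)

# torch.compile (PyTorch 2.0+) invokes TorchDynamo; the built-in "eager"
# backend captures graphs but runs them without further compilation,
# which is handy for verifying correctness before enabling an
# optimizing backend such as the default "inductor".
compiled_toy = torch.compile(toy, backend="eager")

x = torch.randn(8)
same = torch.allclose(toy(x), compiled_toy(x))  # compiled output matches eager
```

Because TorchDynamo falls back to regular Python where it cannot trace, the compiled function behaves like the original on any input.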
This session presents this JIT graph compiler and includes:
- An overview of its design and use, including flexible support for graph acquisition with better performance and usability.
- The principles behind the novel TorchDynamo technique in PyTorch 2.0, including how to debug it.
- How to use Intel® Extension for PyTorch* with TorchDynamo.
- Intel’s support for and contributions to the tools.
This session includes demos.
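One way to see the graph acquisition the session covers is to pass TorchDynamo a custom backend, which is simply a callable receiving each captured `torch.fx.GraphModule`. This is a hedged sketch (the `inspect_backend` and `fn` names are illustrative):

```python
import torch

captured = []

def inspect_backend(gm, example_inputs):
    # A TorchDynamo backend receives the captured torch.fx.GraphModule
    # and must return a callable. Recording it lets us inspect what
    # Dynamo acquired; returning gm.forward runs it without optimization.
    captured.append(gm)
    return gm.forward

def fn(x):
    y = torch.relu(x)
    if y.sum() > 0:  # data-dependent branch: Dynamo breaks the graph here
        y = y * 2
    return y

compiled_fn = torch.compile(fn, backend=inspect_backend)
out = compiled_fn(torch.ones(4))
# `captured` now holds the FX graph(s) Dynamo carved out of `fn`;
# each can be printed with gm.graph.print_tabular() for debugging.
```

Counting and printing the captured graphs is a common first step when debugging graph breaks.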
Skill level: Intermediate
More Resources
Intel is one of the largest contributors to PyTorch*, providing regular upstream optimizations that deliver superior performance on Intel® architectures. AI Tools includes the latest binary version of PyTorch, tested to work with the rest of the kit, along with Intel® Extension for PyTorch*, which adds the newest Intel optimizations and usability features.
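A hedged sketch of how the extension can plug into this flow, based on the Intel® Extension for PyTorch* documentation (`ipex.optimize` and the registered `"ipex"` `torch.compile` backend); the code falls back to the built-in `"eager"` backend when the extension is not installed:

```python
import torch

try:
    # Importing the extension registers the "ipex" torch.compile backend.
    import intel_extension_for_pytorch as ipex
    HAVE_IPEX = True
except ImportError:
    HAVE_IPEX = False

model = torch.nn.Linear(16, 4).eval()

if HAVE_IPEX:
    # ipex.optimize applies Intel-specific operator and memory-layout
    # optimizations before graph capture.
    model = ipex.optimize(model)
    compiled = torch.compile(model, backend="ipex")
else:
    # Fallback sketch without the extension: Dynamo capture, eager execution.
    compiled = torch.compile(model, backend="eager")

with torch.no_grad():
    out = compiled(torch.randn(2, 16))  # shape (2, 4)
```

The same `torch.compile` call site works with or without the extension, which is what makes the backend mechanism convenient for trying Intel’s optimizations.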