Optimize the Latest Deep Learning Workloads Using PyTorch* Optimized by Intel
Overview
For developers focused on deep learning use cases such as predictive modeling, recommendation systems, natural language processing, object detection, and many more, it is paramount to extract the most workload performance by using newer technologies like bfloat16, graph-level optimizations, and custom kernels.
This session focuses on the performance and ease-of-use benefits that Intel® Extension for PyTorch* and Intel® oneAPI Deep Neural Network Library (oneDNN) bring to deep learning training and inference of large models such as the deep learning recommendation model (DLRM).
Join senior deep learning engineer Eikan Wang to learn more about the following topics:
- Using oneDNN to deliver optimal training and inference workload performance for the PyTorch* framework on Intel hardware
- oneDNN-based graph optimizations and custom kernel implementations to boost performance of DLRM modules in PyTorch
- How the extension library for PyTorch can be dynamically loaded as a Python module, offering a more modular design for the custom compound operations that are critical to accelerating key deep learning modules, for example the interaction module from DLRM (see the sketch after this list)
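Below is a minimal sketch of what that dynamic loading looks like in practice. The package name `intel_extension_for_pytorch`, the `ipex.optimize()` entry point, and the toy model are assumptions based on recent releases of the extension, not the exact code from the session.

```python
import torch
import torch.nn as nn

# Importing the extension as an ordinary Python module registers its
# optimized kernels and graph passes with PyTorch at import time.
# The package name reflects recent releases and may differ by version.
import intel_extension_for_pytorch as ipex

# A small stand-in model; a real DLRM combines embedding tables, MLPs,
# and the feature-interaction module mentioned above.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 1)).eval()

# ipex.optimize() applies operator fusion, memory-layout changes, and
# bfloat16 weight preparation to the model.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under bfloat16 autocast on the CPU.
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model(torch.randn(32, 128))
print(out.shape)
```

Because the extension is loaded as a regular module, its optimized compound operations become available without rebuilding PyTorch itself.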
Get the Software
- Get the Intel Extension for PyTorch as part of the Intel® AI Analytics Toolkit.
- Get oneDNN as part of the Intel® oneAPI Base Toolkit. (It is also available as a stand-alone download.)
Other Resources
- Sign up for an Intel® Developer Cloud for oneAPI account—a free development sandbox with access to the latest Intel hardware and oneAPI software.
- Explore oneAPI, including developer opportunities and benefits.
- Subscribe to Code Together—an interview series that explores the challenges at the forefront of cross-architecture development. Each bi-weekly episode features industry VIPs who are blazing new trails through today’s data-centric world. Available wherever you get your podcasts.
Eikan Wang
Senior deep learning engineer, Intel Corporation
Eikan is part of the Graphics and Software group, where he is the technical lead on PyTorch framework optimization for Intel architecture. He is also one of the major contributors to low-precision inference solutions on Intel architecture. He has four years of full-stack AI experience, spanning AI applications as well as framework, library, and compiler optimizations. Eikan received his bachelor’s degree in mathematics from Huaiyin Institute of Technology.