AI Analytics Part 2: Enhance Deep Learning Workloads on 3rd Generation Intel® Xeon® Scalable Processors
Overview
This webinar examines AI Tools from the perspective of deep learning workloads, covering the performance benefits and features that can enhance deep learning training, inference, and end-to-end workflows.
Join software engineer Louie Tsai to get insights into the latest optimizations in Intel® Optimization for TensorFlow* and Intel® Optimization for PyTorch*, which take advantage of new acceleration instructions, including Intel® Deep Learning Boost (Intel® DL Boost) and bfloat16 support, on 3rd Generation Intel® Xeon® Scalable processors.
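As background on the bfloat16 data type mentioned above: bfloat16 keeps float32's 8-bit exponent (and therefore its dynamic range) but shortens the mantissa to 7 bits. A minimal pure-Python sketch of that precision trade-off, approximating bfloat16 by truncating the low 16 bits of a float32 (hardware typically rounds to nearest rather than truncating):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Approximate bfloat16 by zeroing the low 16 bits of the
    float32 representation (keeps sign, 8-bit exponent, 7 mantissa bits)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# Dynamic range is preserved, but fine precision is lost:
print(to_bfloat16(1.0))      # exact: 1.0
print(to_bfloat16(3.14159))  # rounded down to 3.140625
```

Values representable in 7 mantissa bits (like 1.0) survive unchanged; others lose low-order precision, which is usually acceptable for deep learning training and inference.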
Topics covered:
- How to quantize a model from FP32 or bfloat16 to int8, and an in-depth analysis of the performance speedup across data types (FP32, bfloat16, and int8)
- Model Zoo for Intel® architecture and low-precision tools included in AI Tools
- Efficiencies when building machine learning pipelines
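The int8 quantization covered in the first topic boils down to an affine mapping from floating-point values onto an 8-bit integer grid defined by a scale and a zero point. A minimal plain-Python sketch of that arithmetic (an illustration of the underlying mapping only, not the AI Tools quantization workflow):

```python
def quantize(values, num_bits=8):
    """Affine (asymmetric) quantization of floats to a signed int range."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1  # -128..127
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) if hi != lo else 1.0
    zero_point = round(qmin - lo / scale)
    # Map each float to the grid, clamping to the representable range.
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

vals = [-1.0, 0.0, 0.5, 2.0]
q, s, z = quantize(vals)
print(q)                    # int8 codes in [-128, 127]
print(dequantize(q, s, z))  # approximations of the original floats
```

Each dequantized value differs from its original by at most about half the scale, which is the precision cost traded for 4x smaller weights and faster int8 arithmetic (e.g. via Intel DL Boost).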
Get the Software
Download AI Tools for Linux*.
Other Resources
- Get the Jupyter* Notebook used in the first demonstration. The notebook helps users analyze the performance benefit of using Intel Optimization for TensorFlow with the Intel® oneAPI Deep Neural Network Library (oneDNN).
- Read the latest AI Analytics blogs on Medium.
- Develop in the cloud—sign up for an Intel® Tiber™ Developer Cloud account, a free development sandbox with access to the latest Intel® hardware and oneAPI software.
- Subscribe to Code Together—an interview series that explores the challenges at the forefront of cross-architecture development. Each biweekly episode features industry VIPs who are blazing new trails through today’s data-centric world. Listen and subscribe today.
- Alexa (Say “Alexa, play the podcast Code Together”)
- FeedBurner*
- iTunes*
- Spotify*
- Stitcher
- SoundCloud*
- TuneIn
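The Jupyter Notebook listed above compares TensorFlow runs with oneDNN optimizations on and off. In recent stock TensorFlow builds, the documented switch is the `TF_ENABLE_ONEDNN_OPTS` environment variable, which is read at import time and so must be set before TensorFlow is imported. A minimal sketch (the TensorFlow import is commented out so the snippet runs even without TensorFlow installed):

```python
import os

# TF_ENABLE_ONEDNN_OPTS toggles oneDNN optimizations in stock TensorFlow.
# It is read when the library loads, so set it before `import tensorflow`.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# import tensorflow as tf  # would now run with oneDNN optimizations enabled

print(os.environ["TF_ENABLE_ONEDNN_OPTS"])  # prints "1"
```

Running the same workload with the variable set to `0` and then `1` gives the baseline-versus-oneDNN comparison the notebook walks through. (In Intel Optimization for TensorFlow, oneDNN is enabled by default.)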
Louie Tsai
Software engineer, Intel Corporation
Louie is part of the Technical Computing, Analyzers, and Runtimes group at Intel. He is responsible for driving customer engagement with, and adoption of, Intel® Performance Libraries, taking advantage of the synergies between Python* and the Intel® Math Kernel Library (Intel® MKL). In addition, Louie works on embedded applications, with a particular focus on autonomous driving, helping customers optimize their deep learning workloads. Louie holds a master's degree in computer science and information engineering from National Chiao Tung University.