AI Performance Tuning Guide: Maximize the Potential of Intel® Hardware
Overview
Common challenges for AI developers are:
- Speeding up big networks to achieve real-time performance.
- Efficiently using hardware resources to save time and compute power.
- Debugging the AI “black box” to identify which API calls consume the most resources.
This webinar addresses all three issues using just two tools: Intel® Extension for PyTorch* and Intel® VTune™ Profiler.
Key topics covered:
- How to use Intel Extension for PyTorch to access the latest hardware optimizations and enhance an AI application with minimal code changes, applied to a real-world example (a minimal sketch follows this list)
- How to further tune an application through hardware and software configuration for more economical use of hardware resources
- How to use TorchServe with Intel Extension for PyTorch to serve and scale PyTorch models efficiently
- How Intel VTune Profiler can find hot spots in your code and recommend ways to fix them and further optimize the application (see the profiling sketch below)
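As a taste of the first topic, the sketch below shows how little code is typically needed to apply Intel Extension for PyTorch to an existing inference workload. The ResNet-50 model, the bfloat16 data type, and the channels-last memory format are illustrative choices rather than details from the webinar; `ipex.optimize()` is the extension's standard entry point for inference optimizations.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Any eval-mode model works here; ResNet-50 stands in for the webinar's real-world example.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Channels-last memory format and bfloat16 are common CPU tuning choices;
# ipex.optimize() applies Intel's operator fusions and layout optimizations.
model = model.to(memory_format=torch.channels_last)
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under autocast so bfloat16 kernels are used where supported.
data = torch.rand(1, 3, 224, 224).to(memory_format=torch.channels_last)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(data)
```

Note that the only extension-specific lines are the import and the `ipex.optimize()` call; everything else is ordinary PyTorch.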
Includes a live demo.
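For the profiling topic, one lightweight way to make VTune results easier to read is to wrap the workload in PyTorch's ITT annotation context, so the profiler can label time by operator when the script is launched under a VTune collection (for example, `vtune -collect hotspots -- python script.py`). The snippet below is a sketch of that pattern, reusing the `model` and `data` objects from the previous example.

```python
import torch

# emit_itt() emits ITT task markers around PyTorch operators, which Intel
# VTune Profiler picks up when this script runs under one of its analyses.
with torch.no_grad(), torch.autograd.profiler.emit_itt():
    output = model(data)
```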
Skill level: Intermediate
Featured Software
- Get the Intel Extension for PyTorch as a stand-alone component from GitHub* or as part of the AI Tools.
- Get Intel VTune Profiler as a stand-alone component or as part of the Intel® oneAPI Base Toolkit.
Download Code Samples
Find and fix performance bottlenecks, and optimize application performance, system performance, and system configuration for HPC, cloud, IoT, media, storage, and more with Intel VTune Profiler.
Intel is one of the largest contributors to PyTorch*, regularly upstreaming optimizations that deliver superior performance on Intel® architectures. The AI Tools includes the latest binary version of PyTorch, tested to work with the rest of the kit, along with Intel® Extension for PyTorch*, which adds the newest Intel optimizations and usability features.