Use this article as a guide to modeling the performance of your application on GPU platforms so that you can further optimize your code. It highlights the performance modeling capabilities of the Intel® Advisor Offload Modeling perspective.
Intel Advisor is available for download as a standalone installation and as part of the Intel® oneAPI Base Toolkit.
The Offload Modeling perspective identifies code regions that are profitable to offload to a GPU device, estimates the expected speedup of your code on a target GPU, pinpoints performance bottlenecks, and estimates the overhead of offloading, data transfer, and scheduling region execution on the target device.
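For example, you can run all Offload Modeling analyses from a terminal with a single command. The sketch below is illustrative only: ./advi_results and ./myApplication are placeholder names, gen12_tgl is just one example of a target device configuration, and the exact options and configuration names depend on your Intel Advisor version.

  # Collect performance data and model it for a target GPU in one step
  advisor --collect=offload --config=gen12_tgl --project-dir=./advi_results -- ./myApplication

  # Review the modeling results in the Intel Advisor GUI
  advisor-gui ./advi_results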
Understand Offload Modeling
Learn what the Offload Modeling perspective is and how it works using the following resources:
- Learn the basics of the Offload Modeling perspective in Intel Advisor Get Started: Identify High-impact Opportunities to Offload to a GPU.
- Video: Efficiently Offload to GPUs Using Intel® Advisor.
Explore More Offload Modeling Capabilities
- View the detailed description of the Offload Modeling perspective in the Intel Advisor User Guide: Offload Modeling Perspective.
- Video: Offload Your Code from CPU to GPU… and Optimize It!
Offload Modeling Use Cases
View step-by-step guides for the most common Offload Modeling usage scenarios in the Intel Advisor Cookbook:
- Identify Code Regions to Offload to GPU and Visualize GPU Usage
- Estimate the C++ Application Speedup on a Target GPU
- Use Intel Advisor Command Line Interface to Model GPU Performance (a command-line sketch follows this list)
- Model GPU Application Performance for a Different GPU Device
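The command-line scenario above typically breaks the modeling into separate analyses. The sketch below makes the same assumptions as the earlier one: placeholder project directory and binary names, example device configuration names, and option availability that varies by Intel Advisor version. Modeling for a different target GPU usually means rerunning the final projection step with a different --config value.

  # Step 1: Survey analysis with static instruction mix
  advisor --collect=survey --static-instruction-mix --project-dir=./advi_results -- ./myApplication

  # Step 2: Trip counts and FLOP analysis for the chosen target device
  advisor --collect=tripcounts --flop --target-device=gen12_tgl --project-dir=./advi_results -- ./myApplication

  # Step 3: Performance modeling (projection) for the target device
  advisor --collect=projection --config=gen12_tgl --project-dir=./advi_results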
Next Steps
- After running the Offload Modeling perspective, consider analyzing your GPU-bound application using the GPU Roofline perspective (see the sketch after this list). View the Roofline Resources page to learn more about identifying CPU- and GPU-imposed performance ceilings in your applications.
- Video: Heterogeneous Performance Analysis Using Intel® Analysis Tools. Explore how you can use Intel Advisor and Intel® VTune™ Profiler to identify regions of your CPU code that are the best candidates for offloading to a GPU and predict their performance.
- Learn more about improving the performance of applications running on a GPU in the oneAPI GPU Optimization Guide.
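As a starting point for the GPU Roofline step mentioned above, a GPU Roofline collection can also be run from the command line. The sketch below is illustrative: it uses the same placeholder names as before, requires an application that already runs on an Intel GPU, and option availability depends on your Intel Advisor version.

  # Collect GPU Roofline data (survey and trip counts with GPU profiling)
  advisor --collect=roofline --profile-gpu --project-dir=./advi_results -- ./myApplication

  # Open the GPU Roofline chart in the Intel Advisor GUI
  advisor-gui ./advi_results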
Notices and Disclaimers
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.