Intel® Advisor User Guide


Offload Modeling Accuracy Presets

For each perspective, Intel® Advisor offers several levels of collection accuracy. Each accuracy level is a preset of analyses and properties that controls what data is collected and how detailed the collection is. The higher the accuracy level you choose, the higher the runtime overhead.

The following accuracy levels are available:

Low
Overhead: 5 - 10x
Goal: Model the performance of an application that is mostly compute bound and does not have dependencies.
Analyses: Survey + Characterization (Trip Counts and FLOP) + Performance Modeling with no assumed dependencies.
Result: Basic Offload Modeling report that shows potential speedup and performance metrics estimated on a target, considering memory traffic from execution units to the L1 cache only. The result might be inaccurate for memory-bound applications.

Medium
Overhead: 15 - 50x
Goal: Model application performance considering memory traffic for all cache and memory levels.
Analyses: Survey + Characterization (Trip Counts and FLOP with cache simulation for the selected target device, call stacks, and light data transfer simulation) + Performance Modeling with no assumed dependencies.
Result: Offload Modeling report extended with data transfers estimated between the host and device platforms, considering memory traffic for all cache and memory levels.

High
Overhead: 50 - 80x
Goal: Model application performance with all potential limitations for offload candidates.
Analyses: Survey + Characterization (Trip Counts and FLOP with cache simulation for the selected target device, call stacks, and medium data transfer simulation) + Dependencies + Performance Modeling with assumed dependencies.
Result: Offload Modeling report with detailed data transfer estimations and an automated check for loop-carried dependencies for a more accurate search for the most profitable regions to offload.
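From the command line, you can select these presets with the --accuracy option. The lines below are a minimal sketch assuming a recent Intel Advisor release that supports the --collect=offload batch collection; the project directory ./advi_results and the application ./myApplication are placeholders:

advisor --collect=offload --accuracy=low --project-dir=./advi_results -- ./myApplication
advisor --collect=offload --accuracy=medium --project-dir=./advi_results -- ./myApplication
advisor --collect=offload --accuracy=high --project-dir=./advi_results -- ./myApplication

Each command runs the set of analyses listed for the corresponding preset and produces the matching Offload Modeling report.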

You can also choose a custom accuracy and set a custom perspective flow for your application, as shown in the sketch below. For more information, see Customize Offload Modeling Perspective.
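For example, a custom flow roughly equivalent to the Medium preset can be run as a sequence of individual analyses. This is a sketch under the assumption that your Intel Advisor version accepts these analysis types and options; the project directory ./advi_results, the application ./myApplication, and the target device name xehpg_512xve are illustrative placeholders:

advisor --collect=survey --project-dir=./advi_results --static-instruction-mix -- ./myApplication
advisor --collect=tripcounts --project-dir=./advi_results --flop --stacks --enable-cache-simulation --target-device=xehpg_512xve --data-transfer=light -- ./myApplication
advisor --collect=projection --project-dir=./advi_results --no-assume-dependencies

Adding a Dependencies collection before the projection step and raising --data-transfer to medium moves the flow closer to the High preset.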

NOTE:
Various techniques are available to minimize data collection, result size, and execution overhead. See Minimize Analysis Overhead.