Visible to Intel only — GUID: GUID-28F677CF-E404-4A8A-BD56-B9D3781579F5
Customize GPU Roofline Insights Perspective
Customize the perspective flow to better fit your goal and your application.
If you change any of the analysis settings from the Analysis Workflow tab, the accuracy level changes to Custom automatically. With this accuracy level, you can customize the perspective flow and/or analysis properties.
To change the properties of a specific analysis:
- Expand the analysis details on the Analysis Workflow pane.
- Select desired settings.
- For more detailed customization, click the gear icon. The Project Properties dialog box opens for the selected analysis.
- Select desired properties and click OK.
For a full set of available properties, click the icon on the left-side pane or go to File > Project Properties.
The following tables cover project properties applicable to the analyses in the GPU Roofline Insights perspective.
Common Properties
Use This | To Do This
---|---
Target type drop-down |
If you choose Attach to Process, you can either inherit settings from the Survey Hotspots Analysis Type or specify the needed settings. |
Inherit settings from Visual Studio project checkbox and field (Visual Studio* IDE only) |
Inherit Intel Advisor project properties from the Visual Studio* startup project (enable). If enabled, the Application, Application parameters, and Working directory fields are pre-filled and cannot be modified.
NOTE:
In Visual Studio* 2022, Intel Advisor provides lightweight integration. You can configure and compile your application and open the standalone Intel Advisor interface from Visual Studio for further analysis. All your settings are inherited by the standalone Intel Advisor project.
|
Application field and Browse... button |
Select an analysis target executable or script. If you specify a script in this field, consider specifying the executable in the Advanced > Child application field (required for Dependencies analysis). |
Application parameters field and Modify... button |
Specify runtime arguments to use when performing analysis (equivalent to command line arguments). |
Use application directory as working directory checkbox |
Automatically use the value in the Application directory to pre-fill the Working directory value (enable). |
Working directory field and Browse... button |
Select the working directory. |
User-defined environment variables field and Modify... button |
Specify environment variables to use during analysis. |
Managed code profiling mode drop-down |
|
Child application field |
Analyze a file that is not the starting application. For example: Analyze an executable (identified in this field) called by a script (identified in the Application field). Invoking these properties could decrease analysis overhead.
NOTE:
For the Dependencies Analysis Type: If you specify a script file in the Application field, you must specify the target executable in the Child application field. |
Modules radio buttons, field, and Modify... button |
Including/excluding modules could minimize analysis overhead. |
GPU kernels of interest field and Modify... button |
Analyze specific kernels only, minimizing analysis overhead. |
Use MPI launcher checkbox |
Generate a command line (enable) that appears in the Get command line field based on the following parameters:
|
Automatically stop collection after (sec) checkbox and field |
Stop collection after a specified number of seconds (enable and specify seconds). Invoking this property could minimize analysis overhead. |
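Several of the common properties above map directly onto Intel Advisor command-line options. The sketch below assembles an `advisor` command line for a GPU Roofline collection from GUI-style inputs; the `--duration` flag name for "Automatically stop collection after (sec)" is an assumption, so verify all option names against `advisor --help` for your version.

```python
# Sketch: build an Intel Advisor CLI command mirroring the GUI
# "Common Properties" (application, application parameters, stop timer).
# --collect=roofline and --profile-gpu follow the documented Advisor CLI;
# --duration is a hypothetical flag name for the stop-after timer.

def build_roofline_cmd(app, app_args=(), project_dir="./advi_results",
                       stop_after_sec=None):
    """Return the argv list for a GPU Roofline collection."""
    cmd = ["advisor", "--collect=roofline", "--profile-gpu",
           f"--project-dir={project_dir}"]
    if stop_after_sec is not None:
        # GUI: "Automatically stop collection after (sec)" checkbox
        cmd.append(f"--duration={stop_after_sec}")  # hypothetical flag name
    cmd += ["--", app, *app_args]  # "--" separates Advisor and target args
    return cmd

print(build_roofline_cmd("./myapp", ["--size", "1024"], stop_after_sec=30))
```

Keeping the target application and its parameters after the `--` separator matches the Application and Application parameters fields in the GUI.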
Survey Analysis Properties
Use This | To Do This
---|---
Automatically resume collection after (sec) checkbox and field |
Start running your target application with collection paused, then resume collection after a specified number of seconds (enable and specify seconds). Invoking this property could decrease analysis overhead.
TIP:
The corresponding CLI action option is --resume-after=<integer>, where the integer argument is in milliseconds, not seconds. |
Sampling Interval selector |
Set the wait time between each analysis collection CPU sample while your target application is running. Increasing the wait time could decrease analysis overhead. |
Collection data limit, MB selector |
Limit the amount of raw data collected if exceeding a size threshold could cause issues. Not available for hardware event-based analyses. Decreasing the limit could decrease analysis overhead. |
Callstack unwinding mode drop-down list |
Set to After collection if:
Otherwise, set to During Collection. This mode improves stack accuracy but increases overhead. |
Stitch stacks checkbox |
Restore a logical call tree for Intel® oneAPI Threading Building Blocks (oneTBB) or OpenMP* applications by catching notifications from the runtime and attaching stacks to a point introducing a parallel workload (enable). Disable if Survey analysis runtime overhead exceeds 1.1x. |
Analyze MKL Loops and Functions checkbox |
Show Intel® oneAPI Math Kernel Library (oneMKL) loops and functions in Intel Advisor reports (enable). Enabling could increase analysis overhead. |
Analyze Python loops and functions checkbox |
Show Python* loops and functions in Intel Advisor reports (enable). Enabling could increase analysis overhead. |
Analyze loops that reside in non-executed code paths checkbox |
Collect a variety of data during analysis for loops that reside in non-executed code paths, including loop assembly code, instruction set architecture (ISA), and vector length (enable). Enabling could increase analysis overhead.
NOTE:
Analyzing non-executed code paths in binaries that target multiple ISAs (contain multiple code paths) is available only for binaries compiled using the -ax (Linux* OS) / Qax (Windows* OS) option with an Intel compiler. |
Enable register spill/fill analysis checkbox |
Calculate the number of consecutive load/store operations in registers and related memory traffic (enable). Enabling could increase analysis overhead. |
Enable static instruction mix analysis checkbox |
Statically calculate the number of specific instructions present in the binary (enable). Enabling could increase analysis overhead. |
Source caching drop-down list |
|
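As the TIP above notes, the GUI takes the resume delay in seconds while the CLI `--resume-after` option expects milliseconds. A small helper, shown here as an illustrative sketch, avoids the unit mismatch when scripting collections:

```python
# Sketch: convert a GUI-style "resume after (sec)" value into the
# CLI form of --resume-after, which takes milliseconds.

def resume_after_flag(seconds):
    """Convert a seconds value to the --resume-after milliseconds flag."""
    if seconds < 0:
        raise ValueError("resume delay must be non-negative")
    return f"--resume-after={int(seconds * 1000)}"

print(resume_after_flag(5))  # --resume-after=5000
```

Passing the GUI value straight through (e.g. `--resume-after=5`) would resume after 5 ms rather than 5 s, so the conversion matters in practice.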
Trip Counts and FLOP Analysis Properties
Use This | To Do This
---|---
Inherit settings from the Survey Hotspots Analysis Type checkbox |
Copy similar settings from Survey analysis properties (enable). When enabled, this option disables application parameters controls. |
Automatically resume collection after (sec) checkbox and field |
Start running your target application with collection paused, then resume collection after a specified number of seconds (enable and specify seconds). Invoking this property could decrease analysis overhead.
TIP:
The corresponding CLI action option is --resume-after=<integer>, where the integer argument is in milliseconds, not seconds. |
Collect information about Loop Trip Counts checkbox |
Measure loop invocation and execution (enable). |
Collect information about FLOP, L1 memory traffic, and AVX-512 mask usage checkbox |
Measure floating-point operations, integer operations, and memory traffic (enable). |
Collect stacks checkbox |
Collect call stack information when performing analysis (enable). Enabling could increase analysis overhead. |
Capture metrics for dynamic loops and functions checkbox |
Collect metrics for dynamic Just-In-Time (JIT) generated code regions. |
Enable Memory-Level Roofline with cache simulation checkbox |
Model multiple levels of cache for data, such as counts of loaded or stored bytes for each loop, to plot the Roofline chart for all memory levels (enable). Enabling could increase analysis overhead.
NOTE:
This option is applicable to CPU Roofline only.
|
Cache simulator configuration field |
Specify a cache hierarchy configuration to model (enable and specify hierarchy).
NOTE:
This option is applicable to CPU Roofline only.
The hierarchy configuration template is:
[num_of_level1_caches]:[num_of_ways_level1_connected]:[level1_cache_size]:[level1_cacheline_size]/
[num_of_level2_caches]:[num_of_ways_level2_connected]:[level2_cache_size]:[level2_cacheline_size]/
[num_of_level3_caches]:[num_of_ways_level3_connected]:[level3_cache_size]:[level3_cacheline_size]
For example, 4:8w:32k:64l/4:4w:256k:64l/1:16w:6m:64l is the hierarchy configuration for:
- Four eight-way 32-KB L1 caches with 64-byte cache lines
- Four four-way 256-KB L2 caches with 64-byte cache lines
- One 16-way 6-MB L3 cache with 64-byte cache lines
|
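To make the template above concrete, the sketch below decodes a configuration string of that form into per-level fields. This is illustrative only; Advisor parses the string internally, and the field layout here follows the template as documented.

```python
# Sketch: decode a cache-simulator configuration string such as
# "4:8w:32k:64l/4:4w:256k:64l/1:16w:6m:64l" (levels separated by "/",
# fields by ":"; "w" = ways, "k"/"m" size suffixes, "l" = line size).

UNITS = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}

def parse_cache_config(config):
    """Return one dict per cache level with sizes converted to bytes."""
    levels = []
    for level in config.split("/"):
        count, ways, size, line = level.split(":")
        levels.append({
            "num_caches": int(count),
            "ways": int(ways.rstrip("w")),
            "size_bytes": int(size[:-1]) * UNITS[size[-1]],
            "line_bytes": int(line.rstrip("l")),
        })
    return levels

cfg = parse_cache_config("4:8w:32k:64l/4:4w:256k:64l/1:16w:6m:64l")
print(cfg[0])  # L1 level: four 8-way 32-KB caches with 64-byte lines
```

Walking the example through the parser confirms the reading given above: four 8-way 32-KB L1 caches, four 4-way 256-KB L2 caches, and one 16-way 6-MB L3 cache, all with 64-byte cache lines.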