Check How Assumed Dependencies Affect Modeling
If a loop has dependencies, it cannot be run in parallel and in most cases cannot be offloaded to the GPU. Intel Advisor can get information about loop-carried dependencies from the following sources:
- Using Intel® Compiler diagnostics. For some loops, dependencies are found at compile time, and the diagnostics are passed to Intel Advisor through its integration with Intel Compilers.
- Parsing the application call stack tree. If a loop is parallelized or vectorized on a CPU, or is already offloaded to a GPU but executed on a CPU, Intel Advisor assumes that you resolved the loop-carried dependencies before parallelizing or offloading the loop.
- Using the Dependencies analysis results. This analysis detects dependencies for most loops at run time, but the result might depend on the application workload. It also adds high overhead, making the application execute 5 to 100 times slower during the analysis. To reduce the overhead, you can use various techniques, for example, mark up only loops of interest.
For the Offload Modeling perspective, the Dependencies analysis is optional, but it might add important information about loop-carried dependencies that helps Intel® Advisor decide whether a loop is profitable to run on a graphics processing unit (GPU).
This topic describes a workflow that you can follow to understand if there are potential loop-carried dependencies in your code that might affect its performance on a target GPU.
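For context, the difference between a loop with a loop-carried dependency and a loop with independent iterations can be illustrated with a minimal, hypothetical C++ sketch (the function and variable names are illustrative and not taken from any particular application):
#include <cstddef>
#include <vector>

// Loop-carried dependency: each iteration reads the value written by the
// previous one (a prefix sum), so the iterations cannot run in parallel as
// written, and the loop is not a good offload candidate unless the
// dependency is resolved.
void prefix_sum(std::vector<float> &a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        a[i] += a[i - 1];   // reads the result of the previous iteration
    }
}

// No loop-carried dependency: every iteration touches only its own element,
// so the loop can be parallelized, vectorized, or offloaded.
void scale(std::vector<float> &a, float factor) {
    for (std::size_t i = 0; i < a.size(); ++i) {
        a[i] *= factor;     // iterations are independent
    }
}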
Note: In the commands below, make sure to replace myApplication with your application executable path and name before executing a command. If your application requires additional command line options, add them after the executable name.
Verify Assumed Dependencies
If you do not know which dependency types are present in your application, run the Offload Modeling without the Dependencies analysis first to check whether potential dependencies affect the modeling results and to decide if you need to run the Dependencies analysis:
- Run the Offload Modeling without the Dependencies analysis.
- From GUI: Select Medium accuracy level and enable the Assume Dependencies option for the Performance Modeling in the Analysis Workflow tab. Run the perspective.
- From CLI: Run the following analyses, for example, using the advisor command line interface:
advisor --collect=survey --project-dir=./advi_results --static-instruction-mix -- ./myApplication
advisor --collect=tripcounts --project-dir=./advi_results --flop --stacks --enable-cache-simulation --target-device=xehpg_512xve --data-transfer=light -- ./myApplication
advisor --collect=projection --project-dir=./advi_results
- Open the generated report and go to the Accelerated Regions tab.
- In the Code Regions pane, expand the Measured column group and examine the Dependency Type column.
- You do not need to run the Dependencies analysis for loops with the following dependency types:
- Parallel: Programming Model dependency type means that the loop uses the SYCL, OpenCL™, or OpenMP* target programming model.
- Parallel: Explicit dependency type means that the loop is threaded and vectorized on CPU (for example, with OpenMP parallel for or Intel® oneAPI Threading Building Blocks parallel for).
- Parallel: Proven dependency type means that an Intel Compiler found no dependencies at compile time.
- You might need to run the Dependencies analysis for loops that have the Dependency: Assumed dependency type. This type means that Intel Advisor has no information about loop-carried dependencies for these loops and does not consider them offload candidates (a typical example is sketched after this procedure).
- If you see many Dependency: Assumed types, rerun the performance modeling with assumed dependencies ignored, as follows:
- From GUI: Select only the Performance Modeling step in the Analysis Workflow tab and disable the Assume Dependencies option. Run the perspective.
- From CLI: Run the Performance Modeling with one of the following options:
- Use --no-assume-dependencies to ignore assumed dependencies for all loops/functions. For example:
advisor --collect=projection --project-dir=./advi_results --no-assume-dependencies
- Use --set-parallel=[<loop-ID1>|<file-name1>:<line1>,<loop-ID2>|<file-name2>:<line2>,...] to ignore assumed dependencies for specific loops/functions only. Use this option if you know that some loops/functions have dependencies and you do not want to model them as parallel. For example:
advisor --collect=projection --project-dir=./advi_results --set-parallel=foo.cpp:34,bar.cpp:192
- Review the generated results to check whether the potential dependencies might block offloading to the GPU.
Loops that previously had Dependency: Assumed dependency type are now marked as Parallel: Assumed. Intel Advisor models their performance on the target GPU and checks potential offload profitability and speedup.
- Compare the program metrics calculated with and without assumed dependencies, such as speedup, number of offloads, and estimated accelerated time.
- If the difference is small, for example, 1.5x speedup with assumed dependencies and 1.6x speedup without assumed dependencies, you can skip the Dependencies analysis and rely on the current estimations. In this case, most loops with potential dependencies are not profitable to be offloaded and do not add much speedup to the application on the target GPU.
- If the difference is big, for example, 2x speedup with assumed dependencies and 40x speedup without assumed dependencies, you should run the Dependencies analysis. In this case, the information about loop-carried dependencies is critical for correct performance estimation.
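As an illustration of the dependency types listed above, the following hypothetical C++ sketch shows a loop that is typically reported as Dependency: Assumed, because the compiler cannot prove that the pointers do not alias, next to the same loop with an explicit OpenMP annotation. The names are illustrative, and the reported type may vary with the compiler and options used:
// Typically reported as Dependency: Assumed: the compiler cannot prove that
// dst and src point to disjoint memory, so Intel Advisor assumes a
// loop-carried dependency and does not consider the loop an offload candidate.
void copy_scaled(float *dst, const float *src, int n, float factor) {
    for (int i = 0; i < n; ++i) {
        dst[i] = src[i] * factor;
    }
}

// Typically reported as Parallel: Explicit: the OpenMP annotation tells the
// compiler and Intel Advisor that you already resolved any dependencies.
void copy_scaled_parallel(float *dst, const float *src, int n, float factor) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        dst[i] = src[i] * factor;
    }
}
For the first variant, you can either run the Dependencies analysis to verify the dependency at run time or, if you know that the buffers never overlap, model the loop as parallel with the --no-assume-dependencies or --set-parallel options described above.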
Run the Dependencies Analysis
To check for real dependencies in your code, run the Dependencies analysis and rerun the Performance Modeling to get more accurate estimations of your application performance on GPU:
- From GUI:
- Enable only the Dependencies and Performance Modeling analyses in the Analysis Workflow tab.
By default, the generic markup strategy is applied, so the Dependencies analysis runs only for potentially profitable loops.
- Rerun the perspective with only these two analyses enabled.
- From CLI:
- Run the Dependencies analysis for potentially profitable loops only (a note about the --filter-reductions option follows at the end of this topic):
advisor --collect=dependencies --select markup=gpu_generic --loop-call-count-limit=16 --filter-reductions --project-dir=./advi_results -- ./myApplication
- Run the Performance Modeling analysis:
advisor --collect=projection --project-dir=./advi_results
Open the result in the Intel Advisor, view the interactive HTML report, or print it to the command line. Continue to investigate the results and identify code regions to offload.
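A note on the --filter-reductions option used with the Dependencies command above: it marks potential reductions with a separate diagnostic so that you can distinguish them from other loop-carried dependencies. A reduction does carry a value across iterations, but it can usually be parallelized with a reduction clause or a built-in GPU reduction, so it does not have to block offloading. A minimal, hypothetical C++ example of such a loop (names are illustrative):
// A reduction: sum carries a value across iterations, but this pattern can be
// parallelized safely with a reduction clause, so it does not have to block
// offloading the loop to a GPU.
float dot(const float *a, const float *b, int n) {
    float sum = 0.0f;
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i) {
        sum += a[i] * b[i];
    }
    return sum;
}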