5.5. Temporal Performance Collection
While your host application runs, the Profiler samples the performance counters at a given sampling period n. Every n clock cycles, the Profiler collects the performance counter data and writes it to the profile.mon monitor file.
- You can control how often the Profiler counters are sampled by setting the Profiler Runtime Wrapper's -period flag. The specified period is the minimum number of kernel pipeline clock cycles between profiling samples. If you do not set a period, the Profiler samples the counters as often as possible. Example invocations follow this list.
Note: For particularly large or long-running designs, the amount of data generated at the default sampling period might result in very large profile.mon and profile.json files. To reduce these file sizes, either increase the sampling period or turn off temporal profiling.
- To turn off temporal profiling and instead collect performance data only after a kernel has finished executing, you can set the Profiler Runtime Wrapper’s -no-temporal flag.
- If you disable temporal profiling, the Profiler does not automatically collect profiling information for autorun kernels, because autorun kernels never finish executing. Instead, use the host API call clGetProfileDataDeviceIntelFPGA to obtain profiling data from autorun kernels; a host-code sketch follows this list. For more information about triggering profiling from your host application, refer to Collecting Profile Data During Kernel Execution in the Intel FPGA SDK for OpenCL Pro Edition: Programming Guide.
Note: If you collect performance data only at the end of execution, the reported values are an average over the kernel's entire execution.
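The following invocations show how you might pass these flags to the Profiler Runtime Wrapper. They are illustrative only: the aocl profile command and the exact argument syntax are assumptions here, so confirm the wrapper's usage in the Intel FPGA SDK for OpenCL Pro Edition: Programming Guide before relying on them.

    aocl profile -period 10000 ./host    # sample the counters at most once every 10000 kernel clock cycles (hypothetical syntax)
    aocl profile -no-temporal ./host     # collect counter data only after each kernel finishes (hypothetical syntax)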
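The sketch below shows one way a host application might trigger that readback for autorun kernels. It is a minimal illustration, not the definitive usage: the prototype and argument ordering of clGetProfileDataDeviceIntelFPGA follow the description in the Programming Guide, and the specific argument values chosen here are assumptions; verify both against Collecting Profile Data During Kernel Execution and the headers shipped with your SDK.

    #include <CL/opencl.h>  /* Intel FPGA SDK host headers; the extension
                               prototype is provided by the SDK's CL/cl_ext.h */

    /* Read back profiling counters for autorun kernels while the design is
       still running. The device and program arguments are the ones your host
       already created for the instrumented .aocx. */
    void read_autorun_profile_data(cl_device_id device, cl_program program)
    {
        cl_int status = clGetProfileDataDeviceIntelFPGA(
            device,    /* device executing the autorun kernels              */
            program,   /* program containing the instrumented kernels       */
            CL_FALSE,  /* read_enqueue_kernels                              */
            CL_TRUE,   /* read_auto_enqueued: collect autorun counters      */
            CL_FALSE,  /* clear_counters_after_readback (assumed parameter) */
            0, NULL, NULL, NULL);  /* reserved arguments; pass 0/NULL       */

        if (status != CL_SUCCESS) {
            /* Handle the failure as appropriate for your host application. */
        }
    }

In typical use, the host calls this while the autorun kernels are active so that their counters are captured in the profiling output.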