3.4.1. Trade-Off Between Initiation Interval and Maximum Frequency
The offline compiler attempts to achieve an II of 1 for a given loop whenever possible. In some cases, the offline compiler achieves an II of 1 only at the expense of a reduced fMAX.
Consider the following example:
kernel void lowered_fmax (global int *dst, int N) {
  int res = N;
  #pragma unroll 9
  for (int i = 0; i < N; i++) {
    res += 1;
    res ^= i;
  }
  dst[0] = res;
}
The following figure shows the datapath of the loop in kernel lowered_fmax. The loop is partially unrolled by a factor of 9, so the datapath contains nine copies of the original loop's body. To save space, only three of these copies are depicted in the following figure:
Figure 53. Datapath of the Partially Unrolled Loop in Kernel lowered_fmax
The loop in kernel lowered_fmax has a loop-carried dependence involving the res variable. This loop-carried dependence forms a cycle in the loop's datapath, as shown in Datapath of the Partially Unrolled Loop in Kernel lowered_fmax.
Note: The value of res from one iteration must be available when the next iteration is launched. Therefore, if the loop is to achieve II=1, this cycle can contain at most one register. Because of the partial unrolling, the cycle contains a chain of nine additions and XORs, so fMAX must be lowered for this chain of operations to complete within one clock cycle. The offline compiler may lower the kernel's fMAX to achieve II=1, because II is an important factor in achieving good performance. Consult the HTML report to find loops whose loop-carried dependencies limit fMAX.
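If a higher fMAX matters more to your design's throughput than an II of 1 for this loop, one option is to relax the loop's initiation interval explicitly so that the scheduler may place two registers in the res dependency cycle, giving the chain of nine additions and XORs two clock cycles to settle. The sketch below assumes your SDK version supports the loop ii pragma; the kernel name improved_fmax and the II value of 2 are illustrative, not prescribed by this section. The macro stubs at the top exist only so the sketch also compiles as plain C for checking the arithmetic; omit them in real kernel source.

```c
/* OpenCL qualifiers stubbed out so this sketch also builds as plain C
 * for checking the arithmetic; not needed in real kernel source. */
#define kernel
#define global

/* Hypothetical variant of lowered_fmax: relaxing II to 2 permits two
 * registers in the res dependency cycle, so the unrolled chain of nine
 * additions and XORs no longer has to complete in a single clock cycle,
 * allowing the offline compiler to keep fMAX high. */
kernel void improved_fmax (global int *dst, int N) {
  int res = N;
  #pragma unroll 9
  #pragma ii 2
  for (int i = 0; i < N; i++) {
    res += 1;
    res ^= i;
  }
  dst[0] = res;
}
```

With II=2, a new iteration launches every other clock cycle, so the loop's per-cycle throughput is halved; weigh that loss against the frequency gain for the rest of the design, and use the throughput analysis in the HTML report to judge which side of the trade-off wins.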