task_sequence Use Cases
Using a task_sequence in your kernel enables a variety of design structures. A common use of the task_sequence class is executing multiple loops in parallel, as described in the following section.
Executing Multiple Loops in Parallel
Using the task_sequence class, you can run loops in parallel that would otherwise execute one after the other within the same kernel.
For example, in the following code sample, the first and second loops cannot execute in parallel within a single invocation of the kernel that contains them:
// first loop
for (int i = 0; i < n; i++) {
  // Do something
}
// second loop
for (int i = 0; i < m; i++) {
  // Do something else
}
To run these loops in parallel within the same kernel invocation, refactor each loop into its own function at program scope:
// program scope
void firstLoop() {
  for (int i = 0; i < n; i++) {
    // Do something
  }
}
void secondLoop() {
  for (int i = 0; i < m; i++) {
    // Do something else
  }
}
Then, in the kernel, declare a task_sequence object for each function, launch both functions asynchronously with async(), and synchronize on their completion with get():
// in kernel code
using namespace sycl::ext::intel::experimental;
task_sequence<firstLoop> firstTask;
task_sequence<secondLoop> secondTask;
firstTask.async();
secondTask.async();
firstTask.get();
secondTask.get();
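Putting these pieces together, the following is a minimal, self-contained sketch of the pattern. The header path, the queue setup, unified shared memory (malloc_shared), the array sizes kN and kM, the pointer arguments, and the loop bodies are illustrative assumptions rather than part of the example above; only the task_sequence declarations and the async()/get() calls follow the pattern shown.
#include <sycl/sycl.hpp>
#include <sycl/ext/intel/experimental/task_sequence.hpp>  // header path assumed

using namespace sycl;
using namespace sycl::ext::intel::experimental;

constexpr int kN = 1024;  // illustrative sizes
constexpr int kM = 512;

// Each loop is refactored into a function at program scope so that it can
// parameterize a task_sequence.
void firstLoop(int *a) {
  for (int i = 0; i < kN; i++)
    a[i] += 1;  // do something
}

void secondLoop(int *b) {
  for (int i = 0; i < kM; i++)
    b[i] *= 2;  // do something else
}

int main() {
  queue q;  // assumes an FPGA device or the FPGA emulator with USM support
  int *a = malloc_shared<int>(kN, q);
  int *b = malloc_shared<int>(kM, q);
  for (int i = 0; i < kN; i++) a[i] = i;
  for (int i = 0; i < kM; i++) b[i] = i;

  q.single_task([=] {
     // Declaring a task_sequence object instantiates hardware for the named function.
     task_sequence<firstLoop> firstTask;
     task_sequence<secondLoop> secondTask;
     firstTask.async(a);   // launches without blocking
     secondTask.async(b);  // both loops now run concurrently
     firstTask.get();      // get() blocks until the corresponding
     secondTask.get();     // asynchronous invocation completes
   }).wait();

  free(a, q);
  free(b, q);
  return 0;
}
Because async() returns immediately, the second loop is launched before the first completes, so the two loops execute concurrently; the get() calls ensure the kernel does not finish until both loops have completed.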
Parent topic: System of Tasks Extension (task_sequence)