Pipelined Kernels
By default, SYCL* task kernels are not pipelined; they execute back to back, and you must wait for the previous invocation to finish before invoking the kernel again.
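For example, a minimal sketch of this default back-to-back behavior, assuming the MyIP functor defined in the example below, an existing queue q, and a device-accessible input_array of kN elements:

// Non-pipelined (default) invocation: wait for each invocation to finish
// before launching the next one.
for (int i = 0; i < kN; i++) {
  q.single_task(MyIP{&input_array[i]}).wait();
}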
However, streaming kernels can optionally be pipelined by applying the streaming_pipelined_interface macro, as shown in the following example:
struct MyIP {
  conduit int *input;
  MyIP(int *inp_a_) : input(inp_a_) {}

  streaming_pipelined_interface void operator()() const {
    int temp = *input;
    *input = something_complicated(temp);
  }
};

To exercise the pipelined nature of the kernel in simulation, you must queue up multiple invocations of the kernel before you call the wait() function. The following code example shows how to exercise a pipelined kernel:

for (int i = 0; i < kN; i++) {
  q.single_task(MyIP{&input_array[i]});
}
q.wait();
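The following is a minimal, self-contained host-side sketch of this pattern, not the guide's reference code. The conduit and streaming_pipelined_interface macros are stubbed out as empty so the sketch compiles for functional testing on the FPGA emulator, and something_complicated and kN are hypothetical placeholders. In a real IP-authoring design, those macros expand to the argument and invocation interface annotations described elsewhere in this guide, and you would target the simulator to observe the pipelined behavior.

#include <iostream>
#include <sycl/sycl.hpp>
#include <sycl/ext/intel/fpga_extensions.hpp>

// Illustrative placeholders only: left empty so the sketch compiles for
// functional emulation. In a real design these expand to the conduit
// argument and streaming pipelined invocation interface annotations.
#define conduit
#define streaming_pipelined_interface

// Hypothetical stand-in for the computation in the example above.
static int something_complicated(int x) { return x * 2 + 1; }

struct MyIP {
  conduit int *input;
  MyIP(int *inp_a_) : input(inp_a_) {}
  streaming_pipelined_interface void operator()() const {
    int temp = *input;
    *input = something_complicated(temp);
  }
};

constexpr int kN = 8;

int main() {
  // FPGA emulator for quick functional checks; use
  // sycl::ext::intel::fpga_simulator_selector_v to observe pipelining.
  sycl::queue q{sycl::ext::intel::fpga_emulator_selector_v};

  // USM shared allocation so the kernel can dereference the raw pointer.
  int *input_array = sycl::malloc_shared<int>(kN, q);
  for (int i = 0; i < kN; i++) input_array[i] = i;

  // Queue up multiple invocations before calling wait() so that, in
  // simulation, successive invocations can overlap in the kernel pipeline.
  for (int i = 0; i < kN; i++) {
    q.single_task(MyIP{&input_array[i]});
  }
  q.wait();

  for (int i = 0; i < kN; i++) {
    std::cout << input_array[i] << "\n";
  }
  sycl::free(input_array, q);
  return 0;
}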