Intel® FPGA AI Suite: SoC Design Example User Guide

ID 768979
Date 9/06/2023
Public



8.1.2.3. Nios® V Stream Controller State Machine Buffer Flow

When the network is loaded into the coredla_device and external streaming has been enabled, a connection to the Nios® V processor is created and an InitializeScheduler message is sent. This message resets the stream controller, sets the size of the raw input buffers, and sets the drop/receive ratio for buffers taken from the input stream.
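
A minimal sketch of how such a message might be handled on the Nios® V firmware side is shown below. The structure layout, field names, and the handleInitializeScheduler() and resetStreamController() helpers are assumptions for illustration only; the actual message format is defined by the design example sources.

    #include <cstdint>

    // Hypothetical message layout; the real field names and widths may differ.
    struct InitializeSchedulerMsg {
        uint32_t rawBufferSizeBytes;   // size of each raw input buffer
        uint32_t dropReceiveRatio;     // drop/receive ratio for the input stream
    };

    // Assumed controller-side parameters recorded from the message.
    static uint32_t g_rawBufferSizeBytes = 0;
    static uint32_t g_dropReceiveRatio   = 0;

    void resetStreamController();      // hypothetical: return to the idle state

    void handleInitializeScheduler(const InitializeSchedulerMsg &msg) {
        resetStreamController();                        // reset the stream controller
        g_rawBufferSizeBytes = msg.rawBufferSizeBytes;  // record raw buffer size
        g_dropReceiveRatio   = msg.dropReceiveRatio;    // record drop/receive ratio
    }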

The inference application queries the plugin for the number of inference requests to create. When these requests are scheduled with the inference engine, they send ScheduleItem commands to the stream controller, and a corresponding CoreDlaJobItem is created for each one. Each CoreDlaJobItem records the input buffer address and size, and has flags that indicate whether it has a source buffer and whether it has been scheduled for inference on the Intel® FPGA AI Suite IP. The CoreDlaJobItem instances are handled as if they were in a circular buffer.

When the Nios® V stream controller has received a ScheduleItem command from all of the inference requests and created a CoreDlaJobItem instance for each of them, it changes to a running state. This arms the mSGDMA stream to receive buffers and sets a pointer, pFillingImageJob, that identifies which buffer is the next to be filled.
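
One way the job bookkeeping and the transition to the running state could look in firmware is sketched below. Only the names CoreDlaJobItem and pFillingImageJob come from this description; the field names, the fixed job count, the armMsgdma() helper, and the assumption that a ScheduleItem carries the buffer address and size are all illustrative.

    #include <cstdint>
    #include <cstddef>

    struct CoreDlaJobItem {
        uint64_t inputBufferAddr = 0;      // input buffer address in DDR memory
        uint32_t inputBufferSize = 0;      // input buffer size in bytes
        bool     hasSourceBuffer = false;  // a source buffer has been received
        bool     scheduledOnIp   = false;  // scheduled on the FPGA AI Suite IP
    };

    constexpr size_t kNumJobs = 4;         // one per inference request (assumed)
    CoreDlaJobItem  g_jobs[kNumJobs];
    CoreDlaJobItem *pFillingImageJob = nullptr;   // next job to be filled
    size_t          g_jobsCreated    = 0;

    // Treat g_jobs as a circular buffer: the job after the last wraps to the first.
    inline CoreDlaJobItem *nextJob(CoreDlaJobItem *job) {
        return (job + 1 == g_jobs + kNumJobs) ? g_jobs : job + 1;
    }

    void armMsgdma(uint64_t addr, uint32_t size);  // hypothetical mSGDMA wrapper

    // Called for each ScheduleItem command received before the running state.
    void onScheduleItemDuringInit(uint64_t bufferAddr, uint32_t bufferSize) {
        if (g_jobsCreated >= kNumJobs) {
            return;                        // all jobs already created
        }
        CoreDlaJobItem &job = g_jobs[g_jobsCreated++];
        job.inputBufferAddr = bufferAddr;
        job.inputBufferSize = bufferSize;

        if (g_jobsCreated == kNumJobs) {
            // All jobs exist: change to the running state, point at the first
            // job to fill, and arm the mSGDMA to receive a buffer into it.
            pFillingImageJob = &g_jobs[0];
            armMsgdma(pFillingImageJob->inputBufferAddr,
                      pFillingImageJob->inputBufferSize);
        }
    }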

It then enters a loop, waiting for two types of events (sketched below):

  • A buffer is received through the mSGDMA, which is detected by a callback from an ISR.
  • A message is received from the HPS.
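
The following is a minimal sketch of that loop, assuming the mSGDMA ISR callback simply sets a flag and HPS messages are polled from a mailbox. The handleNewBuffer() and handleHpsMessage() helpers correspond to the two cases described in the next subsections; all names here are hypothetical.

    volatile bool g_bufferReceived = false;  // set by the mSGDMA ISR callback

    bool messagePending();                   // hypothetical: an HPS message is waiting
    void handleNewBuffer();                  // see "New Buffer Received"
    void handleHpsMessage();                 // see "Message Received"

    void streamControllerRunLoop() {
        for (;;) {
            if (g_bufferReceived) {          // event 1: buffer received through the mSGDMA
                g_bufferReceived = false;
                handleNewBuffer();
            }
            if (messagePending()) {          // event 2: message received from the HPS
                handleHpsMessage();
            }
        }
    }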

New Buffer Received

The job pointed to by pFillingImageJob is marked as now having a buffer.

If the next job in the circular buffer does not have a buffer, the pFillingImageJob pointer moves on to it and the mSGDMA is armed again to receive the next buffer at the address of that job.

If the next job does have a buffer, the Intel® FPGA AI Suite IP cannot keep up with the input buffer rate. In that case, pFillingImageJob does not move and the mSGDMA is armed to capture the next buffer at the same address, which means that the previously received input buffer is dropped and is not processed by the Intel® FPGA AI Suite IP.

Buffers that have not been dropped can now be scheduled for inference on the Intel® FPGA AI Suite IP provided that the IP has fewer than two jobs in its pipeline.
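
Continuing the earlier sketch, the buffer-received path might look as follows. The drop behavior mirrors the description above; tryScheduleJobs() is a hypothetical helper, sketched later under Message Received.

    void tryScheduleJobs();   // defined below under "Message Received"

    void handleNewBuffer() {
        // The job pointed to by pFillingImageJob now holds a complete buffer.
        pFillingImageJob->hasSourceBuffer = true;

        CoreDlaJobItem *next = nextJob(pFillingImageJob);
        if (!next->hasSourceBuffer) {
            // Move on: capture the next buffer at the next job's address.
            pFillingImageJob = next;
        }
        // Otherwise the IP is not keeping up, so pFillingImageJob stays put and
        // the next transfer overwrites (drops) the buffer that was just received.
        armMsgdma(pFillingImageJob->inputBufferAddr,
                  pFillingImageJob->inputBufferSize);

        // Schedule any filled jobs if the IP pipeline has room (see below).
        tryScheduleJobs();
    }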

Scheduling a job for execution means programming the CSR registers with the configuration address, the configuration size, and the input buffer address in DDR memory. This programming also sets the flag on the job so that the controller knows the job has been scheduled.
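
As a hedged illustration, the CSR programming might look like the sketch below. The register offsets, the writeCsr() accessor, and the globals holding the configuration address and size are assumptions for illustration, not the actual Intel® FPGA AI Suite IP CSR map.

    // Hypothetical CSR offsets and accessor, for illustration only.
    constexpr uint32_t kCsrConfigAddr = 0x00;
    constexpr uint32_t kCsrConfigSize = 0x08;
    constexpr uint32_t kCsrInputAddr  = 0x10;

    void writeCsr(uint32_t offset, uint64_t value);   // hypothetical accessor

    // Assumed to have been recorded when the network configuration was loaded.
    uint64_t g_configAddr = 0;
    uint32_t g_configSize = 0;

    void scheduleJobOnIp(CoreDlaJobItem &job) {
        writeCsr(kCsrConfigAddr, g_configAddr);          // configuration address
        writeCsr(kCsrConfigSize, g_configSize);          // configuration size
        writeCsr(kCsrInputAddr,  job.inputBufferAddr);   // input buffer address in DDR
        job.scheduledOnIp = true;    // mark the job as scheduled on the IP
    }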

Message Received

If the message is a ScheduleItem message, then an inference request has been scheduled by the inference application.

This request happens only if a previous inference request has been completed and rescheduled. The number of jobs in the Intel® FPGA AI Suite IP pipeline has therefore decreased by one, so another job can potentially be scheduled for inference execution, provided that it has an input buffer assigned.

If there are no jobs available with valid input buffers, then the Intel® FPGA AI Suite IP is processing buffers faster than they are being received by the mSGDMA stream, and consequently all input buffers are processed (that is, none are dropped).
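
Putting the pieces together, the running-state ScheduleItem handling and the tryScheduleJobs() helper referenced earlier might be sketched as follows. The limit of two jobs in the IP pipeline follows the text above; the counters, the way the completed job is identified and cleared, and the helper names are assumptions.

    uint32_t        g_jobsInIpPipeline = 0;          // jobs currently queued in the IP
    CoreDlaJobItem *g_pNextToSchedule  = &g_jobs[0]; // next candidate, circular order

    void tryScheduleJobs() {
        // Keep at most two jobs in the IP pipeline, and only schedule jobs that
        // already have an input buffer. If no job has a buffer, the IP is
        // outrunning the mSGDMA stream and no buffers are being dropped.
        while (g_jobsInIpPipeline < 2 &&
               g_pNextToSchedule->hasSourceBuffer &&
               !g_pNextToSchedule->scheduledOnIp) {
            scheduleJobOnIp(*g_pNextToSchedule);
            ++g_jobsInIpPipeline;
            g_pNextToSchedule = nextJob(g_pNextToSchedule);
        }
    }

    // A ScheduleItem received in the running state means a previous inference
    // request has completed and been rescheduled by the application.
    void onScheduleItemWhileRunning(CoreDlaJobItem &completedJob) {
        if (g_jobsInIpPipeline > 0) {
            --g_jobsInIpPipeline;        // one job has left the IP pipeline
        }
        // Assumption: the completed job's flags are cleared so it can be refilled.
        completedJob.hasSourceBuffer = false;
        completedJob.scheduledOnIp   = false;
        tryScheduleJobs();               // another job can potentially be scheduled
    }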