P-Tile Avalon® Streaming Intel® FPGA IP for PCI Express* User Guide

ID 683059
Date 6/26/2023
Public

6. Testbench

This chapter introduces the testbench for an Endpoint design example and its test driver module. You can create this design example using the design flows described in the Quick Start Guide chapter of the P-Tile Avalon Streaming Intel FPGA IP for PCI Express Design Example User Guide.

The testbench in this design example simulates up to a Gen4 x16 variant.

For an Endpoint variation, the testbench instantiates the design example with a P-Tile Endpoint and a Root Port BFM containing a second P-Tile (configured as a Root Port) that interfaces with the Endpoint. The Root Port BFM provides the following functions:

  • A configuration routine that sets up all the basic configuration registers in the Endpoint. This configuration allows the Endpoint application to be the target and initiator of PCI Express transactions.
  • A Verilog HDL procedure interface to initiate PCI Express* transactions to the Endpoint.

This testbench simulates the scenario of a single Root Port talking to a single Endpoint.

The testbench uses a test driver module, altpcietb_bfm_rp_gen4_x16.sv, to initiate the configuration and memory transactions. At startup, the test driver module displays information from the Root Port and Endpoint Configuration Space registers, so that you can correlate them with the parameters you specified in the Parameter Editor.
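The following fragment is a minimal sketch of how such configuration and memory transactions are typically driven from a test driver. The task names (ebfm_cfg_rp_ep, ebfm_barwr_imm, ebfm_barrd_wait) and the BAR_TABLE_POINTER constant follow the conventions of earlier Intel PCI Express BFMs and are assumptions for illustration only; refer to the generated altpcietb_bfm_rp_gen4_x16.sv sources for the actual procedure interface and argument lists.

// Illustrative only: task names and arguments are assumptions based on earlier
// Intel PCIe BFMs, not a verbatim excerpt of this testbench.
task automatic simple_bar0_test;
   localparam int EP_BUS       = 1;     // bus number assigned to the Endpoint
   localparam int EP_DEV       = 1;     // device number assigned to the Endpoint
   localparam int SCRATCH_ADDR = 'h100; // scratch offset in BFM shared memory (placeholder)
   begin
      // 1. Set up the Root Port and Endpoint Configuration Space registers and
      //    build the BAR address map, displaying the Endpoint configuration.
      ebfm_cfg_rp_ep(BAR_TABLE_POINTER, EP_BUS, EP_DEV, 512, 1, 0);

      // 2. Write one dword of immediate data to offset 0x0 of BAR0.
      ebfm_barwr_imm(BAR_TABLE_POINTER, 0, 32'h0, 32'hCAFE_F00D, 4, 0);

      // 3. Read the dword back into BFM shared memory and wait for the completion.
      ebfm_barrd_wait(BAR_TABLE_POINTER, 0, 32'h0, SCRATCH_ADDR, 4, 0);
   end
endtask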

Note: Standalone designs do not support the Root Port BFM. Only P-Tile Endpoint design examples generated by the IP support this BFM. For additional information about the P-Tile design examples, refer to the P-Tile Avalon Streaming Intel FPGA IP for PCI Express Design Example User Guide.
Note: The Intel testbench and Root Port BFM provide a simple method to do basic testing of the Application Layer logic that interfaces to the variation. This BFM allows you to create and run simple task stimuli with configurable parameters to exercise basic functionality of the Intel example design. The testbench and Root Port BFM are not intended to be a substitute for a full verification environment; corner cases and certain traffic profile stimuli are not covered. Refer to the items listed below for further details. To ensure the best verification coverage possible, Intel strongly suggests that you obtain commercially available PCI Express* verification IP and tools, in combination with extensive hardware testing.

Your Application Layer design may need to handle at least the following scenarios, which either cannot be created with the Intel testbench and the Root Port BFM or arise from limitations of the example design:

  • It cannot generate or receive Vendor Defined Messages. Some systems generate Vendor Defined Messages, and the Hard IP block simply passes them on to the Application Layer. Consequently, based on your application, you must decide whether to design the Application Layer to process them.
  • It can only handle received read requests that are less than or equal to the Maximum payload size option, which is set on the Device tab under PCI Express/PCI Capabilities in the Parameter Editor. Many systems are capable of handling larger read requests, which are then returned in multiple completions.
  • It always returns a single completion for every read request. Some systems split completions on every 64-byte address boundary.
  • It always returns completions in the same order the read requests were issued. Some systems generate the completions out-of-order.
  • It is unable to generate zero-length read requests, which some systems issue as flush requests following certain write transactions. The Application Layer must be capable of generating completions for these zero-length read requests (see the sketch after this list).
  • It uses fixed credit allocation.
  • It does not support parity.
  • It does not support multi-function designs.
  • It incorrectly responds to Type 1 vendor-defined messages with CplD packets.
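
As an illustration of the zero-length read (flush) scenario above, the sketch below shows one way an Application Layer might detect such a request. The signal names are placeholders, not the actual P-Tile Avalon streaming interface signals; a zero-length read carries a Length field of 1 DW with no byte enables asserted, and the Application Layer must still return a completion for it, even though the returned data carries no meaningful value.

// Placeholder signal names for illustration only; map them onto the header
// fields that your design extracts from the P-Tile Avalon-ST RX interface.
logic        rx_is_mrd;        // decoded Memory Read request
logic [9:0]  rx_length_dw;     // TLP header Length field, in dwords
logic [3:0]  rx_first_be;      // First DW byte enables
logic [3:0]  rx_last_be;       // Last DW byte enables

// A zero-length (flush) read: Length == 1 DW and no byte enables asserted.
wire rx_is_flush_read = rx_is_mrd               &&
                        (rx_length_dw == 10'd1)   &&
                        (rx_first_be  == 4'b0000) &&
                        (rx_last_be   == 4'b0000);

// The Application Layer must still schedule a completion for this request;
// silently dropping it stalls the Requester that issued the flush.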