5.6.1. Example Running the Object Detection Demonstration Application
You must download the following items:
- yolo-v3-tf from the OpenVINO™ Model Downloader. The command should look similar to the following:
python3 <path_to_installation>/open_model_zoo/omz_downloader \
    --name yolo-v3-tf \
    --output_dir <download_dir>
From the downloaded model, generate the .bin/.xml files:
python3 <path_to_installation>/open_model_zoo/omz_converter \
    --name yolo-v3-tf \
    --download_dir <download_dir> \
    --output_dir <output_dir> \
    --mo <path_to_installation>/model_optimizer/mo.py
Model Optimizer generates an FP32 version and an FP16 version. Use the FP32 version; see the sketch after this list for where the converted files typically land.
- Input video from https://github.com/intel-iot-devkit/sample-videos. The recommended video is person-bicycle-car-detection.mp4.
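If you use the Open Model Zoo converter as shown above, the converted IR files are written under precision-specific subdirectories. The following sketch assumes the usual public/yolo-v3-tf/FP32 layout, which this guide does not spell out, so confirm the path against the actual omz_converter output before using it as <path_to_model>:

# Sketch only: the public/yolo-v3-tf/FP32 subdirectory is an assumption based on
# the usual Open Model Zoo converter layout; adjust it if your output differs.
OUTPUT_DIR=<output_dir>                  # directory passed to omz_converter
MODEL_XML=$OUTPUT_DIR/public/yolo-v3-tf/FP32/yolo-v3-tf.xml
if [ -f "$MODEL_XML" ] && [ -f "${MODEL_XML%.xml}.bin" ]; then
    echo "FP32 IR ready: $MODEL_XML"
else
    echo "FP32 IR not found under $OUTPUT_DIR; check the omz_converter output" >&2
fi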
To run the object detection demonstration application, follow these steps:
- Ensure that the demonstration applications have been built with the following command:
build_runtime.sh -target_de10_agilex -build-demo
- Ensure that the FPGA has been configured with the Generic bitstream.
- Run the following command:
./runtime/build_Release/object_detection_demo/object_detection_demo \
    -d HETERO:FPGA,CPU \
    -i <path_to_video>/input_video.mp4 \
    -m <path_to_model>/yolo_v3.xml \
    -arch_file=$COREDLA_ROOT/example_architectures/AGX7_Generic.arch \
    -plugins $COREDLA_ROOT/runtime/plugins.xml \
    -t 0.65 \
    -at yolo
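As a convenience, the arguments above can be collected into a small launcher script. This is only a sketch: the paths are placeholders taken from the steps in this section, and it assumes that COREDLA_ROOT is already set in the environment and that the script is run from the directory that contains runtime/build_Release.

#!/bin/bash
# Sketch of a launcher for the object detection demonstration.
# All placeholder paths below must be adjusted to your environment.
MODEL=<path_to_model>/yolo_v3.xml
VIDEO=<path_to_video>/person-bicycle-car-detection.mp4
ARCH=$COREDLA_ROOT/example_architectures/AGX7_Generic.arch
PLUGINS=$COREDLA_ROOT/runtime/plugins.xml

# Fail early if any required file is missing.
for f in "$MODEL" "$VIDEO" "$ARCH" "$PLUGINS"; do
    [ -f "$f" ] || { echo "Missing required file: $f" >&2; exit 1; }
done

./runtime/build_Release/object_detection_demo/object_detection_demo \
    -d HETERO:FPGA,CPU \
    -i "$VIDEO" \
    -m "$MODEL" \
    -arch_file="$ARCH" \
    -plugins "$PLUGINS" \
    -t 0.65 \
    -at yolo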
Tip: High-resolution video input, such as from an HD camera, imposes considerable decoding overhead on the inference engine and can reduce overall system throughput. Use the -input_resolution=<width>x<height> option included in the demonstration application to adjust the input resolution to a level that balances video quality with system performance.
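For example, appending -input_resolution=640x360 to the command in the previous step decodes the input at 640x360 before inference (this value is only illustrative; choose one that suits your input video and accuracy requirements):

./runtime/build_Release/object_detection_demo/object_detection_demo \
    <options from the previous step> \
    -input_resolution=640x360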