Video and Image Processing Suite
The Intel FPGA Video and Image Processing Suite is a collection of Intel FPGA intellectual property (IP) functions that you can use to facilitate the development of custom video and image processing designs. These Intel FPGA IP functions are suitable for use in a wide variety of image processing and display applications, such as studio broadcast, video conferencing, AV networking, medical imaging, smart city/retail, and consumer electronics.
Video and Vision Processing Suite
The Video and Vision Processing Suite is the next-generation suite of IPs for video, image and vision processing. The IPs transport video using the Intel FPGA streaming video protocol, which uses the industry-standard AXI4-Stream protocol. A protocol converter IP allows interoperability with the Avalon Streaming video standard and the existing Video and Image Processing Suite IP or other IPs compliant with the Avalon streaming video protocol.
The Video and Image Processing Suite features cores that range from simple building-block functions, such as color space conversion, to sophisticated video scaling functions that can implement programmable polyphase scaling.
- All the VIP cores use an open, low-overhead Avalon® Streaming (Avalon-ST) interface standard so that they can be easily connected.
- You can use VIP cores to quickly build a custom video processing signal chain using the Intel® Quartus® Prime Lite or Standard Edition software and the associated Platform Designer.
- You can mix and match video and image processing cores with your own proprietary IP.
- You can use the Platform Designer to automatically integrate embedded processors and peripherals and generate arbitration logic.
- Capable of supporting 8K video at 60 fps and beyond.
Features
Video and Image Processing Suite Intel FPGA IP Functions
| Intel FPGA IP Function | Description |
|---|---|
| 2D FIR Filter | Implements a 3x3, 5x5, or 7x7 finite impulse response (FIR) filter on an image data stream to smooth or sharpen images. |
| Mixer (Alpha Blending Mixer and Mixer II) | Mixes and blends multiple image streams—useful for implementing text overlay and picture-in-picture mixing. |
| Avalon-ST Video Monitor | Captures video data packets without adding additional delay and connects to the Trace System IP for collecting video trace data. |
| Avalon-ST Video Stream Cleaner | Removes and repairs the non-ideal sequences and error cases present in the incoming data stream to produce an output stream that complies with the implicit ideal use model. |
| Chroma Resampler | Changes the sampling rate of the chroma data for image frames, for example from 4:2:2 to 4:4:4 or from 4:2:2 to 4:2:0. |
| Clipper | Provides a way to clip video streams; can be configured at compile time or at run time. |
| Clocked Video Interface | Converts clocked video formats (such as BT656, BT1120, and DVI) to Avalon-ST video, and vice versa. |
| Color Plane Sequencer | Changes how color plane samples are transmitted across the Avalon-ST interface. This function can be used to split and join video streams, giving control over the routing of color plane samples. |
| Color Space Converter | Converts image data between a variety of different color spaces, such as RGB to Y'CbCr. |
| Configurable Guard Bands | Compares each color plane in the input video stream to upper and lower guard band values. |
| Control Synchronizer | Synchronizes changes made to the video stream in real time between two functions. |
| Deinterlacer | Converts interlaced video formats to progressive video format using a motion-adaptive deinterlacing algorithm. Also supports "bob" and "weave" algorithms, low-angle edge detection, 3:2 cadence detection, and low latency. |
| Frame Buffer | Buffers video frames in external RAM. Supports double or triple buffering with a range of options for frame dropping and repeating. |
| Frame Reader | Reads video from external memory and outputs it as a stream. |
| Gamma Corrector | Allows video streams to be corrected for the physical properties of display devices. |
| Interlacer | Converts progressive video to interlaced video by dropping half the lines of incoming progressive frames. |
| Scaler II | HDL code-based scaler that uses less area than the first-generation Scaler in the Video and Image Processing Suite while delivering higher performance, and further reduces required resources with support for the 4:2:2 chroma sampling rate. Both linear and polyphase algorithms are available, along with an edge-adaptive algorithm that reduces blurriness while maintaining realism. |
| Switch | Allows video streams to be switched in real time. |
| Test Pattern Generator | Generates a video stream that contains still color bars for use as a test pattern. |
| Trace System | Monitors data captured by Avalon-ST Video Monitors and connects to the host System Console via JTAG or USB for display. |
Getting Started
Design Examples and Development Kits
The following design examples are available for you to run on the development kits.
| Product Name | Supported Devices/Development Kit | Daughtercard | Platform Designer Compliant | Provider |
|---|---|---|---|---|
|  |  |  | ✓ | Intel |
|  |  | None | ✓ | ALSE |
|  |  | None | ✓ | Terasic |
|  |  |  | ✓ | Intel |
IP Quality Metrics
| Basics | |
|---|---|
| Year IP was first released | 2009 |
| Latest version of Intel® Quartus® software supported | 18.1 |
| Status | Production |

| Deliverables | |
|---|---|
| Design file (encrypted source code or post-synthesis netlist) | Yes |
| Simulation model for ModelSim*-Intel® FPGA Edition | Yes |
| Timing and/or layout constraints | Yes |
| Testbench or design example | Yes |
| Documentation with revision control | Yes |
| Readme file | No |
| Any additional customer deliverables provided with IP | None |
| Parameterization GUI allowing end user to configure IP | Yes |
| IP core is enabled for Intel FPGA IP Evaluation Mode support | Yes |
| Source language | Verilog |
| Testbench language | Verilog |
| Software drivers provided | sw.tcl file |
| Driver operating system (OS) support | N/A |

| Implementation | |
|---|---|
| User interface | Clocked Video (into Clocked Video Input and out of Clocked Video Output); Avalon®-ST (all other datapaths) |
| IP-XACT metadata | No |

| Verification | |
|---|---|
| Simulators supported | ModelSim, VCS, Riviera-PRO, NCSim |
| Hardware validated | Arria® II GX/GZ, Arria® V, Intel® Arria® 10, Cyclone® IV ES/GX, Cyclone® V, Intel® Cyclone® 10, Intel® MAX® 10, Stratix® IV, Stratix® V |
| Industry standard compliance testing performed | No |
| If yes, which test(s)? | N/A |
| If yes, on which Intel FPGA device(s)? | N/A |
| If yes, date performed | N/A |
| If no, is it planned? | N/A |

| Interoperability | |
|---|---|
| IP has undergone interoperability testing | Yes |
| If yes, on which Intel FPGA device(s)? | Intel® Arria® 10, Intel® Cyclone® 10 |
| Interoperability reports available | N/A |
Gamma Corrector
The Gamma Corrector is used when you need to constrain pixel values to specific ranges based on the characteristics of the display the video is sent to. Some displays have a nonlinear response to the voltage of a video signal, so a remapping of pixel values is necessary to correct for the display. The Gamma Corrector uses a look-up table, programmable through an Avalon®-MM interface, to map pixel values to their corrected values.
In the example shown, a Y'CbCr input with 8-bit color values ranging from 0 to 255 is passed through the Gamma Corrector, which remaps the values to the range 16 to 240 before sending them to a Clocked Video Output.
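The remapping the core performs amounts to a per-pixel look-up table. Below is a minimal software model of that behavior; the function names and the gamma value of 2.2 are illustrative assumptions, and the hardware applies a table programmed over Avalon-MM rather than computing the curve itself.

```python
def build_gamma_lut(gamma=2.2, in_max=255, out_min=16, out_max=240):
    """Build a table mapping 0..in_max onto out_min..out_max
    with a power-law (gamma) response."""
    lut = []
    for v in range(in_max + 1):
        normalized = (v / in_max) ** (1.0 / gamma)
        lut.append(round(out_min + normalized * (out_max - out_min)))
    return lut

def gamma_correct(pixels, lut):
    """Remap each pixel value through the look-up table."""
    return [lut[p] for p in pixels]
```

Full-range inputs of 0 and 255 land on the limited-range bounds 16 and 240, matching the example above.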
2D FIR Filter
The 2D finite impulse response (FIR) filter IP core processes color planes serially, passing the pixel values through a FIR filter. The coefficients are written through an Avalon Memory Mapped (Avalon-MM) interface, which can be accessed by a Nios® II processor or by other peripherals in the Qsys design containing the video datapath.
An example block diagram using the 2D FIR filter is shown with a Clocked Video Input with RGB color planes formatted serially in order to pass through the FIR filter. Once the filtering is done, the Color Plane Sequencer is used to reformat the color planes from three planes in serial to three planes in parallel. With three color planes in parallel the video frame is ready to be transmitted externally through the Clocked Video Output core.
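For reference, a 3x3 FIR filtering pass over one color plane can be modeled in software as below. This is only a sketch: the edge-clamping behavior and the function names are assumptions, not the core's documented semantics.

```python
def fir_3x3(image, coeffs):
    """Apply a 3x3 FIR filter to one color plane (a list of rows).
    Border pixels are handled by clamping coordinates at the edge,
    one common choice."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(-1, 2):
                for kx in range(-1, 2):
                    yy = min(max(y + ky, 0), h - 1)
                    xx = min(max(x + kx, 0), w - 1)
                    acc += coeffs[ky + 1][kx + 1] * image[yy][xx]
            # Round and clamp back to the 8-bit pixel range.
            out[y][x] = min(max(round(acc), 0), 255)
    return out
```

An all-ones kernel scaled by 1/9 smooths the image; a center-weighted kernel with negative neighbors sharpens it.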
Alpha Blending Mixer and Mixer II
The Alpha Blending Mixer and Mixer II cores mix up to 12 and 4 image layers, respectively, and are runtime controllable through an Avalon-MM interface. From a Nios II processor (or another Avalon-MM master) you can dynamically control the position of each displayed layer. The Alpha Blending Mixer additionally lets you change the order in which the layers are overlaid and supports the display of transparent or semi-transparent pixels.
The Mixer II core includes a built in test pattern generator to use as a background layer. This is an added benefit as one of the four inputs does not need to be from a test pattern generator core. Another benefit of Mixer II is its ability to support 4K video.
An example block diagram of how the Mixer cores are used is shown with a clocked video input providing the active video feed on input 0, a background layer provided by the built-in Test Pattern Generator and a Frame Reader core that is reading static graphics like a company logo on input 1. These feeds are mixed together to provide a display of a video image with graphics and a background provided by the test pattern generator.
It is recommended that Mixer inputs are fed directly from a frame buffer unless it is certain that the respective frame rates of the inputs and output, together with the offsets of the input layers, cannot result in data starvation and a consequent lock-up of the video.
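The alpha blending itself is the standard per-pixel weighted sum, out = alpha·top + (1 − alpha)·bottom. A sketch with 8-bit alpha follows; the rounding scheme here is one common convention, not necessarily the core's exact arithmetic.

```python
def alpha_blend(bottom, top, alpha):
    """Blend two pixel streams. alpha is 0..255 per pixel,
    with 255 meaning the top layer is fully opaque.
    The +127 implements round-to-nearest for the /255 divide."""
    return [(a * t + (255 - a) * b + 127) // 255
            for b, t, a in zip(bottom, top, alpha)]
```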
Chroma Resampler
The Chroma Resampler is used to change chroma formats of video data. Video transmitted in Y'CbCr color space can subsample the Cb and Cr color components in order to save on data bandwidth. The Chroma Resampler provides the ability to go between 4:4:4, 4:2:2, and 4:2:0 formats.
An example shows a Clocked Video Input with Y'CbCr in 4:2:2 chroma format being upscaled by the Chroma Resampler to 4:4:4 format. This upscaled video format is then passed to a Color Space Converter which converts the video format from Y'CbCr to RGB to be sent out to the Clocked Video Output core.
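Upconverting 4:2:2 to 4:4:4 doubles the horizontal chroma sample count. The simplest scheme, sample repetition, is sketched below; the core can also apply filtered interpolation, so this models only the nearest-neighbor choice.

```python
def upsample_422_to_444(chroma_line):
    """Repeat each Cb or Cr sample so the chroma rate matches the
    luma rate (nearest-neighbor 4:2:2 -> 4:4:4 on one line)."""
    out = []
    for c in chroma_line:
        out.extend([c, c])
    return out
```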
Clipper II
The Clipper core is used when you want to pass on only fixed areas of a video feed. It can be configured at compile time or updated at run time through an Avalon-MM interface from a Nios II processor or another peripheral. The clipping region can be specified either as offsets from the edges or as a fixed rectangle.
An example shows two instances of the Clipper taking 400 x 400 pixel areas from their respective video inputs. These two clipped video feeds are then mixed together in a Mixer core along with other graphics and the built-in test pattern generator as a background. The Mixer has the ability to adjust the location of the video inputs, so you could position the two clipped video feeds side-by-side with the addition of frame buffers if necessary.
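Conceptually, the fixed-rectangle mode selects a window of the frame, as in this sketch (the name and the rows-of-pixels frame representation are illustrative):

```python
def clip_frame(frame, x0, y0, width, height):
    """Keep only the width x height window whose top-left
    corner is at (x0, y0); frame is a list of pixel rows."""
    return [row[x0:x0 + width] for row in frame[y0:y0 + height]]
```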
Clocked Video Input and Output Cores (I and II)
The Clocked Video Input and Output cores are used to capture and transmit video in various formats such as BT656 and BT1120.
Clocked Video Input cores convert incoming video data into Avalon Streaming (Avalon-ST) video formatted packet data, removing incoming horizontal and vertical blanking and retaining only active picture data. The core allows you to capture video at one frequency and pass on the data to the rest of your Qsys system which can be run at the same or another frequency.
An example of a Clocked Video Input is shown feeding video into a scaler block to upscale from 1280 x 720 to 1920 x 1080, after which it is sent to a Clocked Video Output core. If both input and output have the same frame rate, FIFOs in the Clocked Video Input and Clocked Video Output can be created to allow conversion to take place without a frame buffer.
Color Plane Sequencer
The Color Plane Sequencer is used to rearrange the color plane elements in a video system. It can convert color planes from serial to parallel transmission (or vice versa), duplicate video channels (for example, to drive a secondary video monitor subsystem), or split video channels (for example, to separate an alpha plane from three RGB planes output as four planes from a Frame Reader).
An example of the Color Plane Sequencer is shown with the 2D FIR filter video IP core which requires video to be input and output with the color planes in series. To transmit video out to the Clocked Video Output in the desired format, the color planes must be converted to parallel by the Color Plane Sequencer.
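The serial-to-parallel rearrangement can be pictured as regrouping a flat sample stream into per-pixel tuples, as in this illustrative sketch:

```python
def serial_to_parallel(stream, planes=3):
    """Regroup a serial sample stream R,G,B,R,G,B,... into
    per-pixel tuples, i.e. the planes transmitted in parallel."""
    return [tuple(stream[i:i + planes])
            for i in range(0, len(stream), planes)]

def parallel_to_serial(pixels):
    """The inverse: flatten parallel planes back to serial order."""
    return [sample for pixel in pixels for sample in pixel]
```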
Color Space Converter (I and II)
The Color Space Converter cores (CSC and Color Space Converter II) are used when you must convert between RGB and Y'CbCr color space formats. Depending on your video input and output format requirements, you may have to convert between different color formats.
An example of a Color Space Converter is shown with a Chroma Resampler upscaling Y'CbCr video, which is then passed to the Color Space Converter, converted into RGB color format, and sent to a Clocked Video Output.
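The conversion itself is a 3x3 matrix multiply plus offsets. As an example, here is the full-range BT.601 RGB to Y'CbCr matrix, one standard coefficient set such a converter can be configured with; the rounding is illustrative.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> Y'CbCr (8-bit, Cb/Cr centered
    on 128)."""
    y  =        0.299 * r    + 0.587 * g    + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r      - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)
```

White maps to (255, 128, 128) and black to (0, 128, 128), as expected for neutral colors.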
Control Synchronizer
The Control Synchronizer is used in conjunction with an Avalon-MM master controller, such as a Nios II processor or other peripheral. The Control Synchronizer is used to synchronize runtime configuration changes in one or more video IP blocks in alignment with the video data as it is changing. Some configuration changes can happen upstream from a video IP core while video frames are still passing through it in the previous format. In order to make the transition seamless and avoid glitching on screen, the Control Synchronizer is used to align the configuration switch-over exactly as the new incoming video frame data is arriving at the core.
An example of the Control Synchronizer is shown with a Nios II processor configuring a Test Pattern Generator to change the frame size from 720p to 1080p. The Control Synchronizer receives the notification from the Nios II processor that video frame data will be changing soon, but holds off from reconfiguring the Clocked Video Output until the new frames pass through the Frame Buffer to the Control Synchronizer. The Control Synchronizer reads the control data packets of the frame to determine if it corresponds to the new configuration, and then updates the Clocked Video Output core to the new settings, making the resolution change on the video output seamless.
Deinterlacer (I and II) and Broadcast Deinterlacer
The Deinterlacer cores (Deinterlacer, Deinterlacer II and Broadcast Deinterlacer) convert interlaced video frames into progressive scan video frames. There are multiple algorithms for how to deinterlace video to choose from, depending on the desired quality, logic area used and available external memory bandwidth.
An example of how the Deinterlacer core is used is shown with a Clocked Video Input receiving interlaced frames and passing through the Deinterlacer, which transacts with an external memory and Frame Buffer core. After deinterlacing the video into progressive scan format, it is sent out through a Clocked Video Output core.
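The two simplest algorithms mentioned above, "weave" and "bob", can be modeled as follows. This is only a sketch: real cores typically interpolate for bob rather than repeating lines, and motion-adaptive modes blend the two per pixel.

```python
def weave(top_field, bottom_field):
    """'Weave': interleave the lines of two fields into one
    progressive frame. Sharp for static content; moving content
    shows combing artifacts."""
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)
        frame.append(b)
    return frame

def bob(field):
    """'Bob': line-double a single field. No combing, but only
    half the vertical detail."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)
    return frame
```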
Frame Buffer (I and II)
The Frame Buffer and Frame Buffer II cores are used to buffer progressive and interlaced video fields and can support double or triple buffering with a range of options for dropping and repeating frames. In cases such as deinterlacing video, changing the frame rate of video, or sometimes mixing of video, a Frame Buffer is necessary.
An example of how the Frame Buffer is used is shown with a case where a Clocked Video Input core is receiving video at 30 frames per second (fps), and needs to convert it to 60 fps. The Frame Buffer core is used to buffer multiple frames and supports repeating frames, so the frame rate is able to be converted to 60 fps and is then transmitted out through a Clocked Video Output core.
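The 30 fps to 60 fps conversion in this example works by repeating frames; the reverse direction drops them. A minimal model of that behavior is below; the index arithmetic is illustrative, whereas a real triple-buffered core simply outputs the most recently completed input frame at each output instant.

```python
def convert_frame_rate(frames, in_fps, out_fps):
    """Repeat (out_fps > in_fps) or drop (out_fps < in_fps) frames
    so the output sequence plays for the same wall-clock time."""
    n_out = len(frames) * out_fps // in_fps
    return [frames[i * in_fps // out_fps] for i in range(n_out)]
```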
Frame Reader
The Frame Reader core reads video frames stored in external memory and outputs them as an Avalon-ST video stream. The data is stored as raw video pixel values only.
An example is shown using the Frame Reader to get company logo graphics to overlay on another video stream and merging the layers together through a Mixer core. From there the merged video is sent out to a Clocked Video Output core. The mixer can optionally be configured to include an alpha channel. In this case the frame reader could be configured to read three color planes and one alpha plane, which could be “split” out using a color space converter (not shown) before being input to the Mixer.
Scaler II
The Scaler II core is used to scale a video frame up or down in size. It supports multiple algorithms including nearest neighbor, bilinear, bicubic, and polyphase/Lanczos scaling. On-chip memory is used for buffering video lines used for scaling, with higher scaling ratios requiring more storage.
An example of the Scaler II core is shown taking a 720p video frame size from a Clocked Video Input and scaling it to 1080p and sending to a Clocked Video Output.
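As a reference for what the simplest of these algorithms computes, here is a bilinear scaler for one color plane. This is a software sketch only: the hardware streams through line buffers rather than indexing whole frames, and its coefficient precision differs.

```python
def scale_bilinear(image, out_w, out_h):
    """Bilinear resize of one color plane (list of pixel rows).
    Each output pixel is a weighted average of the four nearest
    input pixels."""
    in_h, in_w = len(image), len(image[0])
    out = []
    for oy in range(out_h):
        y = oy * (in_h - 1) / max(out_h - 1, 1)
        y0 = int(y); y1 = min(y0 + 1, in_h - 1); fy = y - y0
        row = []
        for ox in range(out_w):
            x = ox * (in_w - 1) / max(out_w - 1, 1)
            x0 = int(x); x1 = min(x0 + 1, in_w - 1); fx = x - x0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(round(top * (1 - fy) + bot * fy))
        out.append(row)
    return out
```

Polyphase/Lanczos scaling replaces the 2x2 neighborhood with a wider, programmable tap window, which is why higher-quality modes need more line storage.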
Switch (I and II)
The Switch cores connect up to twelve input video streams to up to twelve output video streams. The Switch does not merge or duplicate the video streams; it lets you change the routing from input ports to output ports. You do not have to connect every output port; leave an output unconnected if you do not need to monitor that stream. Control of the Switch is through an Avalon-MM interface accessible by a Nios II processor or another Avalon-MM master peripheral.
An example of the Switch is shown with a Clocked Video Input and a Test Pattern Generator feeding two ports on a Switch. The second Switch output port is left unconnected, and the Nios II processor controls which of the two feeds is sent to the port connected to the Clocked Video Output for display.
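The routing is a simple crossbar with no merging: each output port is either bound to one input port or left unconnected. A sketch follows; the dict-based routing map is illustrative, whereas in hardware the map is written over the Avalon-MM interface.

```python
def switch_route(inputs, routing, num_outputs):
    """routing maps output port -> input port. Outputs with no
    mapping carry no stream (None); one input may feed several
    outputs."""
    return [inputs[routing[o]] if o in routing else None
            for o in range(num_outputs)]
```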
Test Pattern Generator II
The Test Pattern Generator core allows you to generate a number of images to quickly test your video interface. The core is configurable for many different image sizes, as well as RGB and YCbCr color formats.
You can use a Test Pattern Generator core along with a Clocked Video Output core to quickly get your system's video interface verified. With your desired video specifications in hand, completing a design takes only minutes to quickly validate the interface is able to generate an image on an external display.
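A software model of a still color-bar pattern like the one the core generates is shown below. The eight 100% bars are the classic ordering; the exact palette and layout the core produces may differ.

```python
def color_bars(width, height):
    """Generate a frame of eight vertical 100% color bars
    (white, yellow, cyan, green, magenta, red, blue, black)
    as rows of RGB tuples."""
    bars = [(255, 255, 255), (255, 255, 0), (0, 255, 255), (0, 255, 0),
            (255, 0, 255), (255, 0, 0), (0, 0, 255), (0, 0, 0)]
    row = [bars[min(x * len(bars) // width, len(bars) - 1)]
           for x in range(width)]
    return [list(row) for _ in range(height)]
```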
Avalon-ST Video Monitor
The Avalon-ST Video Monitor is a core that can be inserted in series with your video datapath that reads Avalon-ST video packet information and provides diagnostic data to the Trace System. The Video Monitor is inserted where you want to probe the video datapath for analysis and statistics information. When combined with the Trace System core and connected externally through a debug port such as JTAG or through an Intel FPGA Download Cable, you can get greater visibility into the video system's behavior. You can use System Console as the virtual platform to display this information.
An example shows the Avalon-ST Video Monitor inserted before and after a Color Plane Sequencer. These are used to monitor video packet information coming from the Clocked Video Output and from the Color Plane Sequencer. The Video Monitor does not alter the video data as it is passed through the core. The Video Monitors are connected to the Trace System, which is accessed via JTAG in this case.
Trace System
The Trace System is used to access the Avalon-ST Video Monitor cores inserted into a design for video diagnostic information. Multiple Video Monitor cores can connect to a single Trace System controller. The Trace System connects to a host over a debug interface, typically JTAG or, if available, an Intel FPGA Download Cable.
An example shows the Trace System used with a couple of Avalon-ST Video Monitor cores inserted before and after a Color Plane Sequencer. The Video Monitors are connected to the Trace System, which is accessed via JTAG in this case.
Additional Resources
Find IP
Find the right Altera® FPGA Intellectual Property core for your needs.
Technical Support
For technical support on this IP core, please visit Support Resources or Intel® Premier Support. You may also search for related topics on this function in the Knowledge Center and Communities.
IP Evaluation and Purchase
Evaluation mode and purchasing information for Altera® FPGA Intellectual Property cores.
IP Base Suite
Free Altera® FPGA IP Core licenses with an active license for Quartus® Prime Standard or Pro Edition Software.
Design Examples
Download design examples and reference designs for Altera® FPGA devices.
Contact Sales
Get in touch with sales for your Altera® FPGA product design and acceleration needs.