FPGAi
Accelerate AI enablement of your custom solution. Learn how FPGAi empowers you to deploy AI in your custom solution with the low latency, energy efficiency, and agility needed for continuous innovation.
Leading a New Era in AI with Altera FPGAs
Fusing FPGAs and AI is not just an evolution — it's a revolution.
Altera is delivering the new era of FPGAi with high-performance, AI-infused FPGA fabric. Our tightly integrated programmable logic and AI enable real-time data adaptation and autonomous decision-making, equipping systems to handle next-generation complexity.
Why FPGAs for AI
AI workloads are becoming more complex and demanding, so choosing the right hardware for AI acceleration is critical. Learn how FPGAs can offer powerful and flexible solutions for AI applications.
Why FPGAs Are Good for Implementing Edge AI and Machine Learning Applications
Read about emerging use cases of FPGA-based AI inference at the edge and in custom AI applications, and about Intel's software and hardware solutions for edge FPGA AI.
FPGA vs. GPU for Deep Learning
While no single architecture works best for all machine and deep learning applications, FPGAs can offer distinct advantages over GPUs and other types of hardware.
Quantized Neural Networks for FPGA Inference
Low-precision quantization for neural networks meets AI application requirements by delivering greater throughput in the same footprint or by reducing resource usage.
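To make the trade-off concrete, here is a minimal sketch (not Altera tooling) of symmetric per-tensor INT8 quantization, a scheme commonly used for FPGA inference; the weight values are invented for illustration:

```python
# Illustrative sketch: symmetric per-tensor INT8 quantization.
# One scale factor maps float weights onto the signed 8-bit range.

def quantize_int8(weights):
    """Map float weights to INT8 values plus a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0  # use the full INT8 range
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.031, 0.9, -0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now needs 8 bits instead of 32: a 4x footprint reduction,
# which is where "greater throughput in the same footprint" comes from.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

The worst-case round-trip error is bounded by half the scale factor, which is why INT8 inference often preserves accuracy well for suitably trained or calibrated models.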
Partners Accelerating AI at the Edge
Watch these videos to learn how Altera’s partners can help you accelerate AI workloads on FPGAs.
FPGAi Applications
Edge AI
FPGAs are especially suited for edge AI in industrial, medical, test and measurement, aerospace, defense, and broadcast applications. Integrated Arm processors, Nios® V soft processors, and support for the diverse I/O protocols found at the edge, coupled with deterministic low latency, low power, and long product lifecycles, give FPGAs additional advantages at the edge.
GenAI or Custom
The Agilex™ 7 FPGA M-Series can be used for custom transformer-based LLM inference, outperforming GPUs in power and size.
With 32 GB HBM2E offering 820 GBps and up to 512 GB DDR5 at 224 GBps, it's ideal for LLMs and KV caches. High-speed SERDES (116 Gbps), 800 GbE support, and PCIe 5.0 (64 GBps) ensure swift scaling and data transfer. The device's variable precision DSPs support AI inference-friendly formats like FP16, bfloat16, and INT8. Hyperflex™ architecture enables 500 MHz+ operation for fast AI inference.
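As a rough illustration of why these bandwidth figures matter for LLM inference (a sketch with assumed numbers, not a benchmark): single-stream transformer decoding streams the entire weight set from memory once per generated token, so peak token rate is bounded by bandwidth divided by model size. The 7B-parameter INT8 model below is hypothetical:

```python
# Back-of-envelope sketch (illustrative assumptions, not a benchmark):
# single-batch LLM decoding is memory-bound, so
#   peak tokens/s ~= memory bandwidth / model weight size.

HBM2E_BW_GBPS = 820   # GB/s, from the device figures above
DDR5_BW_GBPS = 224    # GB/s, from the device figures above

params_billion = 7    # hypothetical 7B-parameter model
bytes_per_param = 1   # INT8 weights
model_gb = params_billion * bytes_per_param

tokens_per_s_hbm = HBM2E_BW_GBPS / model_gb
tokens_per_s_ddr5 = DDR5_BW_GBPS / model_gb
print(f"HBM2E: ~{tokens_per_s_hbm:.0f} tok/s, DDR5: ~{tokens_per_s_ddr5:.0f} tok/s")
```

The sketch ignores KV-cache traffic, batching, and compute limits; it is only meant to show why memory-bound transformer workloads benefit from in-package HBM.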
AI Attach
With 800 GbE support, tailor-made solutions can be designed for AI cluster builders, with FPGAs acting as AI NICs that reduce data-ingest jitter and network congestion for GPUs during training or inference. Custom and open-standard options for scale-in and scale-out are supported.
FPGAs are great for AI pre-processing, where data is formatted and filtered to be efficiently used in AI training or inference.
FPGAs can accelerate private enterprise databases for fast data retrieval.
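The pre-processing role described above, formatting and filtering data ahead of training or inference, can be sketched in miniature (the record format and scaling below are assumptions for illustration, not a specific FPGA datapath):

```python
# Illustrative sketch (assumed record format): the kind of filter-and-format
# pre-processing done in the datapath before AI inference --
# drop malformed or out-of-range samples, scale valid ones into INT8.

def preprocess(samples, full_scale=4.0):
    """Keep in-range samples and map [-full_scale, full_scale] to INT8."""
    out = []
    for s in samples:
        if s is None or abs(s) > full_scale:      # filter: drop bad data
            continue
        out.append(round(s / full_scale * 127))   # format: scale to INT8
    return out

raw = [0.5, None, 3.9, -12.0, -1.0]   # invented sensor readings
print(preprocess(raw))
```

In hardware this kind of per-sample filter and scale runs at line rate in the fabric, so only clean, inference-ready data reaches the accelerator.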
Solution Capability: High-Performance FPGAs
The Agilex™ 7 FPGA M-Series, with 89 INT8 TOPS, in-package HBM2E memory (32 GB capacity, 820 GBps bandwidth), and hardened DDR5/LPDDR5 memory controllers (supporting 5600 Mbps), alleviates bottlenecks for memory-bound AI models such as generative AI LLM-based transformer models.
The Agilex™ 5 FPGA devices, with up to 56 INT8 TOPS, feature the first FPGA fabric infused with AI tensor blocks for higher compute density. Each tensor block can perform 20 BlockFP16 or INT8 multiply-accumulates in one clock cycle, a 5X increase in compute density over other devices in the Agilex FPGA portfolio.
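Functionally, "20 multiply-accumulates in one clock cycle" means one tensor block folds a 20-element dot product into a running accumulator each clock. A minimal software model (the lane count comes from the text; the wide accumulator and the sample values are assumptions):

```python
# Functional sketch of one AI tensor-block cycle as described above:
# 20 INT8 multiply-accumulates into a wider accumulator per clock.
# (Lane count from the text; accumulator width and data are assumptions.)

LANES = 20

def tensor_block_cycle(a, b, acc=0):
    """One clock: acc += dot(a, b) over 20 INT8 lanes."""
    assert len(a) == len(b) == LANES
    for x, y in zip(a, b):
        assert -128 <= x <= 127 and -128 <= y <= 127  # INT8 operands
        acc += x * y
    return acc

a = [3, -2] * 10   # 20 activation values (invented)
w = [1, 4] * 10    # 20 weight values (invented)
acc = tensor_block_cycle(a, w)
print(acc)
```

As an illustrative figure (not a device spec): at the 500 MHz+ fabric speeds noted above, 20 MACs per clock works out to roughly 20 billion operations per second per block, counting multiply and add separately.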
Developer Usability: Seamless Pre-trained Model Conversion & Auto-optimization for FPGA Resources
FPGA AI Suite offers push-button conversion of pre-trained models to AI inference IP using OpenVINO. The suite's auto-optimizer tool sweeps the design space for the optimal implementation of an AI model.
Software emulation of the FPGA AI Suite IP is accessible through the OpenVINO plugin interface, enabling quick evaluation of the FPGA AI IP's accuracy without the need for hardware (available for Agilex™ 5 FPGA only).
FPGA AI Suite integrates Quartus® Prime Design Software and Platform Designer to simplify the incorporation of AI inference IP.
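The design-space sweep mentioned above can be pictured with a toy search. This is illustrative only: the candidate configurations, resource numbers, and selection criterion are invented, not the FPGA AI Suite's actual algorithm:

```python
# Toy design-space sweep (not the FPGA AI Suite algorithm): enumerate
# candidate IP configurations, keep those fitting a resource budget,
# and pick the highest-throughput survivor.

# Hypothetical candidates: (parallel lanes, DSP blocks used, est. fps)
candidates = [
    {"lanes": 8,  "dsps": 600,  "fps": 110},
    {"lanes": 16, "dsps": 1200, "fps": 205},
    {"lanes": 32, "dsps": 2400, "fps": 380},
    {"lanes": 64, "dsps": 4800, "fps": 640},
]

DSP_BUDGET = 2500  # hypothetical device resource limit

feasible = [c for c in candidates if c["dsps"] <= DSP_BUDGET]
best = max(feasible, key=lambda c: c["fps"])
print(best)
```

The real tool evaluates many more dimensions (precision, buffering, memory layout), but the shape of the problem is the same: prune by device resources, then rank by performance.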
Application Agility: Enable Continuous Innovation
Engineers can craft and evolve AI solutions with FPGAs to stay at technology's cutting edge by using the devices' reprogrammability, extended product lifecycles, and versatile I/O options, which enable continuous innovation and adaptation.
FPGAs are inherently suited for AI. Their thousands of DSP blocks, flexible memory hierarchies, and broad I/O support allow designs to be customized and AI networks to be built from the ground up for optimal performance.
AI models of various sizes can be implemented efficiently—from power-efficient TinyML to medium and large models at the edge and GenAI LLM Transformer models in data centers.
Explore Resources to Get You Started
Intel® FPGA AI Suite
Speed up your FPGA development for AI inference using frameworks such as TensorFlow or PyTorch together with the OpenVINO toolkit, while leveraging robust, proven FPGA development flows with Intel Quartus Prime Software.
Learn more
Intel® Distribution of OpenVINO Toolkit
An open-source toolkit that makes it easier to write once and deploy anywhere.
Learn more
Need More Information?
Let us know how we can help with your questions.
Contact us