Developer Zone
Topics & Technologies
Featured Software Tools
Intel® Distribution of OpenVINO™ Toolkit
Run AI inferencing, optimize models, and deploy across multiple platforms.
Intel® oneAPI Toolkits
Use one programming model across heterogeneous architectures on all platforms.
Intel® Graphics Performance Analyzers
Identify and troubleshoot performance issues in games using system, trace, and frame analyzers.
Intel® Quartus® Prime Design Software
Design for Intel® FPGAs, SoCs, and complex programmable logic devices (CPLDs), from design entry and synthesis to optimization, verification, and simulation.
Get Your Software & Development Products
Try, buy, or download directly from Intel and popular repositories.
Documentation
Get started with these key categories. Explore the complete library.
Explore Our Design Services
Intel® Solutions Marketplace
Engineering services offered include FPGA (RTL) design, FPGA board design, and system architecture design.
Webinar: Enable High-Performance Execution of Graph Neural Networks on Intel® NPUs
June 11, 2025 | 4:00 PM
Intel® AI PCs provide an ideal platform for graph neural network (GNN) workloads with powerful acceleration from built-in neural processing units (NPUs). GNNs are crucial for tasks like retrieval-augmented generation (RAG) in LLMs and event-based vision tasks. However, running GNNs involves irregular memory access and control-heavy computations, leading to high inference latency. Discover how GraNNite, a hardware-aware framework developed by Intel, optimizes GNN execution on Intel NPUs for unparalleled performance and efficiency.
June 11, 2025, 9:00 a.m. Pacific Daylight Time (PDT)
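The GraNNite framework itself is covered in the webinar rather than here; as a rough illustration of the deployment surface it targets, the sketch below uses the standard OpenVINO Python API to compile a hypothetical GNN (already exported to OpenVINO IR as model.xml, assumed to have a single static-shape input) for the NPU device and run one inference.

```python
# Minimal sketch: compile a model for an Intel NPU with OpenVINO and run one inference.
# Assumes the OpenVINO 2024+ Python API and a hypothetical GNN exported to IR ("model.xml").
import numpy as np
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # should include "NPU" on an AI PC

model = core.read_model("model.xml")            # hypothetical exported GNN
compiled = core.compile_model(model, "NPU")     # fall back to "CPU" if no NPU is present

# Build a dummy input matching the model's first input shape (static shape assumed).
input_port = compiled.input(0)
dummy = np.random.rand(*input_port.shape).astype(np.float32)

result = compiled([dummy])[compiled.output(0)]
print("Output shape:", result.shape)
```

Real GNNs typically take several inputs (node features, edge indices), so a production pipeline would feed each input port explicitly; the single dummy input here only demonstrates the compile-and-infer flow on the NPU device.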
Workshop: Deploy Agentic AI Applications with OPEA
June 12, 2025 | 4:00 PM
Enhance your expertise in designing and deploying multi-agent QA systems in the cloud, developing on the Intel® Tiber™ AI Cloud. This comprehensive workshop covers techniques built on the Open Platform for Enterprise AI (OPEA) agentic AI architecture, Intel® Gaudi® AI accelerators, Intel® Xeon® processors, and NVIDIA* GPUs. A blueprint approach, complemented by live coding and step-by-step implementations, enables participants to construct a production-ready multi-agent QA system tailored for enterprise deployment.
June 12, 2025, 9:00 a.m.–12:00 p.m. Pacific Daylight Time
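For readers unfamiliar with the multi-agent QA pattern the workshop builds on, the toy sketch below shows the basic shape of such a system: one agent retrieves context, another composes the answer. All names are hypothetical; this is plain Python for illustration only and does not use the OPEA microservice APIs.

```python
# Toy two-agent QA flow: a retrieval agent fetches context, an answer agent replies.
# Hypothetical classes for illustration; not the OPEA API.
from dataclasses import dataclass

@dataclass
class RetrievalAgent:
    corpus: dict[str, str]  # doc_id -> text

    def run(self, question: str) -> list[str]:
        # Naive keyword overlap stands in for a real vector retriever.
        terms = set(question.lower().split())
        return [text for text in self.corpus.values()
                if terms & set(text.lower().split())]

@dataclass
class AnswerAgent:
    def run(self, question: str, context: list[str]) -> str:
        # A production system would call an LLM serving endpoint here
        # (e.g., a model hosted on Gaudi or Xeon); this simply echoes the context.
        joined = " ".join(context) or "no supporting documents found"
        return f"Q: {question}\nA (based on retrieved context): {joined}"

def answer(question: str, corpus: dict[str, str]) -> str:
    retriever = RetrievalAgent(corpus)
    responder = AnswerAgent()
    return responder.run(question, retriever.run(question))

if __name__ == "__main__":
    docs = {"d1": "OPEA provides composable microservices for enterprise AI.",
            "d2": "Intel Gaudi accelerators target deep learning training and inference."}
    print(answer("What does OPEA provide?", docs))
```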