Developer Zone
Topics & Technologies
Featured Software Tools
Intel® Distribution of OpenVINO™ Toolkit
Run AI inferencing, optimize models, and deploy across multiple platforms.
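For a sense of what that workflow looks like in code, here is a minimal sketch using the OpenVINO Python API: load a model, compile it for an available device, and run one synchronous inference. The model path (model.xml), the AUTO device choice, and the random input are placeholder assumptions, and the sketch presumes a single, statically shaped model input.

```python
# Minimal OpenVINO inference sketch (placeholder model path and dummy input).
import numpy as np
import openvino as ov

core = ov.Core()                              # enumerates available devices (CPU, GPU, NPU, ...)
model = core.read_model("model.xml")          # placeholder path to an OpenVINO IR model
compiled = core.compile_model(model, "AUTO")  # let the runtime pick the best device

# Build a dummy input matching the model's declared (static) input shape.
shape = list(compiled.input(0).shape)
dummy = np.random.rand(*shape).astype(np.float32)

result = compiled([dummy])[compiled.output(0)]  # one synchronous inference
print("Output shape:", result.shape)
```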
Intel® oneAPI Toolkits
One programming model for heterogeneous architectures, spanning CPUs, GPUs, FPGAs, and other accelerators.
Intel® Graphics Performance Analyzers
Identify and troubleshoot performance issues in games using system, trace, and frame analyzers.
Intel® Quartus® Prime Design Software
Design for Intel® FPGAs, SoCs, and complex programmable logic devices (CPLDs), from design entry and synthesis to optimization, verification, and simulation.
Get Your Software & Development Products
Try, buy, or download directly from Intel and popular repositories.
Documentation
Get started with these key categories. Explore the complete library.
Explore Our Design Services
Intel® Solutions Marketplace
Engineering services offered include FPGA (RTL) design, FPGA board design, and system architecture design.
KubeCon + CloudNativeCon Europe
The Cloud Native Computing Foundation* flagship conference gathers adopters and technologists from leading open source and cloud-native communities. KubeCon + CloudNativeCon is the premier vendor-neutral cloud-native event that brings together the industry’s most respected experts and key maintainers behind the most popular projects in the cloud-native ecosystem.
April 1–4, 2025; London, UK
Webinar: Use Local AI for Efficient LLM Inference
Build a large language model (LLM) application using the power of AI PC processing, tapping the native capabilities of Intel® Core™ Ultra processors for running AI locally. The session shows how to develop a Python* back end with a browser extension that compactly summarizes web page content. The exercise showcases the Intel® hardware and software that make it possible to run LLMs locally (a rough back-end sketch follows this listing).
April 16, 2025, 9:00 a.m. Pacific Daylight Time (PDT)
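The webinar's exact Intel-specific stack is not reproduced here; as a rough sketch of the back-end shape it describes, the example below serves a locally running Hugging Face summarization pipeline behind a small Flask endpoint that a browser extension could POST page text to. The model name, route, port, and truncation limit are illustrative assumptions.

```python
# Illustrative local-summarization back end; not the webinar's exact stack.
# The model name, /summarize route, port 8000, and 3000-character truncation
# are placeholder assumptions.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)

# Load a compact summarization model once at startup; inference runs locally.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

@app.post("/summarize")
def summarize():
    text = request.get_json(force=True).get("text", "")
    # Truncate long pages to fit the model's context window; chunking omitted.
    result = summarizer(text[:3000], max_length=120, min_length=30, do_sample=False)
    return jsonify({"summary": result[0]["summary_text"]})

if __name__ == "__main__":
    app.run(port=8000)  # the browser extension POSTs page text to this endpoint
```

On Intel hardware, the same endpoint could load an OpenVINO-optimized model instead, in line with the session's focus on running LLMs locally with Intel® hardware and software.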
Workshop: Deploy Enterprise-Grade GenAI with OPEA on AWS*
Open Platform for Enterprise AI (OPEA) provides the building blocks for enterprise applications, including LLMs, prompt engines, and data stores, based on retrieval augmented generation (RAG) principles. This workshop guides you through the process of building and deploying enterprise-grade generative AI (GenAI) applications on Amazon Web Services (AWS)*. Explore the capabilities of OPEA to streamline development of a RAG pipeline using structured, repeatable techniques (an illustrative RAG sketch follows this listing).
April 22, 2025, 9:00 a.m. – 12:00 p.m. Pacific Daylight Time (PDT)
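OPEA's actual microservice components are not shown here; purely to illustrate the RAG principle the workshop builds on, the toy sketch below retrieves the passages most relevant to a question with a naive bag-of-words score and assembles them into a grounded prompt. The DOCS list, the scoring scheme, and the generate() placeholder are all illustrative assumptions, not OPEA APIs.

```python
# Toy retrieval-augmented generation (RAG) sketch; not OPEA's API.
# DOCS is a stand-in data store, score() is a naive bag-of-words overlap
# metric, and generate() marks where an LLM endpoint would be called.
from collections import Counter

DOCS = [
    "OPEA provides composable building blocks for enterprise GenAI pipelines.",
    "Retrieval augmented generation grounds model answers in retrieved documents.",
    "AWS offers managed compute that can host containerized GenAI services.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase tokens between the query and a document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    """Placeholder for the LLM call a prompt engine would dispatch."""
    return f"[LLM response to a {len(prompt)}-character grounded prompt]"

if __name__ == "__main__":
    print(generate(build_prompt("How does RAG keep GenAI answers grounded?")))
```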