Developer Zone
Featured Software Tools
Intel® Distribution of OpenVINO™ Toolkit
Run AI inference, optimize models, and deploy across multiple platforms.
Intel® oneAPI Toolkits
One programming model that spans heterogeneous architectures: CPUs, GPUs, FPGAs, and other accelerators.
Intel® Graphics Performance Analyzers
Identify and troubleshoot performance issues in games using system, trace, and frame analyzers.
Intel® Quartus® Prime Design Software
Design for Intel® FPGAs, SoCs, and complex programmable logic devices (CPLDs), from design entry and synthesis to optimization, verification, and simulation.
Get Your Software & Development Products
Try, buy, or download directly from Intel and popular repositories.
Documentation
Get started with these key categories. Explore the complete library.
Explore Our Design Services
Intel® Solutions Marketplace
Engineering services offered include FPGA (RTL) design, FPGA board design, and system architecture design.
Webinar: Use Local AI for Efficient LLM Inference
Build a large language model (LLM) application that runs AI locally on an AI PC, tapping the native capabilities of Intel® Core™ Ultra processors. The session shows how to develop a Python* back end, paired with a browser extension, that compactly summarizes web page content. The exercise showcases the Intel® hardware and software that make it possible to run LLMs locally.
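The webinar's own code is not reproduced here. As a rough stand-in, the sketch below wires a stdlib-only Python back end that accepts page text over HTTP and returns a summary, the shape a browser extension could POST to. The `summarize()` function is a naive extractive placeholder for the local LLM call the session builds; the endpoint path, field names, and scoring heuristic are all assumptions.

```python
import json
import re
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer


def summarize(text: str, max_sentences: int = 2) -> str:
    """Naive extractive summary: score sentences by word frequency and
    keep the top scorers in original order. A stand-in for a local LLM."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    if len(sentences) <= max_sentences:
        return text.strip()
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(ranked[:max_sentences])  # restore document order
    return " ".join(sentences[i] for i in keep)


class SummarizeHandler(BaseHTTPRequestHandler):
    """Accepts {"text": "..."} on POST and replies {"summary": "..."}."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"summary": summarize(payload["text"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), SummarizeHandler).serve_forever()
```

In the real application, `summarize()` would instead invoke an LLM running locally on the AI PC's CPU, GPU, or NPU.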
April 16, 2025, 9:00 a.m. Pacific Daylight Time (PDT)
Workshop: Deploy Enterprise-Grade GenAI with OPEA on AWS*
Open Platform for Enterprise AI (OPEA) provides the building blocks for enterprise applications, including LLMs, prompt engines, and data stores, based on retrieval augmented generation (RAG) principles. This workshop guides you through the process of building and deploying enterprise-grade generative AI (GenAI) applications for launching on Amazon Web Services (AWS)*. Explore the capabilities of OPEA to streamline development of a RAG pipeline using structured, repeatable techniques.
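OPEA packages these building blocks as composable microservices; none of its actual APIs appear below. The sketch only illustrates the RAG principle the workshop builds on: retrieve the context most relevant to a query, then assemble a grounded prompt for the LLM. The bag-of-words "embedding" is a toy stand-in for a real embedding model, and every name here is invented for illustration.

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words vector; production RAG uses a neural embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In an OPEA deployment, retrieval, embedding, and generation each run as separate services behind a pipeline, but the data flow matches this sketch.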
April 22, 2025, 9:00 a.m. – 12:00 p.m. Pacific Daylight Time (PDT)
Webinar: Power DeepSeek* Models and Applications on Intel® Hardware
Run DeepSeek* models on Intel® hardware to experience the advantages of open source freedom, advanced reasoning capabilities, and a lightweight footprint. This webinar uses the vLLM inference engine and the ChatQnA application to illustrate the essential qualities of DeepSeek. The session also demonstrates the low cost of running AI on Intel® Xeon® processors and Intel® Gaudi® AI accelerators, an efficient and cost-effective alternative to GPUs.
Gain familiarity with the Open Platform for Enterprise AI (OPEA), which powers the ChatQnA application and offers a useful way to show the fundamentals of constructing chatbots. The accompanying tutorials demonstrate that DeepSeek models can run successfully on relatively modest Intel hardware.
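The webinar's exact setup is not shown here. As a minimal sketch of serving a DeepSeek model with vLLM's offline-inference API (`LLM`, `SamplingParams`, and `generate` are real vLLM entry points), assuming vLLM is installed with a backend built for your Intel hardware. The model ID is an illustrative Hugging Face repository, not one named by the session, and the sampling values are arbitrary.

```python
def main() -> None:
    # Deferred import: vLLM is a heavyweight dependency and must be
    # built for the target backend (CPU, Gaudi, or GPU).
    from vllm import LLM, SamplingParams

    # Illustrative model choice; any vLLM-supported DeepSeek checkpoint works.
    llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
    params = SamplingParams(temperature=0.6, max_tokens=256)

    outputs = llm.generate(
        ["Explain retrieval augmented generation in two sentences."],
        params,
    )
    for request_output in outputs:
        print(request_output.outputs[0].text)


if __name__ == "__main__":
    main()
```

ChatQnA layers a full RAG pipeline on top of an engine like this, but the generation step reduces to a single `generate` call.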
April 23, 2025, 9:00 a.m. Pacific Daylight Time (PDT)