Tech.Decoded Library
Here you’ll find a continuously growing library of knowledge curated to help you get the most out of modern hardware, bolster your competitive edge, and get to market faster.
Introducing Intel® Tiber™ AI Cloud, built on the backbone of Intel® Tiber™ Developer Cloud and designed for production-scale AI deployments.
Learn best practices for preparing, analyzing, and running HPC applications on premises and in the cloud, including parallel code and system resources.
Learn how DP Technology achieved a 45.2% performance improvement using Alibaba's E-HPC Cloud Service and Intel® hardware and oneAPI software.
Find out how Google Cloud Platform* service sped up its HPC workloads using Intel® oneAPI Toolkits and Intel® Xeon® processors.
Learn and practice how to write efficient SYCL code in this advanced, on-demand workshop so you can deploy across CPUs, GPUs, FPGAs, and more.
AI in Production, Bigger Systems, and Services for Cloud-Native Workloads
Senior engineers from Megware Computer and Intel discuss what to consider when optimizing HPC clusters in heterogeneous computing environments.
Learn how the DeepSeek-R1 distilled reasoning model performs and see how it works on Intel hardware.
Learn to efficiently write SYCL code for heterogeneous computing, including memory management, data dependencies, and subgroups.
Explore how the AI Kit can streamline machine learning workloads across a hybrid cloud landscape with few (or no) code changes required.
Learn the basics of cross-architecture SYCL and how it fits into the oneAPI programming model for high performance across CPUs, GPUs, and FPGAs.
Learn how to take advantage of Intel®-optimized AI software—ML/DL frameworks and libraries—in popular cloud service provider platforms.
This on-demand workshop unpacks the tools and methods for achieving high-performance, portable SYCL code that can run across different CPUs and GPUs.
Research software engineers from ENCCS and Intel discuss the evolution of SYCL and its widespread use for cross-platform, heterogeneous programming.
Intel® Developer Cloud enables AI companies to build, train, and deploy innovative solutions on a leading-edge, cost-effective AI acceleration platform.
Get an introduction to the Intel® oneAPI Base & HPC Toolkit, the new workhorse product for high-performance, heterogeneous computing.
The Intel® Developer Cloud beta is now available. An evolution of the Intel® DevCloud, the new platform offers more pre-launch access to new silicon.
This quick reference matches the built-in accelerators in 4th Gen Intel® Xeon® processors with Intel® oneAPI and AI tools for different workloads.
Get a tour of the Intel® oneAPI Base & HPC Toolkit, including how existing Intel® Parallel Studio XE users can transition to the new toolset.
Roboflow achieved 10x gains in AI inference and 3x in data analytics, enhancing computer vision model performance with 4th gen Intel® Xeon® processors.
If your apps rely on complex math routines, this on-demand session is for you. Learn why you want to use the Intel® oneAPI Math Kernel Library.
Learn how a free, open-source development tool helps improve the features and performance of your volume rendering applications on Intel® processors.
An Intel physicist discusses what quantum computing can do for society and how Intel is working on solutions to make that future a reality. Listen.
Learn about the latest open source language initiatives through a panel discussion with Java experts.
Analyze, tune, and optimize AI and HPC applications and solutions to unlock the power of the 3rd Generation Intel® Xeon® Scalable processors.
Java developers and performance engineers: Overcome the challenges of analyzing and optimizing your cloud (public, private, or both) workloads.
A developer’s guide to getting started with generative AI using Intel AI technologies
Learn about the key functions of Intel® Integrated Performance Primitives, including calling conventions, usage models, and code samples.
Learn how Intel® oneAPI Math Kernel Library (oneMKL) can help you develop performant math-heavy applications, and solve heterogeneous GPU challenges.
Learn how the Intel® VTune™ Profiler Server can help HPC developers profile software on remote systems, especially transient and cloud systems.
Seekr achieved big business and performance gains after moving production workloads from on-prem to Intel® Developer Cloud at a fraction of the cost.
Get practical tips for developing AI applications in the cloud.
A guide to how Intel AI solutions support Falcon 3 models
Explore how an efficient ghost cell exchange mechanism for a domain decomposition code can effectively hide communication latency in MPI applications.
Learn how Intel® Inspector helps you analyze, detect, and debug threading and memory errors early in the design process.
Explore how small form-factor AI PCs can deftly run the Llama 3 70B parameter model locally and at lower cost than a workstation.
Find out how to take advantage of inter-node, intra-node, and accelerator-device-level parallelism using DPC++ and MPI hybrid programming.
Learn how to deploy high-availability machine learning solutions on Amazon Web Services using Intel’s new cloud-optimization module for Kubernetes.
Get the steps for running an open source Stable Diffusion model on Intel® Gaudi® AI accelerators to create your own unique piece of art.
oneAPI and AI Tools 2025.1 bolster visual AI inference, boost highly parallel compute productivity, and strengthen code quality assurance for all software developers, making your work easier and more productive.
Use AI techniques to create and modify art.
Learn a methodology (including the tools) to quickly and efficiently profile systems for in-depth power and thermal behavior.
Get an overview of oneAPI, including what it is, how it can solve heterogeneous programming challenges, and how to test-drive it in the cloud.
Spain-based tech company Codee enabled shift-left performance on its platform by using oneAPI tools to automate source code tasks on CPUs and GPUs.
Get a general blueprint for building an MLOps environment in the cloud to ensure performant model development and deployment.
Find out how AI-based Intel® Open Image Denoise delivers the most accurate 3D, photorealistic images available and reduces final frame render times.
TencentDB for MySQL and Intel built a performant MySQL deployment on Intel® Xeon® processors, optimized with Intel® oneAPI tools, achieving significant gains.
The oneAPI programming model offers an alternative to CUDA* vendor lock-in for accelerated parallel computing across HPC, AI, and more on CPUs and GPUs.
This article shows the initial performance results for Llama 3.2 on Intel's AI product portfolio, including Intel® Gaudi® AI accelerators, Intel® Xeon® processors, and AI PCs.
Data science engineers from Intel and Argonne discuss Aurora's large-scale HPC platform and how it will leverage AI for scientific discovery.
This on-demand workshop focuses on techniques to gain maximum optimization when programming on CPUs, GPUs, and FPGAs using a DPC++ library.
Validate Alibaba Cloud* Qwen2 LLMs with Intel AI solutions from data center to client and edge.
This tutorial takes an application from execution on CPU to GPU, and then uses VTune Profiler to analyze and optimize both the app and the system.
Get hands-on practice to achieve performance-portable SYCL code that can run across different CPUs and GPUs on the Intel® Tiber™ Developer Cloud.
Find out how Intel® VTune™ Profiler can help you optimize application performance, whether you are targeting one or multiple architectures.
Learn how to migrate your C/C++ code to SYCL with oneAPI so it not only targets FPGAs, but also GPUs and CPUs with the same code.
Learn how Digital Cortex is leveraging oneAPI to provide unprecedented access to compute across multiple architectures and platforms. Listen [37:07].
In this on-demand webinar, learn how to use Intel® VTune™ Profiler for full-spectrum performance analysis of applications and systems.
Learn how Intel® MPI Library helps you execute parallel code across multiple processors while allowing users to run their apps across multiple nodes.
Be up and running using C++ with SYCL in about 60 minutes, so you can code for multiarchitecture platforms using a single programming language.
Learn how to optimize your supply chain processes using open source data science technologies from Red Hat coupled with Intel-optimized AI software.
Intel Labs used oneAPI tools to accelerate and refine a single-cell RNA sequencing pipeline to run faster and cheaper on Intel® CPUs than on Nvidia GPUs.
UC Davis accelerates prompt-driven GenAI for data visualization using Intel® Extension for PyTorch* on Intel® GPUs.
Videos, podcasts, articles, and more on various topics like rendering, AI, and IoT help you improve your code and remove proprietary boundaries.
Get a solid introduction to the Julia programming language and how to use it for real-world scientific computing in this on-demand workshop.
Deepen your understanding of heterogeneous C++ programming so you can more effectively create and deploy multivendor applications.
Get a six-minute introduction to oneAPI, the Intel-led initiative to create a single, open programming model for diverse architectures.
Get an introduction to oneAPI—what it is, what it includes, and why it was created—and discover why you want to adopt it for heterogeneous computing.
Get the steps for profiling workloads on the 4th gen Intel® Xeon® CPU Max Series, the first and only x86-based processor with high-bandwidth memory.
Intel AI solutions support Meta Llama 3.1 launch
In this on-demand workshop, learn the techniques for profiling your application's performance and eliminating bottlenecks that bog down your code.
Two healthcare researchers discuss using the power of Intel hardware and oneAPI software to create HPC apps that help explain biological data. Listen.
Find out how tools in the Intel® AI Analytics Toolkit deliver exceptional performance boosts on Intel® Xeon® processors with almost no code changes.
Architects of compiler technology discuss how these development tools have emerged and what to expect in the future.
Get Spack recipes and guidelines to build critical HPC applications simply and efficiently while achieving optimal performance.
Learn the essential methods for using oneMKL to accelerate math-processing computations and offload application tasks on Intel® GPUs.
Learn how to analyze and tune your software for optimal performance after it is ported to your target, post-release GPU using Intel® VTune™ Profiler.
Learn how to design software for CPU to GPU offload and how to optimize the GPU code using the intuitive user interface of Intel® Advisor.
In this episode of Code Together, learn how the Intel® Liftoff program is helping early-stage startups find their markets and scale their innovation.
This on-demand session focuses on how to use the Intel® DevCloud to optimize your "vision-type" applications for GPUs.