Microsoft* and Intel collaborate to optimize AI workloads spanning AI PCs running DirectML, Microsoft Azure* cloud services, and multiplatform deployment with ONNX* (Open Neural Network Exchange) Runtime. They also collaborate on open source projects to streamline AI training, model optimization, and inference. This includes ONNX model optimization, Microsoft DeepSpeed* on Intel® GPUs and AI accelerators, and Web Neural Network API (WebNN) deployment on AI PCs.
Learn more about AI PC and Microsoft Azure offerings.
Intel and Microsoft Case Studies
Nuance: Deliver Imaging AI in the Clinical World (Microsoft Ignite Session Replay)
Intel Accelerates PadChest and fMRI Models on Microsoft Azure Machine Learning
"The Intel team's optimization of fMRI and PadChest models using Intel® Extension for PyTorch* and OpenVINO™ toolkit powered by oneAPI, leading to approximately 6x increase in performance, tailored for medical imaging, showcases best practices that do more than just accelerate running times. These enhancements not only cater to the unique demands of medical image processing but also offer the potential to reduce overall costs and bolster scalability."
— Alberto Santamaria-Pang, principal applied data scientist, Health AI at Microsoft
"We are elated to leverage the power of CPU instances provided by Microsoft Azure machine learning to enable developers and data scientists to take advantage of Intel® AI optimizations powered by Intel hardware. By integrating optimizations such as the Intel® Extension for Scikit-learn* powered by oneAPI into the platform, users can easily accelerate development and deployment of machine learning workloads for faster results and achieve a reduction in resource costs with just a few lines of code."
— Vijay Aski, partner director AI platform, Microsoft
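The "few lines of code" in the quote refers to the patching pattern used by Intel® Extension for Scikit-learn (the `sklearnex` package): patch once, then use the familiar scikit-learn API. A minimal sketch, with an illustrative KMeans workload:

```python
from sklearnex import patch_sklearn

# Swap in Intel-optimized implementations; must run before sklearn imports.
patch_sklearn()

import numpy as np
from sklearn.cluster import KMeans

# Illustrative data; any existing scikit-learn workload runs unchanged.
rng = np.random.default_rng(0)
X = rng.random((1000, 8))

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_.shape)  # (4, 8)
```

Because the patch replaces estimators in place, no other code in the pipeline changes.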
Using Microsoft Models, Tools, and Services on Intel® Platforms
Learn how to get started or how to get the most out of Microsoft software and models running on Intel-based platforms spanning data center, cloud, and AI PCs. These joint offerings are based on OpenVINO toolkit, AI Tools, and Intel® Gaudi® software.
Multiplatform
AI PC
Data Center and Cloud
- Azure Machine Learning-Based Federated Learning with Intel® Xeon® Platforms
- Train with scikit-learn on Azure Machine Learning
- Get Started with DeepSpeed on Intel® Gaudi® Accelerators
- DeepSpeed User Guide for Training on Intel Gaudi Accelerators
- Accelerate ONNX Models with oneAPI Deep Neural Network Library (oneDNN) Execution Provider
More Resources
AI Machine Learning Portfolio
Explore all Intel® AI content for developers.
AI Tools
Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performance Python* libraries.
Intel® AI Hardware
The Intel portfolio for AI hardware covers everything from data science workstations to data preprocessing, machine learning and deep learning modeling, and deployment in the data center and at the intelligent edge.