AI Frameworks and Tools
Software tools at all levels of the AI stack unlock the full capabilities of your Intel hardware. All Intel AI tools and frameworks are built on the foundation of a standards-based, unified oneAPI programming model that helps you get the most performance from your end-to-end pipeline on all your available hardware.
Productive, easy-to-use AI tools and suites span multiple stages of the AI pipeline, including data engineering, training, fine-tuning, optimization, inference, and deployment.
AI Tool Selector
Products are grouped to meet common AI workloads like machine learning, deep learning, and inference optimization. You can also customize your selection to download only the tools you need from conda*, pip, and Docker* repositories. A full offline installer is also available.
- Optimized frameworks, a model repository, and model optimization for deep learning
- Extensions for scikit-learn* and XGBoost for machine learning
- Accelerated data analytics through Intel contributions to Modin*, a drop-in replacement for pandas (see the sketch after this list)
- Optimized core Python* libraries
- Samples for end-to-end workloads
- Model compression with a framework-independent API
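As a quick illustration of the Modin item above, the sketch below swaps the pandas import for Modin to parallelize an existing pandas workflow across all available CPU cores. This is a minimal sketch that assumes Modin is installed (for example, via pip); the file name and column names are hypothetical.

```python
# Minimal sketch: change only the import and the rest of the
# pandas code runs unmodified, parallelized by Modin.
import modin.pandas as pd  # instead of: import pandas as pd

# "sales.csv" and its columns are hypothetical placeholders.
df = pd.read_csv("sales.csv")
summary = df.groupby("region")["revenue"].sum()
print(summary)
```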
OpenVINO™ Toolkit
Write Once, Deploy Anywhere
Deploy high-performance inference applications from device to cloud, powered by oneAPI. Optimize, tune, and run comprehensive AI inference using the included optimizer, runtime, and development tools. The toolkit includes:
- Repository of open source, pretrained, and preoptimized models ready for inference
- Model optimizer for your trained model
- Inference engine to run inference and output results on multiple processors, accelerators, and environments with write-once, deploy-anywhere efficiency
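To sketch the write-once, deploy-anywhere flow, the snippet below loads an IR model and compiles it for a target device with the OpenVINO Python API. The model path is a hypothetical placeholder, and the device string can be swapped (for example, to "GPU") without changing the rest of the code.

```python
import numpy as np
import openvino as ov  # OpenVINO Python API (2023+ namespace)

core = ov.Core()
# "model.xml" is a hypothetical IR file produced by the model optimizer.
model = core.read_model("model.xml")
# Change "CPU" to "GPU" or another device; the code stays the same.
compiled = core.compile_model(model, "CPU")

# Run inference on dummy data (assumes the model has static input shapes).
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
```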
Intel Gaudi Software
Speed Up AI Development
Get access to the Habana SynapseAI® development software stack, which supports the TensorFlow and PyTorch frameworks.
- Software optimized for deep learning training and inference
- Integrates popular frameworks: TensorFlow and PyTorch
- Provides a custom graph compiler
- Supports custom kernel development
- Enables an ecosystem of software partners
- Habana GitHub and community forum
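To show how an existing PyTorch model targets a Gaudi device, here is a minimal sketch assuming the Gaudi software stack and its `habana_frameworks` PyTorch bridge are installed; the model is a stand-in.

```python
import torch
import habana_frameworks.torch.core as htcore  # Gaudi PyTorch bridge

# A stand-in model; any torch.nn.Module works the same way.
model = torch.nn.Linear(128, 10).to("hpu")  # "hpu" is the Gaudi device
x = torch.randn(32, 128).to("hpu")

out = model(x)
htcore.mark_step()  # flush the accumulated graph in lazy execution mode
print(out.shape)
```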
BigDL
Scale your AI models seamlessly to big data clusters with thousands of nodes for distributed training or inference.
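As one sketch of how the same script moves from a laptop to a cluster, the snippet below uses BigDL's Orca API (an assumption about the installed BigDL version); switching `cluster_mode` retargets the workload without rewriting the model code.

```python
# Minimal sketch, assuming BigDL's Orca API is available.
# cluster_mode="local" runs on one machine; "yarn-client" or "k8s"
# targets a real cluster with the same code.
from bigdl.orca import init_orca_context, stop_orca_context

sc = init_orca_context(cluster_mode="local", cores=4)
# ... define and fit a distributed estimator around your model here ...
stop_orca_context()
```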
Intel® Distribution for Python*
Develop high-performance Python code with a set of essential scientific and Intel-optimized computational packages, including NumPy, SciPy*, Numba*, and others.
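As one example of what these packages enable, the sketch below uses Numba to JIT-compile a NumPy-style loop into parallel native code; the function itself is a made-up illustration.

```python
import numpy as np
from numba import njit, prange

# JIT-compile a NumPy-style loop to parallel native code.
@njit(parallel=True)
def scaled_sum(a, b, alpha):
    out = np.empty_like(a)
    for i in prange(a.shape[0]):  # iterations run across CPU cores
        out[i] = a[i] + alpha * b[i]
    return out

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)
print(scaled_sum(a, b, 0.5)[:3])
```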
Intel® AI Reference Models
Access a repository of pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source machine learning models optimized to run on Intel hardware.
Intel® Tiber™ Portfolio
Build and deploy AI at scale on managed, high-performance, cost-effective cloud resources, and get to market faster. With Intel’s cloud¹, develop and optimize AI models and applications, run small- and large-scale training and inference workloads, and deploy with the best price-performance.
¹ Formerly Intel® Developer Cloud
Build, deploy, run, manage, and scale edge and AI solutions on standard hardware with cloud-like simplicity. Built on extensive edge expertise, it’s designed for the most demanding edge use cases and to accelerate edge AI development while reducing costs.
Streamline the AI model lifecycle to create better models for your business and spend less time managing hardware and software. Use the MLOps platform² to automate retraining and create more efficient workflows for a greater impact from AI.
² Formerly cnvrg.io
Open source deep learning frameworks run with high performance across Intel devices through optimizations powered by oneAPI and open source contributions from Intel.
PyTorch*
PyTorch* is an AI and machine learning framework based on Python, and is popular for use in both research and production. Intel contributes optimizations to the PyTorch Foundation to accelerate PyTorch on Intel processors. The newest optimizations, as well as usability features, are first released in Intel® Extension for PyTorch* before they are incorporated into open source PyTorch.
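A minimal sketch of picking up those extension optimizations in an inference script, assuming Intel® Extension for PyTorch* is installed; the model is a placeholder.

```python
import torch
import intel_extension_for_pytorch as ipex

# A stand-in model; ipex.optimize applies operator fusion and
# layout optimizations tuned for Intel hardware.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
model = ipex.optimize(model)

with torch.no_grad():
    out = model(torch.randn(8, 64))
```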
TensorFlow*
TensorFlow* is used widely for AI development and deployment. Its primary API is based on Python*, and it also offers APIs for a variety of languages such as C++, JavaScript*, and Java*. Intel collaborates with Google* to optimize TensorFlow for Intel processors. The newest optimizations and features are often released in Intel® Extension for TensorFlow* before they become available in open source TensorFlow.
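Because Intel® Extension for TensorFlow* registers itself as a TensorFlow plugin, existing scripts typically need no code changes; the sketch below simply confirms the devices TensorFlow sees (Intel GPUs appear as "XPU" when the extension is installed, an assumption about the installed package).

```python
import tensorflow as tf

# With Intel® Extension for TensorFlow* installed, the plugin loads
# automatically; Intel GPUs are listed as "XPU" devices.
print(tf.config.list_physical_devices())

x = tf.random.uniform((4, 4))
print(tf.linalg.matmul(x, x))  # standard TensorFlow code, unchanged
```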
JAX
JAX is an open source Python library designed for complex numerical computations on high-performance devices like GPUs and TPUs (tensor processing units). It supports NumPy functions and provides automatic differentiation, as well as a composable function transformation system to build and train neural networks. JAX is supported on Intel processors through Intel® Extension for TensorFlow*.
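A small sketch of the two features named above, automatic differentiation and composable transformations; the loss function is a made-up example.

```python
import jax
import jax.numpy as jnp

# A NumPy-style function that JAX can differentiate and compile.
def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.jit(jax.grad(loss))  # compose grad with JIT compilation
w = jnp.ones((3,))
x = jnp.arange(6.0).reshape(2, 3)
print(grad_loss(w, x))
```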
DeepSpeed
DeepSpeed is an open source, deep learning optimization software suite. It accelerates training and inference of large models by automating parallelism, optimizing communication, managing heterogeneous memory, and compressing models. DeepSpeed supports Intel CPUs, Intel GPUs, and Intel® Gaudi® AI accelerators.
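To sketch how DeepSpeed wraps a model, the snippet below initializes an engine from a minimal config; the model and config values are illustrative, and in practice the script is started with the `deepspeed` launcher so the distributed environment is set up.

```python
import torch
import deepspeed

# A stand-in model; the config enables ZeRO stage-1 optimizer-state
# partitioning as one example of DeepSpeed's automated parallelism.
model = torch.nn.Linear(128, 10)
ds_config = {
    "train_batch_size": 8,
    "zero_optimization": {"stage": 1},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```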
PaddlePaddle*
This open source deep learning Python framework from Baidu* is known for its user-friendly, scalable operation. Built with Intel® oneAPI Deep Neural Network Library (oneDNN), this popular framework delivers fast performance on Intel® Xeon® Scalable processors along with a large collection of tools to help AI developers.
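A tiny forward pass as a sketch, assuming PaddlePaddle is installed; on CPU builds, oneDNN-accelerated kernels are used without any code changes.

```python
import paddle

# A minimal layer and batch; standard PaddlePaddle code runs on
# oneDNN-accelerated kernels on supported CPU builds.
layer = paddle.nn.Linear(16, 4)
x = paddle.randn([8, 16])
print(layer(x).shape)
```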
Classical machine learning algorithms in open source frameworks utilize oneAPI libraries. Intel also offers further optimizations in extensions to these frameworks.
scikit-learn*
scikit-learn* is one of the most widely used Python packages for data science and machine learning. Intel® Extension for Scikit-learn* provides a seamless way to speed up many scikit-learn algorithms on Intel CPUs and GPUs, both single- and multi-node.
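The sketch below shows the documented patching pattern, assuming the `scikit-learn-intelex` package is installed; the data is random filler.

```python
# Patch scikit-learn before importing estimators; patched algorithms
# fall back to stock scikit-learn when no optimized path applies.
from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.cluster import KMeans
import numpy as np

X = np.random.rand(10_000, 8)
labels = KMeans(n_clusters=4).fit_predict(X)
```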
XGBoost
XGBoost is an open source, gradient boosting, machine learning library that performs well across a variety of data and problem types. Intel contributes software accelerations powered by oneAPI directly to open source XGBoost, without requiring any code changes.
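Since the accelerations land upstream, the standard XGBoost API is all that's needed, as in the minimal sketch below; the synthetic data is illustrative.

```python
import numpy as np
import xgboost as xgb

# Stock XGBoost; oneAPI-powered accelerations are built in, so no
# Intel-specific code is required.
X = np.random.rand(1_000, 10)
y = (X[:, 0] > 0.5).astype(int)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train(
    {"objective": "binary:logistic", "tree_method": "hist"},
    dtrain,
    num_boost_round=20,
)
```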
oneAPI libraries deliver code and performance portability across hardware vendors and accelerator technologies.
Intel® oneAPI Deep Neural Network Library
Deliver optimized neural network building blocks for deep learning applications.
Intel® oneAPI Data Analytics Library
Help speed up big-data analysis by providing highly optimized algorithmic building blocks for all stages of data analytics.
Intel® oneAPI Math Kernel Library
Accelerate math-processing routines, increase the performance of science, engineering, and financial applications, and reduce development time.
Intel® oneAPI Collective Communications Library
Use this scalable, high-performance communication library for deep learning and machine learning workloads.
Develop, train, and deploy your AI solutions quickly with performance- and productivity-optimized tools from Intel.