All packages are for Linux* only. For compatibility details, refer to the System Requirements.
AI Frameworks and Tools
Software tools at all levels of the AI stack unlock the full capabilities of your Intel® hardware. All Intel® AI tools and frameworks are built on the foundation of a standards-based, unified oneAPI programming model that helps you get the most performance from your end-to-end pipeline on all your available hardware.
Docker containers are available only for preset bundles. To download additional components, choose a package manager and select your component.
All AI Tools are available for offline installation through a stand-alone conda*-based installer. Choose this option if your target installation environment is behind a firewall or you need to control exactly which versions are installed.
Download
Register your download to receive product updates plus hand-curated technical articles, tutorials, and training opportunities to help you optimize your code.
1. Download and Launch the Installer
a. (Optional) Download the installer package. If you have not yet downloaded the package via the link above, use the following command:
wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/f27e9e0e-ec27-4024-a4bf-b30c48c99564/l_AITools.2024.2.0.156.sh
b. Launch the command line installer:
sh l_AITools.2024.2.0.156.sh
Accept the license terms and enter the installation path.
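For unattended installation (for example, on a CI runner), Intel oneAPI-style offline installers generally accept silent-mode flags. The flags below follow those conventions and may differ for this bundle, so verify with `sh l_AITools.2024.2.0.156.sh --help` first. A sketch:

```shell
# Unattended install sketch; flags follow common Intel oneAPI installer
# conventions and may differ for this bundle -- check --help first.
sh l_AITools.2024.2.0.156.sh -a --silent --eula accept \
    --install-dir "$HOME/intel/oneapi"
```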
2. Set Up Your Environment
a. Activate the AI Tools base environment:
source $HOME/intel/oneapi/intelpython/bin/activate
For detailed instructions, refer to Get Started with the AI Tools.
b. (For GPU users) Set up your system for GPU development as described in Get Started with the AI Tools: GPU Users.
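Before the GPU setup, a quick generic sanity check can save debugging time. The sketch below is not an official Intel tool; it only checks for Linux render-node device files, whose presence suggests a GPU driver is loaded (it does not confirm that the oneAPI runtime can actually see the device):

```python
# Quick heuristic check for GPU render nodes on Linux.
# Presence of /dev/dri/renderD* suggests a GPU driver is loaded, but does
# not confirm that the oneAPI runtime can enumerate the device.
import glob

def find_render_nodes(pattern="/dev/dri/renderD*"):
    """Return the render-node device files matching the given glob pattern."""
    return sorted(glob.glob(pattern))

if __name__ == "__main__":
    nodes = find_render_nodes()
    if nodes:
        print(f"Found render nodes: {nodes}")
    else:
        print("No render nodes found; check that the GPU drivers are installed.")
```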
3. Run a Get Started Sample
After a successful installation, see Build and Run a Sample Project to start using the installed product.
Uncover valuable insights about your business and customers using libraries and tools optimized for Intel® architectures. Make informed, data-driven decisions with enhanced performance.
Accelerate your machine learning and data science pipelines with the power of open libraries optimized for Intel® architectures. Enhance the efficiency and speed of your machine learning tasks. Intel® Optimization for XGBoost* and Intel® Extension for Scikit-learn* can also be obtained through Intel® Distribution for Python*, which includes optimizations to additional packages to make Python applications more efficient.
Boost the performance of your single-node and distributed deep learning workloads on Intel hardware with Intel® Extension for TensorFlow* and Intel® Extension for PyTorch*.
Reduce model size and improve the speed of your deep learning inference deployments on Intel hardware.
This open source toolkit enables you to optimize a deep learning model from almost any framework and deploy it with best-in-class performance on a range of Intel processors and other hardware platforms.
A separate download is required.
Intel® Gaudi® Software
The Intel® Gaudi® AI accelerator is designed to maximize training throughput and efficiency, while providing developers with optimized software and tools that scale to many workloads and systems. Intel® Gaudi® software was developed with the end user in mind, providing versatility and ease of programming to address the unique needs of users’ proprietary models, while allowing for a simple and seamless transition of their existing models over to Intel® Gaudi® technology. The Intel Gaudi software enables efficient mapping of neural network topologies onto Intel Gaudi technology.
The software suite includes a graph compiler and runtime, Tensor Processor Core (TPC)* kernel library, firmware and drivers, and developer tools. Intel Gaudi software is integrated with PyTorch*, and supports DeepSpeed* for large language models (LLM) and performance-optimized Hugging Face* models for transformer and diffusion uses.
The links below provide access to the Docker* containers that include the full Intel Gaudi software stack and the PyTorch* framework. Using these Docker images to run models is recommended. Use the installation guide to learn how to run the Docker images or perform a manual installation on a bare-metal system. Refer to the Support Matrix for the latest versions of external software and drivers used in the Intel Gaudi software release.
Intel, the Intel logo, and Intel Gaudi technology are trademarks of Intel Corporation or its subsidiaries.
Customize your tool selections for conda and pip. Docker containers are not available for customizations.
a. Create and activate a new conda environment, replacing <your-env-name> with your preferred name for the environment:
conda create -n <your-env-name>
conda activate <your-env-name>
b. (Optional) To speed up dependency resolution, use libmamba as the solver; it is the default in the latest conda distribution. For older conda installations, set it with this command: conda config --set solver libmamba
For a detailed walk-through on setting up conda, see Set Up System Before Installation: conda.
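Putting the steps above together, a typical session might look like the sketch below. The channel URL and the `intel-extension-for-pytorch` package name are illustrative only; the selector generates the exact install command for your component choices:

```shell
# Create and activate a fresh environment, then enable the faster solver.
conda create -n aitools-env -y
conda activate aitools-env
conda config --set solver libmamba

# Illustrative component install; use the command generated by the selector
# for your actual selection (channel and package names may differ).
conda install -c https://software.repos.intel.com/python/conda \
    -c conda-forge intel-extension-for-pytorch -y
```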
Perform the following steps for CPU and GPU installation. GPU installations have one additional step.
For GPU optimizations, if selected, perform the following additional steps:
c. Install the latest GPU drivers separately as described in Intel® Software for General Purpose GPU Capabilities.
2. Install with conda*
If applicable, disregard a “ClobberError” message associated with installation paths. This error does not impact the functionality of the installed packages.
To verify that the AI Tools are properly installed, use the following commands:
Intel® Extension for PyTorch* (CPU): python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__);"
Intel® Extension for PyTorch* (GPU): python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
Intel® Extension for TensorFlow* (CPU): python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"
Intel® Extension for TensorFlow* (GPU): python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
Intel® Optimization for XGBoost*: python -c "import xgboost as xgb; print(xgb.__version__)"
Intel® Extension for Scikit-learn*: python -c "from sklearnex import patch_sklearn; patch_sklearn()"
Modin*: python -c "import modin; print(modin.__version__)"
Intel® Neural Compressor: python -c "import neural_compressor as inc; print(inc.__version__)"
After a successful installation, to start using the installed product, see Get Started Samples for AI Tools.
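Rather than running each verification one-liner by hand, a small helper script (not part of the AI Tools) can check every component at once. It attempts each import in a subprocess so that one failing component does not abort the rest; the module names mirror the commands listed above:

```python
# Helper that runs each verification import in a subprocess and reports
# which components import cleanly. Not an official Intel tool.
import subprocess
import sys

def check_import(module):
    """Return True if `python -c "import <module>"` succeeds in a subprocess."""
    result = subprocess.run(
        [sys.executable, "-c", f"import {module}"],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    modules = [
        "intel_extension_for_pytorch",
        "intel_extension_for_tensorflow",
        "xgboost",
        "sklearnex",
        "modin",
        "neural_compressor",
    ]
    for mod in modules:
        status = "OK" if check_import(mod) else "MISSING"
        print(f"{mod}: {status}")
```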
a. Prerequisite: If you do not have pip, install it using the Installation Instructions. After installation, make sure that you can run pip from the command line.
b. Create and activate a virtual environment, replacing <your-env-name> with your preferred name for the environment:
python3.10 -m venv <your-env-name>
source <your-env-name>/bin/activate
To install Python and define your environment, see Set Up System Before Installation: pip.
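Once the virtual environment is active, components are installed with ordinary pip commands. The package names below are the ones published on PyPI for these tools, but the selector generates the exact command (including any extra index URL) for your selection, so treat this as a sketch:

```shell
# Illustrative installs inside the activated virtual environment;
# use the exact command generated by the selector for your selection.
pip install --upgrade pip
pip install scikit-learn-intelex xgboost modin neural-compressor
```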
Perform the following steps for CPU and GPU installation. GPU installations have two additional steps.
For GPU optimizations, if selected, perform the following additional steps:
c. Install the 2024.2 version of Intel® oneAPI Base Toolkit for Linux*.
d. Install the latest GPU drivers separately as described in Intel® Software for General Purpose GPU Capabilities.
Docker* Containers
Before running the containers, install Docker as described in the Docker Installation Instructions.
2. Install a Docker Container
2. Install with pip
To run a preset container and a get-started sample, follow the instructions in Intel AI Tools Selector Preset Containers. There are several options to run a container on a CPU or GPU.
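Launching a preset container typically looks like the sketch below. The image name is a placeholder; the actual tags are listed in the Intel AI Tools Selector Preset Containers instructions:

```shell
# <preset-image:tag> is a placeholder -- substitute the preset container
# tag from the AI Tools Selector instructions.
# --device /dev/dri is only needed for GPU runs.
docker run -it --rm \
    --device /dev/dri \
    -v "$PWD":/workspace \
    <preset-image:tag> \
    bash
```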
To verify that the AI Tools are properly installed, use the following commands:
Intel® Extension for PyTorch* (CPU): python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__);"
Intel® Extension for PyTorch* (GPU): python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
Intel® Extension for TensorFlow* (CPU): python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"
Intel® Extension for TensorFlow* (GPU): python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
Intel® Optimization for XGBoost*: python -c "import xgboost as xgb; print(xgb.__version__)"
Intel® Extension for Scikit-learn*: python -c "from sklearnex import patch_sklearn; patch_sklearn()"
Modin*: python -c "import modin; print(modin.__version__)"
Intel® Neural Compressor: python -c "import neural_compressor as inc; print(inc.__version__)"
After a successful installation, to start using the installed product, see Get Started Samples for AI Tools.
cnvrg.io™
cnvrg.io™ is a full-service machine learning operating system. The platform enables you to manage all your AI projects from one place. Using cnvrg.io requires a separate license and download.
Note cnvrg.io is only available for pip packages.
Next Steps
Intel® AI Reference Models (formerly Model Zoo) repository contains links to pretrained models, sample scripts, best practices, and tutorials for many popular open source machine learning models optimized by Intel.
Working with Preset Containers document provides more information about preset containers and instructions on how to run them.
Additional Resources
Versions
Modin* (version 0.26.1), Intel® Extension for TensorFlow* - GPU (version 2.15), Intel® Extension for TensorFlow* - CPU (version 2.15), Intel® Extension for PyTorch* - CPU (version 2.2.0), Intel® Extension for PyTorch* - GPU (version 2.1.0), Intel® Optimization for XGBoost* (version 2.0.3), Intel® Extension for Scikit-learn* (version 2024.1.0), and Intel® Neural Compressor (version 2.4.1) have been updated to include functional and security updates. Users should update to the latest versions as they become available.
Docker License Information
By accessing, downloading, or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third-party software included with the Software Package. Preset containers are published under Apache License 2.0.
Support
Start-up support is available if there is an issue with the AI Tools Selector functionality.
Feedback Welcome
Share your thoughts about the preview version of this AI tools selector, and provide suggestions for improvement. Responses are anonymous, and Intel will not contact you unless you grant permission by providing an email address.