AI Tools Selector Guide

ID 785444
Updated 1/23/2025
Version 2025.0
Public

Use the AI Tools Selector for a streamlined installation of the Intel AI Tools, tailored to your preferred distribution channel and package selection. You can choose from the following options:

  • Presets
  • Custom selection of packages
  • Offline installer

Before proceeding with the installation of AI Tools, make sure your system meets the necessary System Requirements.

Set Up System Before Installation

Before installing AI tools via conda or pip, you need to set up your system as described below.

conda

Install conda

Set up conda with Miniforge:

  1. Download the appropriate Miniforge installer for Linux* OS:
    wget -q https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
  2. In your terminal, run:
    sh Miniforge3-Linux-x86_64.sh
  3. Delete the downloaded file:
    rm Miniforge3-Linux-x86_64.sh
  4. (Optional) To speed up execution, use libmamba as the solver. It is the default in the latest conda distribution (Miniforge); for older conda installations, run these commands to update conda and set libmamba:
    conda update -n base conda && \
    conda config --set solver libmamba

     

To verify that libmamba is set as the solver:

conda config --show solver

To learn more about Miniforge installation, see the Miniforge Repository.

Create Environment

Create and activate a virtual environment. If needed, replace my-env with your preferred environment name:

conda create -n my-env -y
conda activate my-env
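Before installing packages, it can help to confirm that the intended environment is actually active. After `conda activate`, conda exports the active environment's name in the `CONDA_DEFAULT_ENV` variable; the sketch below (an illustrative helper, not part of the AI Tools) checks it without requiring conda itself:

```python
import os

def active_conda_env(env=os.environ):
    """Return the name of the active conda environment, or None.

    CONDA_DEFAULT_ENV is exported by `conda activate`; its absence
    usually means no environment is active.
    """
    return env.get("CONDA_DEFAULT_ENV")

# Simulated checks with explicit dictionaries (no conda required):
print(active_conda_env({"CONDA_DEFAULT_ENV": "my-env"}))  # my-env
print(active_conda_env({}))  # None
```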

pip

Install Intel® oneAPI Base Toolkit (Ubuntu 22.04, GPU versions only)

Follow the steps below if you use the GPU versions of Intel® Extension for TensorFlow* or Intel® Extension for PyTorch* distributed via pip. Otherwise, skip to the Install Python (Ubuntu 22.04) section.

  1. Download the Intel® oneAPI Base Toolkit from the official Intel® repository:
    wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/dfc4a434-838c-4450-a6fe-2fa903b75aa7/intel-oneapi-base-toolkit-2025.0.1.46_offline.sh
  2. Install the Intel® oneAPI Base Toolkit using the bash script:
    sh ./intel-oneapi-base-toolkit-2025.0.1.46_offline.sh -a --silent --cli --eula accept
  3. Set the environment variables with the provided script:
    source /opt/intel/oneapi/2025.0/oneapi-vars.sh
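Sourcing oneapi-vars.sh exports a set of environment variables, including ONEAPI_ROOT. As a quick sanity check before running GPU workloads, you can test for that variable; the helper below is an illustrative sketch (the function name is not part of any Intel tooling), and it assumes ONEAPI_ROOT presence is a reasonable proxy for a sourced environment:

```python
import os

def oneapi_configured(env=os.environ) -> bool:
    """Return True if the oneAPI environment appears to be sourced.

    oneapi-vars.sh exports ONEAPI_ROOT among other variables; checking
    for it avoids importing any GPU framework just to test the setup.
    """
    return "ONEAPI_ROOT" in env

# Simulated checks (no oneAPI installation required):
print(oneapi_configured({"ONEAPI_ROOT": "/opt/intel/oneapi"}))  # True
print(oneapi_configured({}))  # False
```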

Install Python (Ubuntu 22.04)

  1. Install Python from apt (if you are the root user, omit sudo):
    sudo apt-get update && sudo apt-get install python3.11
  2. Verify that Python is installed:
    which python3.11
  3. Install venv to create a virtual environment:
    sudo apt-get install python3.11-venv

Create Environment

Create and activate a virtual environment:

python3.11 -m venv my-env
source my-env/bin/activate
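The two shell commands above can also be expressed programmatically with the standard-library venv module, which is occasionally useful in setup scripts. The sketch below creates an environment under a temporary directory so it is self-contained; with_pip=False keeps it fast and offline-friendly (for a real environment you would normally keep pip enabled):

```python
import os
import tempfile
import venv

def create_env(path: str) -> str:
    """Create a virtual environment at path and return the path."""
    builder = venv.EnvBuilder(with_pip=False, clear=True)
    builder.create(path)
    return path

env_dir = create_env(os.path.join(tempfile.mkdtemp(), "my-env"))
# A valid environment contains a pyvenv.cfg file at its root.
print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))  # True
```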

Install GPU drivers (Ubuntu 22.04)

If you use an Intel GPU, you need to install the GPU drivers separately, as described in the Ubuntu 22.04 section of the Intel® software for general purpose GPU capabilities document. Make sure you follow the instructions for the LTS version.

Install AI Tools

Presets

The presets represent combinations of AI tools for easy installation and use. The following presets are available for integration into your workflow:

  • Classical Machine Learning: Accelerate your Machine Learning and Data Science pipelines with the power of open libraries optimized for Intel architectures. Enhance the efficiency and speed of your ML tasks.
  • Deep Learning PyTorch* CPU: Boost the performance of your workloads, reduce model size, and improve the speed of your Deep Learning deployments on Intel® Xeon® processors with Intel® Extension for PyTorch*.
  • Deep Learning TensorFlow* CPU: Boost the performance of your workloads, reduce model size, and improve the speed of your Deep Learning deployments on Intel® Xeon® processors with Intel® Extension for TensorFlow*.
  • Deep Learning PyTorch* GPU: Boost the performance of your workloads, reduce model size, and improve the speed of your Deep Learning deployments on Intel® Data Center GPU Max Series with Intel® Extension for PyTorch*.
  • Deep Learning TensorFlow* GPU: Boost the performance of your workloads, reduce model size, and improve the speed of your Deep Learning deployments on Intel® Data Center GPU Max Series with Intel® Extension for TensorFlow*.
  • Deep Learning JAX* CPU: Reduce model size and improve the speed of your Deep Learning deployments on Intel® Xeon® processors with JAX*.

Install a Preset

To use one of the preset packages, follow these steps:

  1. Go to the AI Tools Selector and select the preferred preset package name from the list on the left sidebar.
  2. Choose either conda, pip, or Docker as your preferred distribution type.
  3. Make sure you set up the environment as described on the right panel of the selector under the installation command.
  4. Copy and execute the provided installation command in your Terminal.

Accessing Docker Hub

By accessing, downloading, or using this software and any required dependent software (the "Software Package"), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third-party software included with the Software Package. Refer to the licensing information for additional details.

Custom Selection of Packages

For a more personalized package selection, you can install individual AI tools using conda or pip:

  1. Go to the AI Tools Selector and select Customize from the list on the left sidebar.
  2. Choose either conda or pip as your preferred package type.
  3. Click the checkboxes of the required tools, frameworks, SDKs, and/or CLIs.
  4. Make sure you set up the environment as described in Set Up System Before Installation.
  5. Copy and execute the provided installation command in your Terminal.

Offline Installer

The AI Tools offline installer is a set of pre-built conda environments with the Intel optimization packages for Python. It contains the following environments:

Environment       Main Component
base              Intel® Distribution for Python
modin             Modin*
pytorch           Intel® Extension for PyTorch*
pytorch-gpu       Intel® Extension for PyTorch*
tensorflow        Intel® Extension for TensorFlow*
tensorflow-gpu    Intel® Extension for TensorFlow*
jax               JAX*

NOTE

To use GPU environments, make sure the corresponding GPU drivers are installed as described in Install GPU Drivers.

To install the AI tools via the offline installer:

  1. Go to the AI Tools Selector and select the Offline installer (AI Tools) from the list on the left sidebar.
  2. Follow the download and installation instructions on the right-hand side of the selector.

After installation, configure the system as described in Get Started with the AI Tools.

Verify Installation

Use the commands below to verify that the AI tools are properly installed:

AI Tool                                   Command
Intel® Extension for PyTorch* (CPU)       python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__)"
Intel® Extension for PyTorch* (GPU)       python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())]"
Intel® Extension for TensorFlow* (CPU)    python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"
Intel® Extension for TensorFlow* (GPU)    python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
JAX*                                      python -c "import jax; print(jax.__version__)"
Intel® Optimization for XGBoost*          python -c "import xgboost as xgb; print(xgb.__version__)"
Intel® Extension for Scikit-learn*        python -c "from sklearnex import patch_sklearn; patch_sklearn()"
Modin*                                    python -c "import modin; print(modin.__version__)"
Intel® Neural Compressor                  python -c "import neural_compressor as inc; print(inc.__version__)"
ONNX Runtime                              python -c "import onnxruntime; print(onnxruntime.__version__)"
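The one-liners above can also be wrapped in a single script that reports which of the listed packages are present without fully importing them, so a missing GPU runtime does not crash the check. This is an illustrative sketch; the module names follow the commands in the table:

```python
import importlib.util

def installed(module_name: str) -> bool:
    """Return True if the module can be found on the current Python path."""
    return importlib.util.find_spec(module_name) is not None

# Module names corresponding to the verification commands above.
for name in ("intel_extension_for_pytorch", "intel_extension_for_tensorflow",
             "jax", "xgboost", "sklearnex", "modin", "neural_compressor",
             "onnxruntime"):
    print(f"{name}: {'installed' if installed(name) else 'missing'}")
```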

Next Steps

After successful installation, refer to the following resources to start using the installed product(s):

Known Issues

2024.2

  • When installing packages via conda, you may get a "ClobberError" or "SafetyError", which occurs when conda detects that a file or directory will be overwritten. This error does not impact the functionality of the installed packages and can be safely ignored.

2024.1

  • For conda installations using version 23.9 or lower, you may get a JSON error:

    "json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
    

    This error occurs because, prior to conda version 23.10.0, the default solver was classic instead of conda-libmamba. To resolve this issue, update conda using one of the following commands:

    conda update conda -c conda-forge --override-channels

    or

    conda update conda -c conda-forge --update-deps --override-channels
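The version boundary above can be checked programmatically, for example when a setup script needs to decide whether to update conda first. This is an illustrative helper (not part of conda), and it assumes a plain "major.minor.patch" version string:

```python
def needs_solver_update(version: str) -> bool:
    """Return True for conda versions below 23.10.0, which shipped the
    classic solver by default and can hit the JSON error described above."""
    return tuple(int(p) for p in version.split(".")[:3]) < (23, 10, 0)

print(needs_solver_update("23.9.0"))   # True: classic solver by default
print(needs_solver_update("24.1.2"))   # False: libmamba is the default
```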