Configure Your System - AI Tools
If you have not already installed the AI Tools, refer to Installing the AI Tools.
Activate AI Tools Base Environment
Linux
Open a terminal window and type the following:
If the default path is used during the installation:
source $HOME/intel/oneapi/intelpython/bin/activate
If a non-default path is used:
source <custom_path>/bin/activate
Verify that conda is installed and running on your system, and list environments, by typing:
conda --version
conda env list
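The exact version reported depends on the conda release bundled with your installation; illustrative output (your version number will differ) looks like this:
conda --version
conda 24.1.2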
The Intel® AI Reference Models folder will be located in $HOME/intel/oneapi/ai_reference_models.
If a custom path was used, Intel® AI Reference Models will be installed one level up, in the parent directory of the custom path: <custom_path>/..
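To confirm the location, you can list the folder. The second command is only a sketch: it assumes a custom installation path and that the folder keeps the same ai_reference_models name:
ls $HOME/intel/oneapi/ai_reference_models
ls <custom_path>/../ai_reference_models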
Next Steps
For Conda users, continue on to the next section.
If you are developing on a GPU, continue to GPU Users.
Conda Environments in the AI Tools
The following conda environments are included in the AI Tools.
Conda Environment Name | AI Tools |
tensorflow | Intel® Extension for TensorFlow* (CPU), Intel® Neural Compressor, ONNX Runtime* |
tensorflow-gpu | Intel® Extension for TensorFlow* (GPU), Intel® Neural Compressor, Intel® Optimization for Horovod* |
pytorch | Intel® Extension for PyTorch* (CPU), Intel® Neural Compressor, ONNX Runtime* |
pytorch-gpu | Intel® Extension for PyTorch* (GPU), Intel® Neural Compressor, Intel® oneCCL Bindings for PyTorch* |
modin | Modin*, oneMKL |
jax | JAX*, oneMKL |
base | Intel® Optimization for XGBoost*, Intel® Extension for Scikit-learn* |
- From the same terminal window where the AI Tools Base Environment was activated, identify the Conda environments on your system:
conda env list
You will see results similar to this:
# conda environments:
#
base            *  $HOME/intel/oneapi/intelpython/
pytorch            $HOME/intel/oneapi/intelpython/envs/pytorch
pytorch-gpu        $HOME/intel/oneapi/intelpython/envs/pytorch-gpu
tensorflow         $HOME/intel/oneapi/intelpython/envs/tensorflow
tensorflow-gpu     $HOME/intel/oneapi/intelpython/envs/tensorflow-gpu
modin              $HOME/intel/oneapi/intelpython/envs/modin
jax                $HOME/intel/oneapi/intelpython/envs/jax
- Additional environments can be activated with:
conda activate <environment>
For example, to activate the TensorFlow* or PyTorch* environment:
TensorFlow:
conda activate tensorflow
PyTorch:
conda activate pytorch
Verify the new environment is active. An asterisk will be displayed next to the active environment.
conda env list
Additionally, the components installed on the active environment can be listed with:
conda list
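As an optional check, you can confirm from inside an activated environment that its framework and extension import cleanly. The snippet below is a sketch for the pytorch environment; intel_extension_for_pytorch is the module published by Intel® Extension for PyTorch*, and you should adjust the imports to match the environment you activated:
conda activate pytorch
# Import the framework and the Intel extension, then print their versions
python -c "import torch, intel_extension_for_pytorch as ipex; print(torch.__version__, ipex.__version__)"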
GPU Users
If you are developing on a GPU, follow these steps:
1. Install GPU Drivers
If you already installed the GPU drivers by following the instructions in the Installation Guide, you may skip this step. Otherwise, install them now by following the directions in the Installation Guide.
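If you are unsure whether the drivers are present, the generic checks below can help. These are standard Linux utilities shown as a sketch, not an Intel-specific verification procedure:
# List the GPU device nodes exposed by the kernel driver
ls /dev/dri
# Show the graphics controllers detected on the PCI bus
lspci | grep -i -E 'vga|display'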
2. Add User to Video Group
For GPU compute workloads, non-root (normal) users typically do not have access to the GPU device, so binaries compiled for the GPU device will fail when executed by a normal user. To fix this problem, add the non-root user to the video group:
sudo usermod -a -G video <username>
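Log out and back in (or start a new session) for the group change to take effect, then confirm the membership. On some distributions the render group is also required for GPU compute; check your driver documentation before adding it. The commands below are a sketch using the current user:
# List the groups for the current user; video should appear in the output
groups $USER
# Optional, only if your driver stack also requires the render group
sudo usermod -a -G render $USER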
3. Disable Hangcheck
For applications with long-running GPU compute workloads in native environments, disable hangcheck. Disabling hangcheck is not recommended for virtualized environments or other standard GPU usages, such as gaming.
A workload that takes more than four seconds to execute on the GPU hardware is considered long-running. By default, individual threads that qualify as long-running workloads are treated as hung and are terminated. Disabling the hangcheck timeout period avoids this problem.
- Open a terminal.
- Open the grub file in /etc/default.
- In the grub file, find the line GRUB_CMDLINE_LINUX_DEFAULT="".
- Enter this text between the quotes (""):
i915.enable_hangcheck=0
- Run this command:
sudo update-grub
- Reboot the system. Hangcheck remains disabled; you can confirm this with the check shown below.
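After the reboot, the setting can be verified through the standard i915 kernel interfaces; this is a sketch of one way to check:
# The kernel command line should include the new parameter
grep i915.enable_hangcheck /proc/cmdline
# The i915 module parameter should report N (disabled)
cat /sys/module/i915/parameters/enable_hangcheck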
Uninstalling AI Tools
To uninstall the AI Tools, follow the steps below:
- Revert Conda changes by using the following command:
conda init --reverse --all
Use the --dry-run flag if you want to check what will be reverted before executing the command.
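For example, a preview run that makes no changes might look like this:
conda init --reverse --all --dry-run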
- Remove the installation directory.
If the default path was used during the installation:
rm -rf ${HOME}/intel
If a non-default path was used:
rm -rf <custom_path>
- Remove the installer .sh file:
rm -rf l_AITools.2024.1.0.9.sh
NOTE: For this command, you will need to adjust the .sh file name, as the file name differs depending on the package version. The command above uses 2024.1 as an example.
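If you are not sure of the exact file name, you can list the installer files in the current directory first; this sketch assumes the installer keeps the default l_AITools prefix:
ls l_AITools.*.sh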
Now that you have configured your system, proceed to Build and Run a Sample Project.