Get Started

Get Started with the Intel® AI Analytics Toolkit for Linux*

ID 766885
Date 7/13/2023
Public

Configure Your System - Intel® AI Analytics Toolkit

If you have not already installed the AI Analytics Toolkit, refer to Installing the Intel® AI Analytics Toolkit.

To configure your system, set environment variables before continuing.

Task                                           All Users   Conda Users   GPU Users   Conda + GPU Users
Set Environment Variables                          X            X            X               X
Use Conda to Add Packages                                       X                            X
Install Graphics Drivers, Add User to Video
Group, and Disable Hangcheck                                                 X               X

Set Environment Variables for CLI Development

For working at a Command Line Interface (CLI), the tools in the oneAPI toolkits are configured via environment variables. Set the environment variables by sourcing the setvars script:

Option 1: Source setvars.sh once per session

Source setvars.sh every time you open a new terminal window:

You can find the setvars.sh script in the root folder of your oneAPI installation, which is typically /opt/intel/oneapi/ for system-wide installations and ~/intel/oneapi/ for private installations.

For system-wide installations (requires root or sudo privileges):

. /opt/intel/oneapi/setvars.sh

For private installations:

. ~/intel/oneapi/setvars.sh

Option 2: One time setup for setvars.sh

To have the environment automatically set up for your projects, include the command source <install_dir>/setvars.sh in a startup script where it will be invoked automatically (replace <install_dir> with the path to your oneAPI install location). The default installation locations are /opt/intel/oneapi/ for system-wide installations (requires root or sudo privileges) and ~/intel/oneapi/ for private installations.

For example, you can add the source <install_dir>/setvars.sh command to your ~/.bashrc, ~/.bash_profile, or ~/.profile file. To make the settings permanent for all accounts on your system, create a one-line .sh script in your system's /etc/profile.d folder that sources setvars.sh (for more details, see the Ubuntu documentation on Environment Variables).
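As a sketch of the per-user approach, the following appends a guarded source line to ~/.bashrc. The ONEAPI_ROOT path assumes a system-wide install; use "$HOME/intel/oneapi" for a private install. The guard keeps the script from failing on machines where setvars.sh is absent, and the grep check keeps re-runs from duplicating the line:

```shell
# Assumes a system-wide install; adjust ONEAPI_ROOT for a private install.
ONEAPI_ROOT=/opt/intel/oneapi
RCFILE="$HOME/.bashrc"
LINE="[ -f $ONEAPI_ROOT/setvars.sh ] && source $ONEAPI_ROOT/setvars.sh > /dev/null"
# Append only if the line is not already present, so re-running is idempotent.
grep -qxF "$LINE" "$RCFILE" 2>/dev/null || echo "$LINE" >> "$RCFILE"
```

Open a new terminal (or run source ~/.bashrc) for the change to take effect.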

NOTE:

The setvars.sh script can be managed using a configuration file, which is especially helpful if you need to initialize specific versions of libraries or the compiler rather than defaulting to the "latest" version. For more details, see Using a Configuration File to Manage Setvars.sh. If you need to set up the environment in a non-POSIX shell, see oneAPI Development Environment Setup for more configuration options.
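As a sketch of the configuration-file approach (the component names and versions below are illustrative; the real keys are the component folder names under your oneAPI installation, so check the linked page for the authoritative format):

```shell
# Write a hypothetical config file pinning component versions
# (mkl/dnnl and the versions shown are examples, not guaranteed keys).
cat > "$HOME/oneapi.cfg" <<'EOF'
mkl=2023.1
dnnl=latest
EOF

# Source setvars.sh with the config file instead of defaulting to "latest".
# Guarded so the sketch is a no-op on machines without a system-wide install.
if [ -f /opt/intel/oneapi/setvars.sh ]; then
  . /opt/intel/oneapi/setvars.sh --config="$HOME/oneapi.cfg"
fi
```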

Next Steps

  • If you are not using Conda and are not developing for GPU, proceed to Build and Run a Sample Project.

  • For Conda users, continue on to the next section.

  • For developing on a GPU, continue on to GPU Users.

Conda Environments in this Toolkit

There are multiple conda environments included in the AI Kit, each described in the table below. Once you have set the environment variables for CLI development as previously instructed, you can activate any of these conda environments as needed via the following command:

conda activate <conda environment>
For more information, please explore each environment's related Getting Started Sample linked in the table below.
Conda Environment Name   Note                                                          Getting Started Sample
tensorflow               Intel TensorFlow (CPU)                                        Sample
tensorflow-gpu           Intel TensorFlow with Intel Extension                         Sample
                         for TensorFlow (GPU)
pytorch                  PyTorch with Intel Extension for PyTorch (XPU),               Intel Extension for PyTorch Sample,
                         Intel oneCCL Bindings for PyTorch (CPU)                       Intel oneCCL Bindings for PyTorch Sample
pytorch-gpu              PyTorch with Intel Extension for PyTorch (XPU),               Intel Extension for PyTorch Sample,
                         Intel oneCCL Bindings for PyTorch (CPU)                       Intel oneCCL Bindings for PyTorch Sample
base                     Intel Distribution for Python                                 Sample
modin                    Intel Distribution of Modin                                   Sample
For more samples, browse the full GitHub repository: Intel® oneAPI AI Analytics Toolkit Code Samples.

Use the Conda Clone Function to Add Packages as a Non-Root User

The Intel AI Analytics Toolkit is installed in the oneapi folder, which requires root privileges to manage. You may wish to add and maintain new packages using Conda*, but you cannot do so without root access; or you may have root access but do not want to enter the root password every time you activate Conda.

To manage your environment without root access, use the Conda clone functionality to clone the packages you need to a folder outside of /opt/intel/oneapi/:

  1. From the same terminal window where you ran setvars.sh, identify the Conda environments on your system:
    conda env list
    You will see results similar to this:
    # conda environments:
    #
    base                  *  /opt/intel/oneapi/intelpython/latest
    2023.1                   /opt/intel/oneapi/intelpython/latest/envs/2023.0
    pytorch                  /opt/intel/oneapi/intelpython/latest/envs/pytorch
    pytorch-gpu              /opt/intel/oneapi/intelpython/latest/envs/pytorch-gpu
    tensorflow               /opt/intel/oneapi/intelpython/latest/envs/tensorflow 
    tensorflow-gpu           /opt/intel/oneapi/intelpython/latest/envs/tensorflow-gpu 
    modin                    /opt/intel/oneapi/intelpython/latest/envs/modin
  2. Use the clone function to clone the environment to a new folder. In the example below, the new environment is named usr_intelpython and the environment being cloned is named base (as shown in the listing above).
    conda create --name usr_intelpython --clone base
    The clone details will appear:
    (base) -bash.4.3$ conda create --name usr_intelpython --clone base
    Source: /opt/intel/oneapi/intelpython/latest
    Destination: /___/home/.conda/envs/usr_intelpython

    If the command fails, you may not have access to the ~/.conda folder. To fix this, delete the .conda folder and execute the command again: conda create --name usr_intelpython --clone base.

  3. Activate the new environment so that you can add packages.
    conda activate usr_intelpython
  4. Verify the new environment is active.

    conda env list

    You can now develop using the Conda environment for Intel Distribution for Python.

  5. To activate the TensorFlow* or PyTorch* environment:

    TensorFlow:

    conda activate tensorflow

    PyTorch:

    conda activate pytorch
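If the clone in step 2 fails, a quick ownership check can confirm whether ~/.conda was created by root before you delete it. This is a diagnostic sketch, not part of the official procedure; ls -ld prints the folder's owner in the third column:

```shell
# If the owner shown is root rather than your user, conda cannot
# write new environments under ~/.conda.
ls -ld "$HOME/.conda" 2>/dev/null || echo "$HOME/.conda does not exist yet"
```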

Next Steps

GPU Users

If you are developing on a GPU, follow these steps:

1. Install GPU drivers

If you followed the instructions in the Installation Guide to install GPU Drivers, you may skip this step. If you have not installed the drivers, follow the directions in the Installation Guide.

2. Add User to Video Group

For GPU compute workloads, non-root (normal) users typically do not have access to the GPU device, so binaries compiled for the GPU device will fail when executed by a normal user. To fix this, add the non-root user(s) to the video group:

sudo usermod -a -G video <username>
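To check whether the change took effect (a sketch; the user must log out and back in, or run newgrp video, before the new group appears), id -nG lists the group names of the current account:

```shell
# Pass a username to "id -nG" to check a different account.
if id -nG | tr ' ' '\n' | grep -qx video; then
  echo "user is in the video group"
else
  echo "user is NOT in the video group yet"
fi
```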

3. Disable Hangcheck

For applications with long-running GPU compute workloads in native environments, disable hangcheck. This is not recommended for virtualized environments or other standard GPU usage, such as gaming.

A workload that takes more than four seconds for the GPU hardware to execute is a long-running workload. By default, individual threads that qualify as long-running workloads are considered hung and are terminated. Disabling the hangcheck timeout period avoids this problem.

NOTE:
If the kernel is updated, hangcheck is automatically enabled. Run the procedure below after every kernel update to ensure hangcheck is disabled.

  1. Open a terminal.
  2. Open the grub file in /etc/default.
  3. In the grub file, find the line GRUB_CMDLINE_LINUX_DEFAULT="".
  4. Enter this text between the quotes (""):
    i915.enable_hangcheck=0
  5. Run this command:
    sudo update-grub
  6. Reboot the system. Hangcheck remains disabled.
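The edit in steps 3 through 5 can be sketched with sed. The snippet below runs on a stand-in copy of the file so you can inspect the result first; on a real system, apply the same sed to /etc/default/grub as root, then run sudo update-grub and reboot (grub.copy is an illustrative filename):

```shell
# Stand-in for /etc/default/grub; on a real system, edit that file as root.
GRUB_FILE=grub.copy
printf 'GRUB_CMDLINE_LINUX_DEFAULT=""\n' > "$GRUB_FILE"

# Append i915.enable_hangcheck=0 inside the existing quotes...
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"$/GRUB_CMDLINE_LINUX_DEFAULT="\1 i915.enable_hangcheck=0"/' "$GRUB_FILE"
# ...and drop the stray leading space left behind when the value was empty.
sed -i 's/=" /="/' "$GRUB_FILE"

cat "$GRUB_FILE"
```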

Next Step

Now that you have configured your system, proceed to Build and Run a Sample Project.