Get Started with the AI Tools for Linux*

ID 766885
Date 12/16/2024

Build and Run a Sample Using the Command Line

In this section, you will run a simple "Hello World" project to familiarize yourself with the process of building projects, and then build your own project.

NOTE:
If you have not already configured your development environment, go to Configure your system, then return to this page. If you have already completed the steps to configure your system, continue with the steps below.

You can use either a terminal window or Visual Studio Code* when working from the command line. For details on how to use VS Code locally, see Basic Usage of Visual Studio Code with oneAPI on Linux*. To use VS Code remotely, see Remote Visual Studio Code Development with oneAPI on Linux*.

Build and Run a Sample Project

Samples must be cloned to your system before you can build them. To build and run a sample, clone it, then follow the directions in its README.md. For more samples, browse the full GitHub repository: AI Tools Code Samples.
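As a rough sketch, cloning the samples and locating a getting-started folder might look like the following. The repository URL and directory layout here are assumptions based on the oneAPI samples GitHub organization; verify them against the AI Tools Code Samples link above.

```shell
# Clone the samples repository (assumed URL) and list the AI
# getting-started samples; adjust the path if the layout differs.
git clone --depth 1 https://github.com/oneapi-src/oneAPI-samples.git \
  && ls oneAPI-samples/AI-and-Analytics/Getting-Started-Samples \
  || echo "clone failed: check network access and the repository URL"
```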

Getting Started Samples for AI Tools

Each entry lists the component, the sample folder, and a description.

Classical Machine Learning

  Modin*
    Modin_GettingStarted: Run Modin*-accelerated Pandas functions and note the performance gain.
    Modin_Vs_Pandas: Compares the performance of Intel® Distribution of Modin* with the performance of Pandas.

  Intel® Optimization for XGBoost*
    IntelPython_XGBoost_GettingStarted: Sets up and trains an XGBoost* model on datasets for prediction.

  Scikit-learn*
    Intel_Extension_For_SKLearn_GettingStarted: Speeds up a Scikit-learn* application using Intel oneDAL.
    IntelPython_daal4py_GettingStarted: Runs batch linear regression using daal4py, the Python API package for the oneAPI Data Analytics Library (oneDAL).

Deep Learning

  Intel® Extension for PyTorch*
    Getting Started with Intel® Extension for PyTorch*: A simple training example for Intel® Extension for PyTorch*.

For the Intel® Extension for PyTorch* CPU sample:

  1. Clone the repository:

     git clone https://github.com/intel/intel-extension-for-pytorch.git

  2. Activate the environment:

     conda activate pytorch

  3. Follow the instructions in the README file under Inference notebooks.

For the Intel® Extension for PyTorch* GPU sample:

  1. Clone the repository:

     git clone https://github.com/intel/intel-extension-for-pytorch.git

  2. Activate the environment:

     conda activate pytorch

  3. Follow the instructions in the README file under Training notebooks and Inference notebooks.

    Intel_oneCCL_Bindings_For_PyTorch_GettingStarted: Guides users through running a simple PyTorch* distributed workload on both GPU and CPU.

  Intel® Neural Compressor (INC)
    Intel® Neural Compressor (INC) Sample-for-PyTorch: Performs INT8 quantization on a Hugging Face BERT model.
    Intel® Neural Compressor (INC) Sample-for-Tensorflow: Quantizes an FP32 model into INT8 using Intel® Neural Compressor (INC) and compares the performance of FP32 and INT8.

  ONNX Runtime*
    Quickstart Examples for PyTorch*, TensorFlow*, and SciKit Learn*: Train a model using your favorite framework, export it to ONNX format, and run inference in any supported ONNX Runtime* language.

  Intel® Extension for TensorFlow*
    IntelTensorFlow_GettingStarted: A simple training example for TensorFlow*.
    Intel® Extension For TensorFlow GettingStarted: Guides users on how to run a TensorFlow* inference workload on both GPU and CPU.

  JAX*
    IntelJAX GettingStarted: Demonstrates how to train a JAX* model and run inference on Intel® hardware.

To see a list of components that support CMake, see Use CMake with oneAPI Applications.

Build Your Own Project

No special modifications to your existing Python projects are required to start using them with these tools. For new projects, the process closely follows the process used for the Getting Started Samples. Refer to the TensorFlow* Getting Started Sample README file for instructions.
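One quick way to confirm that an existing project's environment already sees the Intel-optimized packages is an import check. The module name `intel_extension_for_tensorflow` below is just an example; substitute whichever extension your project uses.

```shell
# Print the version of the Intel Extension for TensorFlow if it is
# installed in the active environment; report its absence otherwise.
python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)" \
  2>/dev/null || echo "intel_extension_for_tensorflow is not installed in this environment"
```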

Maximizing Performance

You can get documentation to help you maximize performance for either TensorFlow* or PyTorch*.

Activate the AI Tools

To activate the AI Tools base environment, source the activation script:

    source $HOME/intel/oneapi/intelpython/bin/activate
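To confirm activation worked, you can check which `python` the shell now resolves. This sketch assumes the default install prefix shown above; yours may differ.

```shell
# After sourcing the activate script, 'python' should resolve to a
# binary under the intelpython install prefix.
ACTIVATE="$HOME/intel/oneapi/intelpython/bin/activate"
if [ -f "$ACTIVATE" ]; then
    . "$ACTIVATE"
    command -v python
else
    echo "activate script not found at $ACTIVATE; check your install prefix"
fi
```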

Create Your Own Environment

  1. To create an environment:

     conda create --name <my-env>

     This creates the virtual environment. No packages will be installed in this environment.

  2. To create an environment with a specific package:

     conda create -n myenv intel-extension-for-tensorflow -c https://software.repos.intel.com/python -c conda-forge

     Or:

     conda create --name myenv
     conda install -n myenv intel-extension-for-tensorflow -c https://software.repos.intel.com/python -c conda-forge

NOTE:
For more personalized package selection, please visit the AI Tools Selector.
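To confirm the new environment exists and switch into it, a check such as the following can help. It assumes the `myenv` name from the commands above and that `conda` is initialized in your shell.

```shell
# Look for the environment in conda's list, then activate it.
conda env list 2>/dev/null | grep myenv || echo "myenv not found; create it first"
conda activate myenv 2>/dev/null || echo "could not activate myenv (is conda initialized in this shell?)"
```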

Using JupyterLab*

  1. Activate the AI Tools:

     source $HOME/intel/oneapi/intelpython/bin/activate

  2. Run JupyterLab:

     jupyter lab --ip 0.0.0.0 --no-browser --allow-root
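If JupyterLab is running on a remote Linux machine, one common way to reach it from a local browser is an SSH tunnel. Here `user@remote-host` is a placeholder for your own login, and port 8888 is JupyterLab's default.

```shell
# Forward local port 8888 to the JupyterLab server on the remote host;
# -N opens the tunnel without running a remote command.
ssh -N -L 8888:localhost:8888 user@remote-host
```

Then open http://localhost:8888 in your local browser and paste the token printed by the `jupyter lab` command.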