Build and Run a Sample Using the Command Line
In this section, you will run a simple "Hello World" project to familiarize yourself with the process of building projects, and then build your own project.
You can use either a terminal window or Visual Studio Code* when working from the command line. For details on how to use VS Code locally, see Basic Usage of Visual Studio Code with oneAPI on Linux*. To use VS Code remotely, see Remote Visual Studio Code Development with oneAPI on Linux*.
Build and Run a Sample Project
To build and run a sample, first clone it to your system, then follow the directions in its README.md. For more samples, browse the full GitHub repository: AI Tools Code Samples.
| Sample(s) & Respective AI Tool | Description | Sample Location & More Information |
| --- | --- | --- |
| Intel® Extension for PyTorch* CPU and Intel® Extension for PyTorch* GPU (PyTorch* Optimizations from Intel) | Demos of the advanced features in Intel® Extension for PyTorch*. See the features introduction page of the Intel® Extension for PyTorch* online documentation for detailed information. | For the CPU sample: clone the repository with git clone https://github.com/intel/intel-extension-for-pytorch.git, activate the environment with conda activate pytorch, then follow the instructions in the README file (Inference notebooks). For the GPU sample: clone the same repository with git clone https://github.com/intel/intel-extension-for-pytorch.git, activate the environment with conda activate pytorch, then follow the instructions in the README file. |
| TensorFlow* Getting Started Sample (TensorFlow* Optimizations from Intel) | Demonstrates how to train an example neural network and shows how Intel-optimized TensorFlow* enables Intel® oneDNN calls by default. The sample implements an example neural network with one convolution layer and one ReLU layer. | Sample |
| Modin* Getting Started Sample (Modin*) | Shows how to use distributed Pandas via the Modin* package. | Sample |
| Intel® Python XGBoost* Getting Started Sample (Intel® Optimization for XGBoost*) | Shows how to use the Intel optimizations for XGBoost* published as part of the Intel® AI Tools, and how to set up and train an XGBoost* model on datasets for prediction. | Sample |
| Intel® Python Scikit-learn* Extension Getting Started Sample (Intel® Extension for Scikit-learn*) | Demonstrates how to use a support vector machine classifier from Intel® Extension for Scikit-learn* for a digit recognition problem. Intel® Extension for Scikit-learn* speeds up Scikit-learn* applications by using the Intel® oneAPI Data Analytics Library (oneDAL). | Sample |
To see a list of components that support CMake, see Use CMake with oneAPI Applications.
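As a taste of what the Getting Started samples cover, the sketch below follows the pattern described for the Intel® Extension for Scikit-learn* sample: patch scikit-learn, then train a support vector machine classifier on the digits dataset. This is a minimal illustration rather than the sample code itself; it assumes the scikit-learn-intelex package is installed, and the classifier parameters are placeholder values.

```python
# Minimal sketch (not the official sample): accelerate scikit-learn with the
# Intel® Extension for Scikit-learn*, then train an SVM digit classifier.
from sklearnex import patch_sklearn
patch_sklearn()  # must be called before importing scikit-learn estimators

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(gamma=0.001).fit(X_train, y_train)  # accelerated through oneDAL after patching
print("accuracy:", clf.score(X_test, y_test))
```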
Build Your Own Project
No special modifications to your existing Python projects are required to start using them with these tools. For new projects, follow the same process used for the Getting Started Samples; refer to the TensorFlow* Getting Started Sample README file for instructions.
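For example, an existing PyTorch* project typically only needs to import the extension and pass the model through its optimize call to pick up the CPU optimizations. The sketch below assumes the intel-extension-for-pytorch package is installed and uses a placeholder model; exact optimize options can vary between releases.

```python
# Hypothetical existing PyTorch* inference script; only the ipex lines are new.
import torch
import intel_extension_for_pytorch as ipex  # Intel® Extension for PyTorch*

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
model.eval()

model = ipex.optimize(model)              # apply Intel CPU optimizations for inference
with torch.no_grad():
    out = model(torch.randn(1, 128))      # run the model as usual
```

For training workloads, the optimizer is typically passed to the same optimize call along with the model; see the extension's documentation for details.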
Maximizing Performance
You can get documentation to help you maximize performance for either TensorFlow* or PyTorch*.
Activate the AI Tools
```shell
source $HOME/intel/oneapi/intelpython/bin/activate
```
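As an optional sanity check, you can start Python from the activated shell and confirm the interpreter comes from the Intel distribution; the path in the comment assumes the default per-user install location.

```python
# Optional check: run inside the activated environment.
# sys.executable should point under $HOME/intel/oneapi/intelpython/
# if the AI Tools were installed in the default per-user location (assumption).
import sys

print(sys.executable)
print(sys.version)
```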
Create Your Own Environment
- To create an environment:
  ```shell
  conda create --name <my-env>
  ```

  This creates the virtual environment. No packages will be installed in this environment.
- To create an environment with a specific package:
  ```shell
  conda create -n myenv intel-extension-for-tensorflow -c https://software.repos.intel.com/python -c conda-forge
  ```

  Or create the environment first, then install the package into it:

  ```shell
  conda create --name myenv
  conda install -n myenv intel-extension-for-tensorflow -c https://software.repos.intel.com/python -c conda-forge
  ```
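After activating the new environment (conda activate myenv), a short import check like the sketch below can confirm that the package is visible to Python. The module name intel_extension_for_tensorflow matches the package installed above; treat the exact attributes as assumptions that may vary between releases.

```python
# Quick import check, run after "conda activate myenv".
# Assumes the intel-extension-for-tensorflow package installed above; the
# __version__ attributes follow the usual convention but may differ by release.
import tensorflow as tf
import intel_extension_for_tensorflow as itex

print("TensorFlow version:", tf.__version__)
print("Intel Extension for TensorFlow version:", itex.__version__)
```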
Using JupyterLab*
- Activate the AI Tools:
  ```shell
  source $HOME/intel/oneapi/intelpython/bin/activate
  ```
- Run JupyterLab:
  ```shell
  jupyter lab --ip 0.0.0.0 --no-browser --allow-root
  ```