Get Started with the AI Tools
The following instructions assume you have installed the AI Tools software. Please see the AI Tools page for installation options.
Follow these steps to build and run a sample with the AI Tools:
NOTE:
AI Tools components have been validated for compatibility with Python 3.11.
No special modifications to your existing projects are required to start using these tools.
Components
The AI Tools include:
- Intel® Distribution for Python*: Achieve near-native code performance with this set of essential packages optimized for high-performance numerical and scientific computing.
- Intel® Extension for PyTorch* (CPU & GPU): Extends PyTorch* with the latest performance optimizations for Intel hardware.
- Intel® Extension for TensorFlow* (CPU & GPU): A heterogeneous, high-performance deep learning extension plugin that lets users flexibly plug an XPU into TensorFlow* on demand, exposing the computing power of Intel hardware.
- JAX*: A Python library designed for high performance array computation.
- Intel® Optimization for XGBoost*: This well-known machine-learning package for gradient-boosted decision trees includes seamless, drop-in acceleration for Intel® architectures to significantly speed up model training and prediction.
- Intel® Extension for Scikit-learn*: A seamless way to speed up your scikit-learn applications using the Intel® oneAPI Data Analytics Library (oneDAL). Patching scikit-learn makes it well suited for real-world machine learning workloads.
- Modin*: Seamlessly scale your preprocessing across multiple nodes with this intelligent, distributed dataframe library, which keeps an API identical to pandas.
- Intel® Neural Compressor: Reduce model size and speed up inference for deployment on CPUs or GPUs. The open source library provides a framework-independent API to perform model compression techniques such as quantization, pruning, and knowledge distillation.
- ONNX Runtime*: A cross-platform engine for faster model inference and training.
- OpenVINO™ Toolkit: Convert and optimize models trained using popular frameworks like TensorFlow and PyTorch. Optimize and deploy with best-in-class performance across a mix of Intel CPUs, GPUs (integrated or discrete), NPUs, or FPGAs.
- Intel® Gaudi® Software: Efficiently map models developed using PyTorch and TensorFlow onto Intel Gaudi AI accelerators. The software suite includes a graph compiler and runtime, a Tensor Processor Core (TPC)* kernel library, firmware and drivers, and developer tools for custom kernel development and profiling.
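Several of the components above follow the same drop-in pattern: patch or swap one import, and existing code runs accelerated. As a minimal sketch of that pattern for Intel® Extension for Scikit-learn*, the example below calls `patch_sklearn()` when the `scikit-learn-intelex` package is installed and falls back to stock scikit-learn when it is not; the dataset and estimator are illustrative choices, not part of the product documentation.

```python
import numpy as np

# Try to enable the oneDAL-backed drop-in acceleration; fall back to
# stock scikit-learn when scikit-learn-intelex is not installed.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # re-routes supported estimators to oneDAL
except ImportError:
    pass  # stock scikit-learn is used instead

# The application code below is identical either way -- that is the
# point of the "seamless" patching approach.
from sklearn.cluster import KMeans

X = np.random.default_rng(0).random((1000, 8))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels.shape)  # one cluster assignment per sample
```

Because the patch only re-routes supported estimators, the same script remains correct (just slower) on machines without the extension installed.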
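Modin* works the same way: because its API is identical to pandas, changing the import line is the only modification. The sketch below hedges with a fallback to pandas itself so it stays runnable where Modin is not installed; the small dataframe is purely illustrative.

```python
# Swap the import to scale dataframe operations with Modin; fall back
# to pandas when Modin is not installed -- the rest of the code is
# unchanged in either case.
try:
    import modin.pandas as pd
except ImportError:
    import pandas as pd

df = pd.DataFrame({"group": ["a", "b", "a", "b"], "value": [1, 2, 3, 4]})
totals = df.groupby("group")["value"].sum()
print(totals.to_dict())  # {'a': 4, 'b': 6}
```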