Get Started with Intel® Gaudi® AI Accelerators
Start running models on Intel® Gaudi® AI accelerators and configure your environment for model training or inference. For system configuration requirements, see the Installation Guide.
First-Time Users
If you're new to Intel Gaudi accelerators, start with the quick start guide and its related video. These resources walk you through using Intel® Tiber™ Developer Cloud and running workloads from the Intel Gaudi model references and Hugging Face* models.
Get Started
To start running models on Intel Gaudi accelerators:
- Get access to an Intel Gaudi accelerator instance.
- Load the PyTorch* Docker* image, which contains the Intel Gaudi software stack and the PyTorch framework (see the verification sketch after this list).
- Select and load models from the GitHub* repositories listed in the following section, which include Hugging Face models, model references, and generative AI (GenAI) examples.
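Before loading a model, it can help to confirm that the container actually sees the accelerator. The following is a minimal sketch, assuming you are inside the Intel Gaudi PyTorch Docker image with the habana_frameworks PyTorch bridge installed; the helper module names and tensor sizes are illustrative, not a prescribed workflow.

```python
import torch
import habana_frameworks.torch.core as htcore   # Intel Gaudi PyTorch bridge; registers the "hpu" device
import habana_frameworks.torch.hpu as hthpu     # device query helpers (assumed module path)

# Check that the accelerator is visible before loading a model.
print("HPU available:", hthpu.is_available())
print("Device count :", hthpu.device_count())

device = torch.device("hpu")
x = torch.randn(8, 8, device=device)             # allocate a tensor directly on the accelerator
y = (x @ x).sum()
htcore.mark_step()                               # in lazy mode, flush the graph so the ops execute
print("Computed", y.item(), "on", y.device)
```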
Documentation
Learn how to install the Intel Gaudi AI accelerator platform software and Intel Gaudi software stack.
Learn how to run your model on PyTorch.
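As a rough illustration of what running a model on the accelerator involves, the sketch below moves a small PyTorch module and one training step to the hpu device. This is a minimal sketch, assuming the habana_frameworks bridge is available; the model, optimizer, and data are placeholders, and the mark_step calls follow the lazy-mode execution pattern.

```python
import torch
import torch.nn as nn
import habana_frameworks.torch.core as htcore    # Intel Gaudi PyTorch bridge

device = torch.device("hpu")

# Placeholder model and data; substitute your own network and dataloader.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 64, device=device)
labels = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
htcore.mark_step()    # trigger graph execution after the backward pass
optimizer.step()
htcore.mark_step()    # and again after the optimizer update
print("loss:", loss.item())
```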
Distributed Training with PyTorch
Learn how to scale training of your PyTorch model with the DistributedDataParallel (DDP) API, both within a server and across servers over the network interface cards (NICs).
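The sketch below shows roughly what a DDP setup can look like on Gaudi: a placeholder model wrapped in DistributedDataParallel over the hccl backend provided by the Gaudi bridge. The launcher (for example, torchrun or mpirun), rank and world-size environment variables, and the model itself are assumptions; see the distributed training documentation for the supported invocation.

```python
import torch
import torch.distributed as dist
import torch.nn as nn
import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.distributed.hccl  # registers the "hccl" backend for Gaudi

def main():
    # Rank and world size are expected from the launcher's environment variables.
    dist.init_process_group(backend="hccl")
    device = torch.device("hpu")

    model = nn.Linear(64, 10).to(device)                     # placeholder model
    ddp_model = nn.parallel.DistributedDataParallel(model)   # gradients sync across Gaudi devices
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    inputs = torch.randn(32, 64, device=device)              # placeholder batch
    labels = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(ddp_model(inputs), labels)
    loss.backward()
    htcore.mark_step()
    optimizer.step()
    htcore.mark_step()

    if dist.get_rank() == 0:
        print("rank 0 loss:", loss.item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```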