Maximize the Power of Intel® Gaudi® 2 AI Accelerators for Generative AI and LLMs

As the world of AI continues to push boundaries, the Intel® Gaudi® 2 AI accelerator has emerged as a game-changing solution. In this webinar, our team of experts guides you through the steps to get started with these accelerators and supercharge your large language model (LLM) applications with performance, productivity, and efficiency.



Key Topics Covered

  • Introduction: Gain a comprehensive understanding of the Intel Gaudi architecture and learn how to take advantage of our large catalog of optimized models.
  • Model Migration: See how to quickly migrate models from other platforms by adding only a few lines of code (a minimal sketch follows this list).
  • Accelerate LLM Training: Discover how the Intel Gaudi 2 AI accelerator works with DeepSpeed and Hugging Face* libraries to accelerate LLM training, including fine-tuning a model from the Hugging Face Optimum for Intel Gaudi repository (a second sketch follows this list).
  • High-Performance Inference: Learn how this accelerator can deliver lightning-fast, real-time generative AI and LLM inference results.

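To make the model migration step concrete, here is a minimal sketch of moving an existing PyTorch training step from a GPU to Gaudi (the HPU device). It assumes the Intel Gaudi software stack (the habana_frameworks PyTorch package) is installed; the model, batch shapes, and hyperparameters are placeholders, not code from the webinar.

```python
# Minimal migration sketch (assumes the Intel Gaudi software stack is installed).
# A CUDA-targeted PyTorch training step typically only needs the device string
# changed and mark_step() calls added for lazy-mode execution.
import torch
import habana_frameworks.torch.core as htcore  # registers the "hpu" device

device = torch.device("hpu")  # previously: torch.device("cuda")

model = torch.nn.Linear(128, 10).to(device)        # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

inputs = torch.randn(32, 128, device=device)       # placeholder batch
labels = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
htcore.mark_step()   # flush the accumulated graph in lazy mode
optimizer.step()
htcore.mark_step()
```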

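The fine-tuning topic follows the same pattern. Below is a minimal sketch using the Optimum for Intel Gaudi library (the optimum-habana package) with a Hugging Face model. The model name, dataset, Gaudi configuration repository, and hyperparameters are placeholder assumptions, and exact argument names can differ between optimum-habana releases.

```python
# Minimal fine-tuning sketch with Optimum for Intel Gaudi (optimum-habana).
# Model, dataset, Gaudi config repo, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small placeholder dataset, tokenized for the model.
train_ds = load_dataset("imdb", split="train[:1%]").map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
)

args = GaudiTrainingArguments(
    output_dir="./gaudi-finetune",
    use_habana=True,        # run on HPU devices
    use_lazy_mode=True,     # lazy-mode graph execution
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    args=args,
    # Assumed Gaudi config repo published under the Habana Hub organization.
    gaudi_config=GaudiConfig.from_pretrained("Habana/bert-base-uncased"),
    train_dataset=train_ds,
    tokenizer=tokenizer,
)
trainer.train()
```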
Whether you’re a data scientist, AI researcher, developer, or technology enthusiast, this webinar will equip you with the knowledge and insights necessary to tap into the benefits of the Intel Gaudi 2 AI accelerator and empower your LLM-based projects with unparalleled speed and efficiency.

Featured Speakers


Greg Serochi

Developer advocate and applications engineer



Shiv Kaul

Applications engineer