Optimize Transformer Models with Tools from Intel and Hugging Face*
Overview
Transformer models are powerful neural networks that have become the standard for delivering state-of-the-art performance on tasks such as natural language processing (NLP), computer vision, and online recommendations. (Fun fact: People use transformers every time they do an internet search on Google* or Microsoft Bing*.)
But there is a challenge: Training these deep learning models at scale requires a large amount of computing power. This can make the process time-consuming, complex, and costly.
This session shares a solution: an end-to-end approach to training and inference optimization for transformers.
Join your hosts from Intel and Hugging Face* (notable for its transformers library) to learn:
- How to run multi-node, distributed CPU fine-tuning for transformers with hyperparameter optimization, using the Hugging Face transformers and Accelerate libraries plus Intel® Extension for PyTorch* (a fine-tuning sketch follows this list).
- How to optimize inference, including model quantization and distillation with Optimum Intel, the interface between the transformers library and Intel tools and libraries (a quantization sketch also follows).
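For a flavor of the first topic, here is a minimal sketch of CPU fine-tuning with Accelerate and Intel Extension for PyTorch (IPEX). The checkpoint, toy batch, and hyperparameters are illustrative assumptions, not the session's exact recipe; a real multi-node run would be configured with `accelerate config` and started with `accelerate launch`.

```python
# Minimal sketch: CPU fine-tuning with Accelerate + IPEX.
# Checkpoint and batch below are placeholders, not the session's setup.
import torch
import intel_extension_for_pytorch as ipex
from accelerate import Accelerator
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# ipex.optimize applies CPU-specific kernel and memory-layout optimizations;
# bfloat16 targets AMX/AVX-512 instructions on recent Xeon processors.
model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

# Accelerator handles device placement and (multi-node) process setup.
accelerator = Accelerator(cpu=True)
model, optimizer = accelerator.prepare(model, optimizer)

# Toy batch standing in for a real tokenized DataLoader.
batch = tokenizer(["a positive example", "a negative example"],
                  padding=True, return_tensors="pt")
batch["labels"] = torch.tensor([1, 0])

for _ in range(3):  # a few steps instead of full epochs
    with torch.autocast("cpu", dtype=torch.bfloat16):
        loss = model(**batch).loss
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```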
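And for the second topic, a minimal sketch of post-training dynamic quantization through Optimum Intel's Intel Neural Compressor backend (`pip install optimum[neural-compressor]`). The checkpoint and output directory are placeholders, and the calls shown follow the optimum-intel documentation, which may shift between versions.

```python
# Minimal sketch: post-training dynamic quantization with Optimum Intel.
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder checkpoint
)

# Dynamic quantization converts weights to int8 and quantizes activations
# on the fly, so no calibration dataset is needed.
quantizer = INCQuantizer.from_pretrained(model)
quantizer.quantize(
    quantization_config=PostTrainingQuantConfig(approach="dynamic"),
    save_directory="quantized-distilbert",  # placeholder output path
)
```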
Watch a showcase of transformer performance on the latest Intel® Xeon® Scalable processors.
Skill level: Intermediate
Featured Software
Get the Intel Extension for PyTorch as part of the Intel® AI Analytics Toolkit or as a stand-alone version. The toolkit accelerates data science and AI pipelines, from preprocessing through machine learning, and provides interoperability for efficient model development.
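Once installed (`pip install intel-extension-for-pytorch`), applying the extension to an existing model takes only a couple of lines. A minimal inference sketch, assuming a placeholder checkpoint:

```python
# Minimal sketch: CPU inference with Intel Extension for PyTorch.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# ipex.optimize swaps in CPU-optimized kernels; bfloat16 exploits AMX on
# 4th Gen Intel Xeon Scalable processors.
model = ipex.optimize(model, dtype=torch.bfloat16)

inputs = tokenizer("This session was very informative.", return_tensors="pt")
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    prediction = model(**inputs).logits.argmax(dim=-1)
```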
Learn More
- Hugging Face Trainer: a training API with built-in hyperparameter search that makes it easier to get started without writing a training loop by hand (see the sketch after this list).
- Intel® Disruptor Initiative: Participants are companies that are pushing the limits of innovation.
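As referenced above, here is a minimal sketch of Trainer's built-in hyperparameter search using the Optuna backend (`pip install optuna`). The checkpoint, tiny in-memory dataset, and search-space ranges are illustrative assumptions.

```python
# Minimal sketch: Trainer.hyperparameter_search with the Optuna backend.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tiny in-memory dataset standing in for a real tokenized corpus.
data = Dataset.from_dict({"text": ["good", "bad"] * 8, "label": [1, 0] * 8})
data = data.map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length",
                        max_length=16),
    batched=True,
)

def model_init():
    # Trainer rebuilds the model from scratch for every trial.
    return AutoModelForSequenceClassification.from_pretrained(model_name)

def hp_space(trial):
    # Search-space ranges are assumptions; adjust them for your task.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 1, 3),
    }

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="hpo-out", report_to="none"),
    train_dataset=data,
    eval_dataset=data,
)
best_run = trainer.hyperparameter_search(
    direction="minimize", backend="optuna", hp_space=hp_space, n_trials=5
)
print(best_run.hyperparameters)
```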