Developer Resources from Intel and Hugging Face*
Scale Hugging Face Transformer Performance with Intel® AI
Intel and Hugging Face* are building powerful AI optimization tools to accelerate transformers for training and inference.
Democratize Machine Learning Acceleration
The companies are collaborating to build state-of-the-art hardware and software acceleration for training, fine-tuning, and running inference with Hugging Face Transformers and the Optimum* extension. Hardware acceleration is driven by Intel® Xeon® Scalable processors, and software acceleration comes from a rich suite of optimized AI tools, frameworks, and libraries.
AI Developer Tools
The Intel® AI Portfolio and Intel Xeon Scalable processors can help you get the best performance and productivity from your models. Intel AI tools work with Hugging Face platforms for seamless development and deployment of end-to-end machine learning workflows.
AI Tools
Accelerate end-to-end data science and machine learning pipelines using popular tools based on Python* and frameworks optimized for performance and interoperability.
Optimum* for Intel
This interface is part of the Hugging Face Optimum library. It builds on the open source Intel® Neural Compressor and OpenVINO™ toolkit libraries to provide greater model compression and faster inference deployment. Use it to apply state-of-the-art optimization techniques such as quantization, pruning, and knowledge distillation to your transformer models with minimal effort.
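For example, exporting a Transformers model to OpenVINO for accelerated CPU inference takes little more than a model-class swap. The following is a minimal sketch, assuming the optimum-intel package is installed with its OpenVINO extras; the checkpoint name is illustrative:

```python
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Illustrative checkpoint; any sequence-classification model works.
model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The exported model drops into the standard Transformers pipeline API.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum makes transformer optimization straightforward."))
```

The same `from_pretrained` pattern applies to other task classes in Optimum for Intel, so existing Transformers code needs only the import and class name changed.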
Documentation
- Accelerate PyTorch* Transformers with Intel Xeon Processors Part 1
- Accelerate PyTorch Transformers with Intel Xeon Processors Part 2
- Distributed CPU Training
- Hyperparameter Search Using Trainer API (SigOpt®)
- Accelerate PyTorch Distributed Fine-Tuning
- Optimum and Intel Neural Compressor
- SetFit: Efficient Few-Shot Learning
- Optimum Documentation
Training
- SetFit 1: Few-Shot Learning in Production
- SetFit 2: Few-Shot Learning with Sentence Transformers
- Optimize Transformer Models with Tools from Intel and Hugging Face
- Accelerate Stable Diffusion* Inference on Intel CPUs
- Fine-Tune Stable Diffusion Models on Intel CPUs
- Optimize Stable Diffusion for Intel CPUs with NNCF and Hugging Face Optimum
- Smaller is Better: Q8-Chat LLM is an Efficient Generative AI Experience on Intel Xeon Processors
- Fine-Tune the Falcon 7-Billion Parameter Model with Hugging Face and oneAPI
- Run Falcon Inference on a CPU with Hugging Face Pipelines
- Hugging Face Reveals Generative AI Performance Gains with Intel Hardware
- Democratize Natural Language Processing on CPUs
Intel® Distribution of OpenVINO™ Toolkit
Optimize and deploy high-performance deep learning inference applications from devices to the cloud.
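As a sketch of the deployment side, loading and compiling a model with the OpenVINO Runtime Python API looks like the following. This assumes a recent openvino package; the model file name and input shape are illustrative:

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # illustrative OpenVINO IR file
compiled = core.compile_model(model, "CPU")  # target any supported device

# Illustrative input shape for an image model.
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy_input])[compiled.output(0)]
print(result.shape)
```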
Intel® Gaudi® Processor
Intel and Hugging Face (home of Transformer models) have joined forces to make it easier to quickly train high-quality transformer models. Accelerate your Transformer model training on Intel® Gaudi® processors with just a few lines of code. The open source Hugging Face Optimum library, combined with the SynapseAI* software suite, delivers greater productivity and lower overall costs for data scientists and machine learning engineers.
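In practice, those few lines amount to swapping the Transformers Trainer classes for their Gaudi equivalents. The following is a minimal sketch, assuming the optimum-habana package and a Gaudi-equipped machine; the model, dataset, and Gaudi configuration names are illustrative:

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_id = "bert-base-uncased"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Small illustrative slice of a text-classification dataset.
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

args = GaudiTrainingArguments(
    output_dir="./gaudi-bert",
    use_habana=True,       # run on the Gaudi device
    use_lazy_mode=True,    # lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # mixed-precision config from the Hub
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```

Aside from the `GaudiTrainer` and `GaudiTrainingArguments` substitutions and the Gaudi configuration name, the script reads like a standard Transformers training loop.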
Purpose-built for deep learning training, Intel Gaudi processors deliver efficient AI performance at lower cost. New Amazon EC2* instances that feature these processors deliver up to 40 percent better price performance for training machine learning models than the latest GPU-based Amazon EC2 instances.
More Resources
AI Machine Learning Portfolio
Explore all Intel® AI content for developers.
AI Tools
Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performing Python* libraries.
Intel® AI Hardware
The Intel portfolio for AI hardware covers everything from data science workstations to data preprocessing, machine learning and deep learning modeling, and deployment in the data center and at the intelligent edge.