Accelerate AI Inference without Sacrificing Accuracy
Overview
AI inference is often slow and memory-bound because models demand both high numerical precision and heavy computation.
This session addresses those issues with quantization: converting FP32 data to a lower precision (such as int8) to cut memory bandwidth and speed up inference while maintaining accuracy.
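To make the idea concrete, here is a minimal NumPy sketch of the underlying mapping: an FP32 tensor is projected onto the int8 range with a scale and zero point, and dequantizing it back shows the small rounding error that quantization trades for speed and memory savings. The function names and the 4x4 example tensor are illustrative, not taken from the session.

```python
# Minimal sketch of affine FP32 -> int8 quantization (NumPy only).
import numpy as np

def quantize_int8(x_fp32):
    """Map an FP32 tensor onto int8 [-128, 127] with a per-tensor scale and zero point."""
    x_min, x_max = float(x_fp32.min()), float(x_fp32.max())
    scale = (x_max - x_min) / 255.0 if x_max > x_min else 1.0
    zero_point = int(round(-128.0 - x_min / scale))
    q = np.clip(np.round(x_fp32 / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an FP32 approximation from the int8 tensor."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(x)           # int8 storage: 1/4 the bytes of FP32
print("max abs reconstruction error:", np.abs(x - dequantize(q, scale, zp)).max())
```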
AI software engineers Neo Zhang and Severine Habert introduce the tools and techniques to quantize your AI models easily and quickly, including:
- An overview of Intel® Neural Compressor and Intel® Deep Learning Boost
- A demonstration showcasing an end-to-end pipeline that trains a TensorFlow* model on a small Keras* dataset, then speeds it up with quantization (a minimal sketch follows this list)
- Performance comparisons of the FP32 and int8 models using the same script
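The session's demo code isn't reproduced on this page, but the following sketch gives a sense of the flow it describes, assuming the Intel Neural Compressor 2.x Python API (`PostTrainingQuantConfig` and `quantization.fit`). The tiny MNIST Keras model and the `CalibLoader` helper are illustrative stand-ins, not the session's actual demo assets.

```python
# Minimal sketch (install with: pip install neural-compressor tensorflow).
# The model, dataset, and CalibLoader helper are illustrative assumptions.
import tensorflow as tf
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# 1. Train a small FP32 Keras model.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1, batch_size=128)

# 2. Calibration data: an iterable of (input, label) batches with a
#    batch_size attribute can serve as the calibration dataloader.
class CalibLoader:
    def __init__(self, images, labels, batch_size=32):
        self.images, self.labels, self.batch_size = images, labels, batch_size

    def __iter__(self):
        for i in range(0, len(self.images), self.batch_size):
            yield self.images[i:i + self.batch_size], self.labels[i:i + self.batch_size]

# 3. Post-training quantization: FP32 ops are converted to int8 where the
#    accuracy-driven tuning allows it.
q_model = fit(
    model=model,
    conf=PostTrainingQuantConfig(),
    calib_dataloader=CalibLoader(x_test[:512], y_test[:512]),
)
q_model.save("int8_model")  # quantized model to compare against the FP32 baseline
```

The saved int8 model can then be run through the same evaluation script as the FP32 baseline to compare accuracy and throughput, which is the final step the demo walks through.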
Get the Software
The Intel Neural Compressor is available as part of the AI Tools, a set of eight tools and frameworks for accelerating end-to-end data science and analytics pipelines.
The toolkit speeds up data science and AI pipelines, from preprocessing through machine learning, and provides interoperability for efficient model development.