Use Low-Precision Optimizations for High-Performance Deep Learning Inference Applications
Overview
With advances in hardware acceleration and support for low precision, deep learning inference delivers higher throughput and lower latency. However, data scientists and AI developers often need to trade off accuracy against performance, and quantized inference brings its own deployment challenges due to its high computational complexity. This webinar covers techniques and strategies, such as automatic accuracy-driven tuning for post-training quantization and quantization-aware training, for overcoming these challenges.
Join us to learn about Intel’s new low-precision optimization tool and how it helped CERN openlab reduce inference time while maintaining the same level of accuracy on convolutional generative adversarial networks (GANs). The webinar offers insight into handling the strict precision constraints that are inevitable when applying low-precision computing to generative models.
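To give a flavor of the accuracy-driven tuning discussed above, the sketch below simulates the core idea: try progressively wider integer bit widths and accept the lowest one whose round-trip quantization error stays within a relative tolerance of the FP32 baseline. This is a minimal, framework-agnostic illustration; the function names and the candidate bit widths are assumptions for this example, not the API of the Intel® Low Precision Optimization Tool.

```python
# Minimal sketch of accuracy-driven post-training quantization.
# All names (quantize, tune_bit_width, rel_tol) are illustrative,
# not the Intel Low Precision Optimization Tool API.

def quantize(values, num_bits):
    """Symmetric linear quantization to signed integers of `num_bits`."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = max(abs(v) for v in values) / qmax or 1.0
    return scale, [round(v / scale) for v in values]

def dequantize(scale, ints):
    return [scale * q for q in ints]

def tune_bit_width(values, rel_tol=0.01, candidates=(4, 8, 16)):
    """Return the lowest candidate bit width meeting the accuracy target."""
    norm = max(abs(v) for v in values)
    for bits in candidates:
        scale, ints = quantize(values, bits)
        err = max(abs(v - d) for v, d in zip(values, dequantize(scale, ints)))
        if err / norm <= rel_tol:
            return bits
    return None  # no candidate met the target; fall back to FP32

weights = [0.42, -1.37, 0.05, 2.91, -0.66]
print(tune_bit_width(weights))  # → 8 for this toy tensor
```

In a real tool the "error" would be measured as end-to-end model accuracy on a calibration dataset rather than per-tensor round-trip error, and the search would also vary per-layer precision, which is what makes the tuning computationally expensive.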
Sofia Vallecorsa
AI and quantum researcher, CERN openlab
Sofia is an accomplished physicist who specializes in scientific computing, with deep expertise in machine learning and deep learning architectures, frameworks, and methods for distributed training and hyperparameter optimization. Since joining CERN in 2015, she has been responsible for several projects in machine learning, deep learning, quantum computing, and quantum machine learning, and she also supervises master's and doctoral thesis students in these fields. Sofia holds a PhD in high-energy physics from the University of Geneva.
Feng Tian
Senior deep learning engineer in the Machine Learning Performance team, Intel Architecture, Graphics, and Software (IAGS) group, Intel Corporation
Feng leads development of the Intel® Low Precision Optimization Tool and contributes to Intel®-optimized deep learning frameworks such as TensorFlow* and PyTorch*. He has 14 years of experience in software optimization and low-level driver development on Intel architecture platforms.
Accelerate data science and AI pipelines, from preprocessing through machine learning, and provide interoperability for efficient model development.