Accelerate TensorFlow* Model Inference on CPUs with Intel® AI Technology
This training session focuses on:
- Intel® Optimization of TensorFlow* on an Intel® Xeon® platform (see the snippet after this list)
- AI model optimization and quantization tool: Intel® Neural Compressor
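Intel's TensorFlow optimizations are delivered through the oneDNN library and, since TensorFlow 2.9, are enabled by default on x86 CPUs. Below is a minimal sketch of making them explicit via the documented TF_ENABLE_ONEDNN_OPTS flag; the snippet is illustrative and not part of the session materials:

```python
import os

# oneDNN graph optimizations are on by default in TensorFlow 2.9+ on x86;
# setting the flag makes the choice explicit. It must be set before
# TensorFlow is imported.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

# When the optimizations are active, TensorFlow typically logs a
# "oneDNN custom operations are on ..." message at import time.
print(tf.__version__)
```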
A demo shows the following process (a code sketch of the quantization and benchmarking steps follows the list):
- Train an FP32 TensorFlow model.
- Use Intel® Neural Compressor to quantize and optimize the FP32 model into an INT8 model.
- Test and compare the performance gain and accuracy loss between the FP32 and INT8 models on an Intel Xeon platform with Intel® Deep Learning Boost technology in the Intel® Developer Cloud.
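The following is a minimal sketch of the quantization and comparison steps, based on Intel Neural Compressor's 2.x post-training quantization API. The model paths, the 224x224x3 input shape, and the dummy calibration data are placeholder assumptions; the real demo would calibrate on real data and also measure accuracy on a labeled test set:

```python
# Sketch: post-training INT8 quantization of an FP32 TensorFlow model
# with Intel Neural Compressor, then a rough latency comparison.
# Paths, input shape, and the dummy calibration data are placeholders.
import time

import numpy as np
import tensorflow as tf
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Dummy calibration data shaped like the (assumed) model input.
dataset = Datasets("tensorflow")["dummy"](shape=(100, 224, 224, 3))
calib_dataloader = DataLoader(framework="tensorflow", dataset=dataset)

# Post-training static quantization with default (INT8) settings.
q_model = fit(
    model="./fp32_saved_model",     # hypothetical path to the trained FP32 model
    conf=PostTrainingQuantConfig(),
    calib_dataloader=calib_dataloader,
)
q_model.save("./int8_saved_model")  # hypothetical output path


def mean_latency(saved_model_dir, runs=50):
    """Average single-image inference latency of a SavedModel, in seconds."""
    # Assumes both models are stored in SavedModel format; depending on the
    # Neural Compressor version, the INT8 model may instead be written as a
    # frozen graph, in which case the loading code would differ.
    model = tf.saved_model.load(saved_model_dir)
    infer = model.signatures["serving_default"]
    input_name = list(infer.structured_input_signature[1].keys())[0]
    x = tf.constant(np.random.rand(1, 224, 224, 3).astype(np.float32))
    infer(**{input_name: x})        # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        infer(**{input_name: x})
    return (time.perf_counter() - start) / runs


print("FP32 mean latency:", mean_latency("./fp32_saved_model"))
print("INT8 mean latency:", mean_latency("./int8_saved_model"))
```

The default PostTrainingQuantConfig performs static INT8 quantization, which is where Intel Deep Learning Boost (VNNI) instructions provide their speedup on Xeon CPUs.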
Speaker
Zhang (Neo) Jianyu is a senior software engineer for Intel® AI software solutions. He focuses on AI solutions and performance optimization on Intel® platforms (CPUs and GPUs). He holds a master's degree in pattern recognition and AI from Northwestern Polytechnical University and has experience in AI, virtualization, communications, and embedded software development.
Product and Performance Information
Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.