Introducing a New Tool for Neural Network Profiling & Inference Experiments
Overview
If you use the Intel® Distribution of OpenVINO™ toolkit (or even if you don’t yet), the latest release introduces a new profiling tool that makes it easier to run and optimize deep learning models.
Deep Learning Workbench is a production-ready tool that enables developers to visualize key performance metrics such as latency, throughput, and performance counters for neural network topologies and their layers. It also streamlines configuration for inference experiments including int8 calibration, accuracy check, and automatic detection of optimal performance settings.
Join senior software engineer Shubha Ramani for an overview and demonstration of Deep Learning Workbench, where she covers:
- How to download, install, and get started with the tool
- Its new features, including model analysis, int8 and Winograd optimizations, accuracy, and benchmark data
- How to run experiments with key parameters such as batch size and parallel streams to determine the optimal configuration for your application
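The batch-size and parallel-stream experiments mentioned above can also be run from the command line with the toolkit's benchmark_app. A minimal sketch, assuming the toolkit is installed and `model.xml` is a placeholder for an IR file already produced by the Model Optimizer:

```shell
# Sweep batch size and number of parallel inference streams on CPU.
# benchmark_app ships with the Intel Distribution of OpenVINO toolkit;
# model.xml here is a hypothetical IR file from the Model Optimizer.
for b in 1 2 4 8; do
  for s in 1 2 4; do
    benchmark_app -m model.xml -d CPU -b "$b" -nstreams "$s" -niter 100
  done
done
```

Comparing the throughput and latency reported for each combination is, in essence, the experiment grid that Deep Learning Workbench automates through its visual interface.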
Get the Software
Download the latest version of the Intel® Distribution of OpenVINO™ toolkit.
Shubha Ramani
Senior software engineer, Intel Corporation
Shubha's specialties span all facets of deep learning and AI. In her current role, she focuses on the Intel Distribution of OpenVINO toolkit, including helping customers use its full capabilities and build complex deep learning prototypes. Additionally, she helps customers embrace world-class automotive driving SDKs and tools from Intel, and develops complex, real-world C++ samples using the Autonomous Driving Library for inclusion in automated driving solutions.
Shubha holds a master of science degree in electrical engineering, with a focus on embedded systems software, from the University of Colorado Boulder, and a bachelor of science degree in electrical engineering from Texas A&M University in College Station.
Optimize models trained using popular frameworks like TensorFlow*, PyTorch*, and Caffe*, and deploy across a mix of Intel® hardware and environments.