Evolving Perspectives on Operational AI: MLOps with Full Stack Optimizations
Overview
Operational AI is the practice of using models and algorithms to integrate AI into day-to-day customer experiences and business processes.
This session explores the concept broadly and then dives into a methodology for achieving it: combining MLOps† components with AI optimizations to deploy performant, scalable AI solutions in production while ensuring that the individual components of an AI system are optimized across the stack.
Key learnings:
- Practical examples of implementing MLOps components (such as model registries, data versioning, and monitoring) alongside AI tools such as model compression and Intel®-optimized AI frameworks
- How software and hardware optimizations can accelerate critical stages of the machine learning lifecycle
- How to use the combination of MLOps and AI optimization to maximize return on investment (ROI) and AI system quality (see the sketch after this list)
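The sketch below illustrates that combination: an Intel-optimized framework on the training side paired with a model registry on the MLOps side. MLflow is assumed here purely as an example registry, and the dataset, model, and registry name are illustrative; the session does not prescribe these specific tools.

```python
# A minimal sketch of the "MLOps + optimization" combination.
from sklearnex import patch_sklearn
patch_sklearn()  # must run before sklearn imports to swap in Intel-optimized kernels

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import mlflow

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

with mlflow.start_run():
    # Training now runs on the Intel® Extension for Scikit-learn kernels.
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    mlflow.log_metric("train_accuracy", clf.score(X, y))
    # Registering the model supplies the versioning and monitoring hooks
    # on the MLOps side of the combination.
    mlflow.sklearn.log_model(clf, "model", registered_model_name="demo-classifier")
```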
This session takes advantage of the latest Intel® hardware and software available in the Intel® Developer Cloud.
Skill level: Intermediate
Featured Software
Get the following tool as a stand-alone component or as part of the Intel® AI Analytics Toolkit.
Intel® Extension for Scikit-learn*
Download Code Samples
Fine-Tuning Text Classification Model with Intel Neural Compressor
Get Started with Intel Extension for PyTorch
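As a flavor of what the Intel Extension for PyTorch sample covers, here is a minimal inference sketch; the toy model, shapes, and bfloat16 choice are illustrative assumptions rather than the sample's actual code.

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# A toy classifier standing in for a real fine-tuned model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).eval()

# ipex.optimize applies operator fusion and memory-layout optimizations
# for Intel CPUs; bfloat16 assumes hardware support (e.g., recent Xeon
# processors), so drop the dtype argument to stay in float32.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    logits = model(torch.randn(8, 128))

print(logits.shape)  # torch.Size([8, 2])
```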
†MLOps (machine learning operations) applies operational rigor to the machine learning lifecycle so that models can be dependably built, deployed, and maintained in production.
Accelerate data science and AI pipelines, from preprocessing through machine learning, and provide interoperability for efficient model development.