OpenVINO™ toolkit: An open source AI toolkit that makes it easier to write once, deploy anywhere.
What's New in Version 2024.6
The OpenVINO™ toolkit 2024.6 release enhances generative AI (GenAI) accessibility with improved large language model (LLM) performance and expanded model coverage. It also boosts portability and performance for deployment anywhere: at the edge, in the cloud, or locally.
- Includes updates for enhanced stability and improved LLM performance.
- Support for the latest Intel® Arc™ B-series graphics (formerly code named Battlemage).
- Memory optimizations for improved inference time.
- Improved LLM performance with GenAI API optimizations (see the GenAI API sketch after this list).
- Noteworthy notebooks added: Visual-language assistant with GLM-Edge-V, Local AI, and multimodal understanding and generation with Janus.
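To make the GenAI API item above concrete, here is a minimal sketch of LLM text generation with the OpenVINO GenAI package. It assumes the `openvino-genai` package is installed and that a model has already been exported to OpenVINO format into a local directory; the directory name and device string are placeholders, not part of the release notes.

```python
# Minimal sketch of LLM text generation with the OpenVINO GenAI API.
# Assumptions: `openvino-genai` is installed and an LLM has already been
# exported to OpenVINO format (e.g. with optimum-cli) into ./llm_model.
# Both the directory and the device name below are placeholders.
import openvino_genai

# Load the exported model and compile it for the chosen device
# ("CPU", "GPU", or "NPU", depending on the available hardware).
pipe = openvino_genai.LLMPipeline("./llm_model", "CPU")

# Generate a short completion; max_new_tokens bounds the output length.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```

The same pipeline can be retargeted by changing only the device string, for example to "GPU" on Intel® Arc™ B-series graphics.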
Features from 2024.5
Easier Model Access and Conversion
| Product | Details |
|---|---|
| New Model Support | |
Generative AI and LLM Enhancements
Expanded model support and accelerated inference.
| Feature | Details |
|---|---|
| New Jupyter* Notebooks | Noteworthy notebooks added: Sam2, Llama3.2, Llama3.2 - Vision, Wav2Lip, Whisper, and Llava. |
| Intel® GPU Optimizations | |
| NNCF Updates | The Neural Network Compression Framework (NNCF) implements a new method for generating synthetic text data, which allows LLMs to be compressed more accurately using data-aware methods without datasets (see the compression sketch after this table). |
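As a rough illustration of the data-aware weight compression the NNCF row refers to, the sketch below compresses an exported OpenVINO LLM to INT4 using a small calibration set. The model path, tokenizer, calibration text, and compression parameters are placeholder assumptions, and the new synthetic text data generation itself is not shown; in practice it would supply the calibration samples when no real dataset is available.

```python
# Rough sketch of data-aware INT4 weight compression with NNCF on an
# exported OpenVINO LLM. The paths, calibration text, and compression
# parameters are placeholders; the synthetic text data generation
# described above would supply the calibration samples in practice.
import openvino as ov
import nncf
from transformers import AutoTokenizer

core = ov.Core()
model = core.read_model("./llm_model/openvino_model.xml")  # placeholder path
tokenizer = AutoTokenizer.from_pretrained("./llm_model")   # placeholder path

calibration_texts = [
    "OpenVINO is an open source toolkit for optimizing and deploying AI inference.",
]

def transform_fn(text):
    # Convert one raw text sample into the tensors the model expects.
    # (A real LLM export may expect extra inputs such as position_ids.)
    return dict(tokenizer(text, return_tensors="np"))

compressed = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_SYM,  # 4-bit symmetric weight format
    ratio=0.8,                               # share of weights compressed to INT4
    dataset=nncf.Dataset(calibration_texts, transform_fn),
)
ov.save_model(compressed, "./llm_model_int4/openvino_model.xml")
```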
More Portability and Performance
Develop once, deploy anywhere. OpenVINO toolkit enables developers to run AI at the edge, in the cloud, or locally.
| Product | Details |
|---|---|
| Intel® Hardware Support | Support for Intel® Xeon® 6 processors with P-cores (formerly code named Granite Rapids) and the Intel® Core™ Ultra 200V processor family (formerly code named Arrow Lake-S). |
| GenAI API Enhancements | |
Sign Up for Exclusive News, Tips & Releases
Be among the first to learn about everything new with the Intel® Distribution of OpenVINO™ toolkit. By signing up, you get early access to product updates and releases, exclusive invitations to webinars and events, training and tutorial resources, contest announcements, and other breaking news.
Resources
Community and Support
Explore ways to get involved and stay up to date with the latest announcements.
Get Started
Optimize, fine-tune, and run comprehensive AI inference using the included model optimizer, runtime, and development tools (see the sketch below).
The productive, smart path to freedom from the economic and technical burdens of proprietary alternatives for accelerated computing.
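For reference, a minimal sketch of that optimize-and-run flow with the OpenVINO Runtime Python API follows; the model file, device name, and input handling are placeholder assumptions, and a single statically shaped input is assumed.

```python
# Minimal sketch of the core OpenVINO Runtime flow: read a model, compile
# it for a device, and run inference. The model file and device name are
# placeholders, and a single statically shaped input is assumed.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")          # placeholder IR file
compiled = core.compile_model(model, "CPU")   # or "GPU", "NPU", "AUTO"

# Build a dummy input matching the model's first input and run inference.
input_tensor = np.random.rand(*compiled.inputs[0].shape).astype(np.float32)
result = compiled(input_tensor)[compiled.outputs[0]]
print(result.shape)
```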