• Create models for AI tasks, including classification, object detection, semantic segmentation, and anomaly detection.
  • Annotate data with as few as 20 to 30 images, and then let active learning help you teach the model as it learns.
  • Build your model into a multistep, smart application by chaining two or more tasks, with no need to write additional code.
  • Expedite data annotation and easily segment images with professional drawing features like a pencil, polygon tool, and OpenCV GrabCut.
  • Output deep learning models in TensorFlow or PyTorch formats (where available), or as an optimized model for the OpenVINO toolkit to run on Intel® architecture CPUs, GPUs, and VPUs (see the inference sketch after this list).
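
  As a minimal sketch of the last point, the snippet below assumes a trained project has already been exported as an OpenVINO IR pair (model.xml / model.bin) and runs it on a single image with the OpenVINO Python runtime. The file paths, input size, and preprocessing are assumptions and depend on the exported model's metadata.

      # Minimal sketch: run a model exported in OpenVINO IR format on one image.
      # File names, input shape, and preprocessing are assumptions; check the
      # metadata shipped with your exported project.
      import cv2
      import numpy as np
      import openvino as ov

      core = ov.Core()
      model = core.read_model("exported_project/model.xml")    # assumed path
      compiled = core.compile_model(model, device_name="CPU")  # or "GPU"

      image = cv2.imread("sample.jpg")                          # assumed input image
      blob = cv2.resize(image, (224, 224)).transpose(2, 0, 1)   # HWC -> CHW (assumed size)
      blob = np.expand_dims(blob, 0).astype(np.float32)         # add batch dimension

      result = compiled(blob)[compiled.output(0)]               # raw scores / logits
      print("Top class index:", int(np.argmax(result)))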
     
 
  • More than 200 pretrained models from the Open Model Zoo for the OpenVINO toolkit, covering a wide variety of use cases
  • Use optimizations directly from the Hugging Face repository for an expansive range of generative AI (GenAI) models and large language models (LLMs); see the sketch after this list
  • Option to import custom models from PyTorch*, TensorFlow*, and ONNX* (Open Neural Network Exchange)
  • Built-in OpenVINO toolkit AI inference runtime optimizations and benchmarking
  • Performance data for different topologies and layers
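
  As one hedged illustration of the Hugging Face and custom-model import points above, the sketch below uses the Optimum Intel integration to load a Hugging Face causal LLM as an OpenVINO model and generate text. The model ID is a placeholder, and availability of the optimum-intel package (pip install "optimum[openvino]") is assumed.

      # Sketch, assuming the optimum-intel package is installed.
      # The model ID is a placeholder; substitute any supported causal LM.
      from optimum.intel import OVModelForCausalLM
      from transformers import AutoTokenizer

      model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder model
      tokenizer = AutoTokenizer.from_pretrained(model_id)

      # export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
      model = OVModelForCausalLM.from_pretrained(model_id, export=True)

      inputs = tokenizer("OpenVINO makes inference", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))

  For the built-in benchmarking point, the OpenVINO toolkit also ships a benchmark_app command-line tool that reports throughput and latency for a converted model on a chosen device.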
     
 
  • Standardized development interfaces: JupyterLab and Microsoft Visual Studio Code* IDEs for an elevated coding experience.
  • Ready-to-use reference implementations: Preconfigured, use-case-specific applications with the complete stack of reusable software.
  • OpenVINO toolkit samples and notebooks: Computer vision, generative AI, and LLM use cases.
  • Diverse component integration: Import source code, native applications, Docker* containers, and Helm* charts directly from popular repositories.

 

{"limitDisplayedContent":"showAll","collectionRelationTags":{"relations":{"AND":["etm-454697a6b0ca41e8a0f6606316e92b7c","etm-edc5833b87634264b1c6a942a48cfdb6","etm-1a133ac8ca014bf1aa9800e190345ba2","etm-b21347d4d9474d5081968efe45da1418"],"Child":["822587","822589","831072","836202"]},"featuredIds":[]},"collectionId":"822461","resultPerPage":4.0,"filters":[],"coveoRequestHardLimit":"1000","accessDetailsPagePath":"/content/www/us/en/secure/design/internal/access-details.html","collectionGuids":["etm-454697a6b0ca41e8a0f6606316e92b7c","etm-1a133ac8ca014bf1aa9800e190345ba2","etm-edc5833b87634264b1c6a942a48cfdb6","etm-b21347d4d9474d5081968efe45da1418"],"cardView":true,"sorting":"Newest","defaultImagesPath":"/content/dam/www/public/us/en/images/uatable/default-icons","coveoMaxResults":5000,"coveoSplitSize":0,"fpgaFacetRootPaths":"{\"fpgadevicefamily\":[\"Primary Content Tagging\",\"Intel® FPGAs\",\"Intel® Programmable Devices\"],\"quartusedition\":[\"Primary Content Tagging\",\"Intel® FPGAs\",\"Intel® Quartus Software\"],\"quartusaddon\":[\"Primary Content Tagging\",\"Intel® FPGAs\",\"Intel® Quartus Software - Add-ons\"],\"fpgaplatform\":[\"Primary Content Tagging\",\"Intel® FPGAs\",\"Intel® FPGA Platforms\"]}","newWrapperPageEnabled":true,"descendingSortingForNumericalFacetsName":"[\"Intel® Quartus® Prime Pro Edition\",\"Intel® Quartus® Prime Lite Edition\",\"Intel® Quartus® Prime Standard Edition\",\"Quartus® II Subscription Edition\",\"Quartus® II Web Edition\"]","columnsConfiguration":{"idColumn":false,"dateColumn":false,"versionColumn":false,"contentTypeColumn":false,"columnsMaxSize":0},"dynamicColumnsConfiguration":[],"updateCollateralMetadataEnabled":true,"relatedAssetsEnable":true,"disableExpandCollapseAll":false,"enableRelatedAssetsOnExpandAll":false,"disableBlueBanner":false,"isICS":false}