Intel® Tiber™ Edge Platform
Develop, deploy, run, manage, and scale AI and edge solutions on standard hardware with cloud-like simplicity.
Accelerated AI and App Development
The edge platform provides a curated set of components and intuitive workflows to accelerate AI model creation, AI model optimization, and application development.
AI Inferencing Optimized for the Edge
With the built-in AI runtime for the OpenVINO™ toolkit, this platform enables you to accelerate AI inference with lower latency and higher throughput while maintaining accuracy, reducing model footprint, and optimizing hardware use.
Flexible Development with an Open Ecosystem
Easily integrate with your existing brownfield resources with support for diverse hardware architectures, accelerators, third-party applications, optimizations, model repositories (such as Hugging Face* and Open Model Zoo), and AI training platforms.
Simplified Solution Management
The edge platform offers dynamic application deployment with zero-touch policy-based provisioning, orchestration, and life cycle management.
Accelerated Application Development
Develop Vision Models
- Create models for AI tasks such as classification, object detection, semantic segmentation, and anomaly detection.
- Annotate as few as 20 to 30 images to get started, and then let active learning help you refine the model as you correct its predictions.
- Turn your model into a multistep, smart application by chaining two or more tasks, with no additional code required.
- Expedite data annotation and easily segment images with professional drawing features such as a pencil, a polygon tool, and OpenCV GrabCut.
- Output deep learning models in TensorFlow or PyTorch formats (where available) or as an optimized model for the OpenVINO toolkit to run on Intel® architecture CPUs, GPUs, and VPUs.
Optimize AI Models
- Choose from more than 200 pretrained models in Open Model Zoo for the OpenVINO toolkit, covering a wide variety of use cases
- Apply optimizations directly from the Hugging Face* repository for an expansive range of generative AI (GenAI) models and large language models (LLMs)
- Import custom models from PyTorch*, TensorFlow*, and ONNX* (Open Neural Network Exchange)
- Use built-in OpenVINO toolkit AI inference runtime optimizations and benchmarking
- Review performance data for different topologies and layers
Develop Applications
- Standardized development interfaces: JupyterLab and Microsoft Visual Studio Code* IDEs for an elevated coding experience.
- Ready-to-use reference implementations: Preconfigured, use-case-specific applications with the complete stack of reusable software.
- OpenVINO toolkit samples and notebooks: Computer vision, generative AI, and LLM use cases.
- Diverse component integration: Import source code and native applications, Docker* containers, and Helm* charts directly from popular repositories.
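As an illustration of the container path, an application could be packaged with a minimal Dockerfile along these lines before being imported into the platform (the script name, model directory, and Python version here are hypothetical placeholders, not platform requirements):

```dockerfile
# Hypothetical example: containerize a Python inference app that uses the
# OpenVINO runtime. app.py and model/ are placeholder names.
FROM python:3.11-slim

WORKDIR /app

# Install the OpenVINO runtime for Python
RUN pip install --no-cache-dir openvino

# Copy the application code and the optimized model files
COPY app.py ./
COPY model/ ./model/

CMD ["python", "app.py"]
```

The resulting image can then be pushed to a container registry and deployed through the platform's orchestration workflows, or wrapped in a Helm* chart for Kubernetes-based placement.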
Simplified Solution Management
After you develop your solution, this edge platform enables you to manage all your applications, infrastructure, and AI from a single pane of glass.
Single Pane of Glass
For day-0, day-1, and day-2 operations involving infrastructure, applications, and AI.
Dynamic Workload Placement
Address connectivity challenges and improve workload efficiency between near and far edges.
Closed-Loop Automation
Track and automate application deployment based on policies and observability using deep hardware-aware telemetry.
Zero-Trust Security
Protect data and prevent incidents end-to-end.
Unlock the Possibilities at the Edge
Featured Solutions You Can Get Started with Today
Intel® Distribution of OpenVINO™ Toolkit
Run AI inference, optimize models, and deploy across multiple platforms.
Intel® Geti™ Software
Simplify computer vision model development with small data sets, active learning, an intuitive UX, and built-in collaboration.
Streamline Your Path to Edge Innovation
Discover how this edge platform enables you to achieve lower total cost of ownership while overcoming the complexities that hold back edge solutions.