Developer Resources from Intel and Prediction Guard*
Guard Your Data, Safeguard LLMs, and Unlock AI Value
Prediction Guard* and Intel collaborated to scale a private, end-to-end generative AI (GenAI) platform that:
- Provides access to LLMs, embeddings, vision models, and more
- Safeguards sensitive data
- Prevents common AI malfunctions
Businesses can access the platform as:
- A managed cloud offering running on Intel® Gaudi® 2 processors in Intel® Tiber™ Developer Cloud
- A self-hosted solution in their own Intel Tiber Developer Cloud account
- An on-premises GenAI solution running on Intel Gaudi processors, Intel® Xeon® processors, or Intel® Core™ Ultra processors
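Whichever deployment option is chosen, applications talk to the platform through an API. As a minimal sketch, the snippet below assembles (without sending) an OpenAI-style chat-completion request; the endpoint URL and model name here are illustrative assumptions, not confirmed Prediction Guard values — consult the Prediction Guard documentation for the actual API.

```python
import json
import urllib.request

# Hypothetical endpoint and model name -- placeholders for illustration only.
API_URL = "https://api.predictionguard.com/chat/completions"

def build_chat_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble (but do not send) an OpenAI-style chat-completion request."""
    payload = {
        "model": "example-llm",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("pg-xxxx", "Summarize our data-retention policy.")
print(req.get_method(), req.full_url)
```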
AI Developer Tools
AI Tools from Intel empower Prediction Guard to provide businesses with a secure, private platform for their AI development.
AI Tools
Accelerate end-to-end data science and machine learning pipelines using popular tools based on Python* and frameworks optimized for performance and interoperability.
Optimum* for Intel
This interface is part of the Hugging Face Optimum* library. It builds on top of the open source Intel® Neural Compressor and OpenVINO™ toolkit libraries to deliver greater model compression and faster inference. Use it to apply state-of-the-art optimization techniques such as quantization, pruning, and knowledge distillation to your transformer models with minimal effort.
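To make the quantization technique mentioned above concrete, here is a toy, stdlib-only illustration of symmetric int8 post-training quantization — conceptually what tools like Intel Neural Compressor automate, not their actual implementation:

```python
# Toy symmetric int8 quantization: map float weights to integers in
# [-127, 127] with one scale factor, then map back and measure the error.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.02, -1.27, 0.5, 0.88]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, max_err)
```

Real toolchains add calibration data, per-channel scales, and accuracy-aware tuning on top of this basic idea.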
Resources
Documentation
- Prediction Guard Documents
- Prediction Guard Lessens Risks for LLM Applications at Scale
- Technical and How-To Articles for AI Tools
- Optimum for Optimizing Models on Intel Gaudi Accelerators
- OpenVINO Toolkit for Optimizing Models and Serving on Intel Xeon Processors and CPUs
- Intel Gaudi Processors and Intel® AI Software
Training
- Get Started with AI Tools
- Host a Private Chat Interface for Your Company with Prediction Guard
- Text-to-SQL Webinar with Prediction Guard
- Introduction to Using LLMs
- Intel Gaudi Processors and Intel AI Software
- Webinar: How Prediction Guard Delivers Trustworthy AI on Intel Gaudi 2 AI Accelerators
- Video: Featured Customer Story on CNN
- Video: Revolutionize AI Safety and Accuracy (Intel Innovation 2023)
More Resources
AI Machine Learning Portfolio
Explore all Intel® AI content for developers.
AI Tools
Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performing Python* libraries.
Intel® AI Hardware
The Intel portfolio for AI hardware covers everything from data science workstations to data preprocessing, machine learning and deep learning modeling, and deployment in the data center and at the intelligent edge.
Intel® Gaudi® Processor
Intel and Hugging Face* (home of Transformer models) have joined forces to make it easier to quickly train high-quality Transformer models. Accelerate your Transformer model training on Intel® Gaudi® processors with just a few lines of code. The Hugging Face Optimum open source library, combined with the Intel® Gaudi® software suite, delivers greater productivity and lower overall costs for data scientists and machine learning engineers.
Designed for deep learning training, Intel Gaudi processors pair strong AI performance with cost-effectiveness. New Amazon EC2* instances that feature these processors deliver up to 40 percent better price performance for training machine learning models than the latest GPU-based Amazon EC2 instances.
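"Price performance" in the claim above is simply training throughput per dollar. The figures below are made-up placeholders to show the arithmetic, not published benchmark numbers:

```python
# Hypothetical numbers purely to illustrate the price-performance metric
# (samples trained per dollar of instance time); not actual benchmarks.
def price_performance(samples_per_hour, dollars_per_hour):
    return samples_per_hour / dollars_per_hour

accel_a = price_performance(samples_per_hour=14_000, dollars_per_hour=13.1)
accel_b = price_performance(samples_per_hour=12_000, dollars_per_hour=15.7)
improvement = (accel_a / accel_b - 1) * 100
print(f"{improvement:.0f}% better price performance")
```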
Resources
Training
- Host a Private Chat Interface for Your Company with Prediction Guard
- Text-to-SQL Webinar with Prediction Guard
- Introduction to Using LLMs
- Tutorial: Use Intel Gaudi Processors with TensorFlow*
- Tutorial: Use Intel Gaudi Processors with PyTorch*
- Intel Gaudi Processors and Intel® AI Software
- Webinar: How Prediction Guard Delivers Trustworthy AI on Intel Gaudi 2 AI Accelerators
- Video: Revolutionize AI Safety and Accuracy (Intel Innovation 2023)
Intel® Tiber™ Developer Cloud
This resource gives developers access to Intel hardware, including the latest Intel Gaudi 2 AI accelerator.
Prediction Guard runs in production on Intel Tiber Developer Cloud, and you can host a secure, private instance of the Prediction Guard GenAI platform there in your own account.
Resources
Training
- Host a Private Chat Interface for Your Company with Prediction Guard
- Text-to-SQL Webinar with Prediction Guard
- Introduction to Using LLMs
- Intel Tiber Developer Cloud Training Videos
- Webinar: How Prediction Guard Delivers Trustworthy AI on Intel Gaudi 2 AI Accelerators
- Video: Featured Customer Story on CNN
- Video: Revolutionize AI Safety and Accuracy (Intel Innovation 2023)
Open Platform for Enterprise AI (OPEA)
OPEA streamlines the implementation of enterprise-grade GenAI. This open source ecosystem helps you efficiently integrate secure, performant, and cost-effective GenAI workflows into your process to create business value.
The OPEA platform includes:
- A detailed framework of composable building blocks for state-of-the-art GenAI systems, including LLMs, data stores, and prompt engines
- Architectural blueprints of RAG AI component stack structure and end-to-end workflows
- A four-step assessment for grading GenAI systems on performance, features, trustworthiness, and enterprise-grade readiness
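The composable-building-block idea can be sketched in plain Python: a toy RAG pipeline wiring a retriever, a prompt template, and a stubbed LLM. The component interfaces here are invented for illustration; OPEA's real microservice APIs are defined in its open source repositories.

```python
# Toy RAG pipeline illustrating OPEA-style composable building blocks.
DOCS = [
    "Gaudi 2 is an AI accelerator for deep learning training.",
    "Prediction Guard filters prompt injections and PII.",
]

def retrieve(query, docs):
    """Naive keyword retriever: rank docs by words shared with the query."""
    terms = set(query.lower().split())
    return max(docs, key=lambda d: len(terms & set(d.lower().split())))

def build_prompt(query, context):
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

def stub_llm(prompt):
    # Stand-in for a call to an LLM serving endpoint.
    return "See context above."

query = "What does Prediction Guard filter?"
context = retrieve(query, DOCS)
answer = stub_llm(build_prompt(query, context))
print(context)
```

In a production OPEA deployment, each of these functions would be a separately deployed, independently scalable microservice rather than an in-process call.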