Skip To Main Content
Intel logo - Return to the home page
My Tools

Select Your Language

  • Bahasa Indonesia
  • Deutsch
  • English
  • Español
  • Français
  • Português
  • Tiếng Việt
  • ไทย
  • 한국어
  • 日本語
  • 简体中文
  • 繁體中文
Sign In to access restricted content

Using Intel.com Search

You can easily search the entire Intel.com site in several ways.

  • Brand Name: Core i9
  • Document Number: 123456
  • Code Name: Emerald Rapids
  • Special Operators: “Ice Lake”, Ice AND Lake, Ice OR Lake, Ice*

Quick Links

You can also try the quick links below to see results for most popular searches.

  • Product Information
  • Support
  • Drivers & Software

Recent Searches

Sign In to access restricted content

Advanced Search

Only search in

Sign in to access restricted content.

The browser version you are using is not recommended for this site.
Please consider upgrading to the latest version of your browser by clicking one of the following links.

  • Safari
  • Chrome
  • Edge
  • Firefox

How Prediction Guard Delivers Trustworthy AI on Intel® Gaudi® 2 AI Accelerators

@IntelDevTools

Subscribe Now

Stay in the know on all things CODE. Updates are delivered to your inbox.

Sign Up

Overview

Large language models (LLM) promise to revolutionize how enterprises operate, but making them production-ready means solving privacy risks, security vulnerabilities, and performance bottlenecks.

Not so easy.

This session focuses on how AI startup Prediction Guard found a solution to these challenges by using the processing power of Intel® Gaudi® 2 AI accelerators in the Intel® Tiber™ AI Cloud.1 The topics include: 

  • Prediction Guard’s pioneering work with hosting open source LLMs like Llama 2 and neural-chat-7B in a secure, privacy-preserving environment with filters for PII, prompt-injection attacks, toxic outputs, and factual inconsistencies.
  • How Prediction Guard optimized batching, model replication, tensor shaping, and hyperparameters for 2x throughput gains and industry-leading time to first token for streaming.
  • Architectural insights and best practices for capitalizing on LLMs.

Skill level: Expert

 

Featured Software

This session showcases the Intel Tiber AI Cloud: Learn More | Sign Up

 

Download Code Samples

Intel and Hugging Face* Neural-Chat-7B

See All Code Samples

 

Other Resources

Ecosystem Developer Hub

Intel® Liftoff for Startups

Jump to:


You May Also Like
 

   

You May Also Like

Related Articles

Prediction Guard Reduces Risks in LLM Applications

Trusted AI in the Intel Tiber AI Cloud

Seekr*: Build Trustworthy LLMs for Evaluating and Generating Content at Scale

Accelerate Meta* Llama 3 with Intel AI Solutions

Related Webinar

How to Use Intel-Optimized AI Software in the Cloud

  • Company Overview
  • Contact Intel
  • Newsroom
  • Investors
  • Careers
  • Corporate Responsibility
  • Inclusion
  • Public Policy
  • © Intel Corporation
  • Terms of Use
  • *Trademarks
  • Cookies
  • Privacy
  • Supply Chain Transparency
  • Site Map
  • Recycling
  • Your Privacy Choices California Consumer Privacy Act (CCPA) Opt-Out Icon
  • Notice at Collection

Intel technologies may require enabled hardware, software or service activation. // No product or component can be absolutely secure. // Your costs and results may vary. // Performance varies by use, configuration, and other factors. Learn more at intel.com/performanceindex. // See our complete legal Notices and Disclaimers. // Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.

Intel Footer Logo