Tackle LLM Hallucinations at Scale in the Enterprise
Overview
A critical need has emerged in the industry to detect and eliminate LLM hallucinations, particularly at scale in enterprise settings. One effective technique for retrieval augmented generation (RAG) applications is to score generated text against its source documents with a dedicated evaluation model, identifying hallucinations consistently and ensuring the reliability of generated output. This webinar covers the concepts and methods behind implementing the Hughes Hallucination Evaluation Model (HHEM) service and the results obtained with this approach.
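As a taste of what such scoring looks like in practice (a minimal sketch, not the webinar's reference code), the openly released HHEM checkpoint on Hugging Face can score a source/response pair for factual consistency. The snippet assumes the vectara/hallucination_evaluation_model checkpoint and the CrossEncoder interface documented for its original open release; newer HHEM releases may expose a different loading API.

```python
# Minimal sketch: scoring generated text against its source with HHEM.
# Assumes the open "vectara/hallucination_evaluation_model" checkpoint
# and the sentence-transformers CrossEncoder interface from its
# original release; newer HHEM versions may load differently.
from sentence_transformers import CrossEncoder

hhem = CrossEncoder("vectara/hallucination_evaluation_model")

# Each pair is (source passage, generated text). The model returns a
# factual-consistency score in [0, 1]; higher means better grounded.
pairs = [
    ("The capital of France is Paris.",
     "Paris is the capital of France."),        # consistent
    ("The capital of France is Paris.",
     "The capital of France is Marseille."),    # hallucinated
]
for (source, generated), score in zip(pairs, hhem.predict(pairs)):
    print(f"{score:.3f}  {generated!r}")
```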
Through a collaboration with Intel, the HHEM scoring system has accrued positive results across the industry. Intel's neural-chat-7b model, for example, has achieved the lowest hallucination rate on the Vectara* leaderboard of any model of its size. This webinar is designed for enterprise developers and architects, as well as leaders in generative AI and AI analytics.
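At enterprise scale, a score like HHEM's is typically used as an automated guardrail rather than inspected by hand. The sketch below shows one common pattern (an illustration only, with a made-up 0.5 threshold, not a method prescribed by the webinar): gate each RAG answer on its best consistency score across the retrieved passages, and route poorly grounded answers to review instead of serving them.

```python
# Illustrative guardrail: only serve RAG answers that HHEM judges
# sufficiently grounded in at least one retrieved passage. The 0.5
# threshold is a placeholder; a real deployment would tune it.
from sentence_transformers import CrossEncoder

# Same open HHEM checkpoint as in the previous sketch.
hhem = CrossEncoder("vectara/hallucination_evaluation_model")

def answer_with_guardrail(passages: list[str], answer: str,
                          threshold: float = 0.5) -> str:
    # Score the answer against every retrieved passage and keep the
    # best-supported reading.
    best = max(hhem.predict([(p, answer) for p in passages]))
    if best < threshold:
        # Likely hallucination: flag it rather than serve it.
        return "Answer not sufficiently grounded; flagged for review."
    return answer
```

Because the check is a single model call per answer, it can run unattended across large query volumes, which is what makes this kind of screening viable at enterprise scale.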
Primary topics include:
- Learn why LLMs hallucinate and what methods exist to mitigate hallucinations.
- Gain familiarity with Vectara’s RAG-as-a-service.
- Understand what measures are incorporated in the scoring system used by Vectara’s HHEM.
- Evaluate the results in a number of real-world use cases.
- See what features Vectara's Mockingbird model offers for RAG-specific output generation.
Skill level: Intermediate