Intel Xeon Processors Accelerate GenAI Workloads with Aible

For customers running GenAI workloads, Aible’s serverless solutions lower costs, embed intelligence and improve efficiency for RAG and fine-tuning on Intel Xeon processors.

News

  • June 26, 2024

What’s New: Intel and Aible, provider of an end-to-end serverless generative AI (GenAI) and augmented analytics enterprise solution, now offer solutions that let shared customers run advanced GenAI and retrieval-augmented generation (RAG) use cases on multiple generations of Intel® Xeon® CPUs. The collaboration, which includes engineering optimizations and a benchmarking program, enhances Aible’s ability to deliver GenAI results at low cost for enterprise customers and helps developers embed AI intelligence into applications. Together, the companies offer scalable, efficient AI solutions that draw on high-performing hardware to help customers solve business challenges with AI.

“Customers are looking for efficient, enterprise-grade solutions to harness the power of AI. Our collaboration with Aible shows how we’re closely working with the industry to deliver innovation in AI and lowering the barrier to entry for many customers to run the latest GenAI workloads using Intel Xeon processors.”

–Mishali Naik, Intel senior principal engineer, Data Center and AI Group

About Xeon’s GenAI Performance: Aible’s solutions demonstrate how CPUs can significantly enhance performance across a range of the latest AI workloads, from running language models to RAG. Optimized for Intel processors, Aible’s technology utilizes an efficient serverless end-to-end approach for AI, consuming resources only when there are active user requests. For example, the vector database activates for just a few seconds to retrieve information relevant to a user query, and the language model similarly powers up briefly to process and respond to the request. This on-demand operation helps reduce the total cost of ownership (TCO).
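
To make this on-demand pattern concrete, here is a minimal sketch of a serverless RAG handler, assuming the stack named in the configuration details below (llama.cpp via the llama-cpp-python bindings, ChromaDB, and the all-MiniLM-L6-v2 embedding model). The handler signature, the MODEL_PATH environment variable, the /mnt/chroma store path, and the "docs" collection name are hypothetical illustrations, not Aible’s implementation:

```python
import os

# Module-level caches: in a serverless runtime, globals survive across warm
# invocations, so each heavyweight model is loaded at most once per container.
_embedder = None
_llm = None

def _get_embedder():
    """Load the sentence-embedding model on first use only."""
    global _embedder
    if _embedder is None:
        from sentence_transformers import SentenceTransformer
        _embedder = SentenceTransformer("all-MiniLM-L6-v2")  # model named in the config details
    return _embedder

def _get_llm():
    """Load the GGUF language model on first use only (CPU inference)."""
    global _llm
    if _llm is None:
        from llama_cpp import Llama
        _llm = Llama(model_path=os.environ["MODEL_PATH"],  # e.g., a Mistral-7B-OpenOrca GGUF file
                     n_threads=os.cpu_count())
    return _llm

def handler(event, context):
    """Hypothetical Lambda-style entry point: compute is consumed only while
    a request is in flight."""
    import chromadb
    query = event["query"]
    # Vector database is touched for only a few seconds per request.
    client = chromadb.PersistentClient(path="/mnt/chroma")  # assumed store location
    collection = client.get_collection("docs")              # assumed collection name
    q_emb = _get_embedder().encode([query]).tolist()
    hits = collection.query(query_embeddings=q_emb, n_results=3)
    context_text = "\n".join(hits["documents"][0])
    # The language model likewise runs only for the duration of the answer.
    out = _get_llm()(f"Context:\n{context_text}\n\nQuestion: {query}\nAnswer:",
                     max_tokens=256)
    return {"answer": out["choices"][0]["text"]}
```

Because the module-level caches survive warm invocations, a burst of requests pays the model-load cost once, while an idle deployment consumes no compute at all.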

While RAG is often implemented using GPUs (graphics processing units) and accelerators to leverage their parallel processing capabilities, Aible’s serverless technique, combined with Intel® Xeon® Scalable processors, allows RAG use cases to be powered entirely by CPUs. The performance data shows that multiple generations of Intel Xeon processors can run RAG workloads efficiently.

Results may vary. Configuration details below.
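
As a rough illustration of what a pure-CPU setup involves, the sketch below loads a quantized GGUF model with llama-cpp-python and keeps every layer on the CPU; the file name, context size, and thread heuristic are illustrative assumptions, not the benchmarked configuration:

```python
import multiprocessing

from llama_cpp import Llama

# Hypothetical CPU-only setup. n_gpu_layers=0 keeps every transformer layer
# on the CPU; n_threads is matched to physical cores, since hyper-threads
# rarely help the compute-bound matrix kernels that dominate inference.
physical_cores = multiprocessing.cpu_count() // 2  # assumes 2-way Hyper-Threading

llm = Llama(
    model_path="mistral-7b-openorca.Q4_K_M.gguf",  # hypothetical quantized GGUF file
    n_gpu_layers=0,        # force pure-CPU inference
    n_threads=physical_cores,
    n_ctx=4096,            # context window; size to the RAG prompt length
)

print(llm("Summarize retrieval-augmented generation in one sentence.",
          max_tokens=64)["choices"][0]["text"])
```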

Why It Matters: Aible enables customers to lower the operational costs of GenAI projects by exclusively utilizing CPUs in serverless form, sharing the same underlying compute resources more securely across multiple customers. By analogy, it is like paying for electricity as it is used rather than renting an electricity generator outright. Moreover, as demand for generative AI grows, the need to optimize both performance and energy consumption becomes more crucial. Aible's CPU-based services offer customers a cost-effective and energy-efficient solution.
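
A back-of-the-envelope sketch shows why pay-per-use pricing can dominate; every rate below is a hypothetical placeholder, not a figure from Aible or Intel:

```python
# Hypothetical monthly cost: always-on dedicated server vs. pay-per-use serverless.
DEDICATED_PER_HOUR = 4.00         # assumed hourly rate for a dedicated GPU server
SERVERLESS_PER_CPU_SECOND = 1e-4  # assumed per-second rate for serverless CPU time
SECONDS_PER_REQUEST = 8           # assumed active compute per RAG request
REQUESTS_PER_MONTH = 50_000

dedicated = DEDICATED_PER_HOUR * 24 * 30  # billed around the clock, used or not
serverless = SERVERLESS_PER_CPU_SECOND * SECONDS_PER_REQUEST * REQUESTS_PER_MONTH

print(f"Dedicated:  ${dedicated:,.2f}/month")   # $2,880.00
print(f"Serverless: ${serverless:,.2f}/month")  # $40.00
```

Under these made-up rates the serverless path is roughly 70x cheaper; the real ratio depends entirely on request volume and provider pricing.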

How Aible Solutions Help Customers Lower Costs: According to Aible’s benchmark analysis, customers can realize up to a 55x cost saving when running RAG models on its CPU-based serverless solutions.1 This cost reduction reflects the effectiveness of Aible's CPU-exclusive approach, which sidesteps the need for more expensive GPU-based infrastructure, whether shared services or dedicated servers.

How Intel Collaborates with Aible: Intel – including Intel Labs – has worked with Aible to optimize AI workloads on Xeon processors. Notably, optimizing Aible’s code for AVX-512 yielded significant performance gains and improved throughput on Xeon processors, highlighting the impact of strategic software optimizations on overall efficiency.
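
For readers who want to confirm that a machine exposes AVX-512 before enabling such code paths, a minimal Linux-only probe of the kernel’s CPU flags might look like this:

```python
# Check /proc/cpuinfo for AVX-512 support (Linux only).
def has_avx512() -> bool:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                # avx512f is the foundation subset; related flags such as
                # avx512vnni or avx512bw indicate further instruction groups.
                return "avx512f" in flags
    return False

print("AVX-512 available:", has_avx512())
```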

The combination of RAG models with Intel Xeon processors, facilitated by platforms like Aible, can enable applications such as:


  • Natural language processing (NLP)
  • Recommendation systems
  • Decision support systems
  • Content generation


Intel’s collaboration with Aible began with the launch of 4th Gen Xeon processors. The two companies have since optimized AI workloads, code and libraries for Xeon processors to increase performance for Aible’s product offerings.

What’s Next: Intel and Aible will demonstrate their solutions at the Amazon Web Services Summit in Washington, D.C., on June 26 and 27. Aible’s solutions run on AWS Lambda and are available in the AWS Marketplace.

More Context: Read the full report (Aible.com) | 30 Days to AI Value: Development Best Practices from Intel and Aible (Intel.com) | Impact from AI in 30 Days (Aible Case Study) | Intel AI Analytics Toolkit

The Small Print:

1 Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.

Configuration details:

1-node, 2x Intel(R) Xeon(R) Platinum 8280L CPU @ 2.70GHz, 28 cores, HT On, Turbo On, NUMA 2, Integrated Accelerators Available [used]: DLB 0 [0], DSA 0 [0], IAA 0 [0], QAT 0 [0], Total Memory 384GB (12x32GB DDR4 2933 MT/s [2934 MT/s]), BIOS SE5C620.86B.02.01.0017.110620230543, microcode 0x5003604, 2x Ethernet Connection X722 for 10GBASE-T, 1x 894.3G INTEL SSDSC2KB96, 1x 1.8T INTEL SSDPE2KX020T8, 2x 3.7T INTEL SSDPE2KX040T8, Red Hat Enterprise Linux 8.9 (Ootpa), 4.18.0-513.18.1.el8_9.x86_64, WORKLOAD=Aible End-to-end RAG-LLM, Model=Mistral-7B-OpenOrca-GGUF, all-MiniLM-L6-v2, gcc 12.2.0, IntelLLVM 2024.0.2, llama.cpp, ChromaDB, Langchain, oneAPI base container 2024.0.1-devel-ubuntu22.04. Tested by Intel on 03/07/24.

1-node, 2x Intel(R) Xeon(R) Platinum 8462Y+, 32 cores, HT On, Turbo On, NUMA 2, Integrated Accelerators Available [used]: DLB 2 [0], DSA 2 [0], IAA 2 [0], QAT 2 [0], Total Memory 512GB (16x32GB DDR5 4800 MT/s [4800 MT/s]), BIOS 05.12.00, microcode 0x2b0004d0, 2x BCM57416 NetXtreme-E Dual-Media 10G RDMA Ethernet Controller, 2x Ethernet Controller E810-C for QSFP, 2x 3.5T SAMSUNG MZQL23T8HCLS-00B7C, 1x 1.8T SAMSUNG MZ1L21T9HCLS-00A07, Red Hat Enterprise Linux 8.9 (Ootpa), 4.18.0-513.18.1.el8_9.x86_64, WORKLOAD=Aible End-to-end RAG-LLM, Model=Mistral-7B-OpenOrca-GGUF, all-MiniLM-L6-v2, gcc 12.2.0, IntelLLVM 2024.0.2, llama.cpp, ChromaDB, Langchain, oneAPI base container 2024.0.1-devel-ubuntu22.04. Tested by Intel on 03/07/24.

1-node, 2x INTEL(R) XEON(R) PLATINUM 8562Y+, 32 cores, HT On, Turbo On, NUMA 2, Integrated Accelerators Available [used]: DLB 2 [0], DSA 2 [0], IAA 2 [0], QAT 2 [0], Total Memory 512GB (16x32GB DDR5 5600 MT/s [5600 MT/s]), BIOS 3B05.TEL4P1, microcode 0x21000161, 2x Ethernet Controller X710 for 10GBASE-T, 2x Ethernet Controller E810-C for QSFP, 1x 894.3G INTEL SSDSC2KG96, 1x 3.5T SAMSUNG MZQL23T8HCLS-00A07, 3x 3.5T SAMSUNG MZQL23T8HCLS-00B7C, Red Hat Enterprise Linux 8.9 (Ootpa), 4.18.0-513.18.1.el8_9.x86_64, WORKLOAD=Aible End-to-end RAG-LLM, Model=Mistral-7B-OpenOrca-GGUF, all-MiniLM-L6-v2, gcc 12.2.0, IntelLLVM 2024.0.2, llama.cpp, ChromaDB, Langchain, oneAPI base container 2024.0.1-devel-ubuntu22.04. Tested by Intel on 03/07/24.