Gradient Boosting Optimizations from Intel
Fast Turnaround for Machine Learning Training and Inference
Speed Up Gradient Boosting Algorithms on Intel® Hardware
Gradient boosting is a machine learning ensemble technique that combines many weak learners into a single, robust prediction model.
XGBoost is a popular open source library for gradient boosting. Intel contributes software optimizations to XGBoost so you can maximize performance on Intel® hardware without any code changes.
Because machine learning inference often requires an extremely fast response, Intel developed a fast tree-inference capability in the daal4py library. With a few lines of code, you can:
- Convert your XGBoost, LightGBM, and CatBoost* gradient boosting models to daal4py.
- Speed up gradient boosting inference without sacrificing accuracy.
XGBoost optimizations and fast tree inference are part of the end-to-end suite of Intel® AI and machine learning development tools and resources.
Download the AI Tools
XGBoost optimizations from Intel and daal4py are available in the AI Tools Selector, which provides accelerated machine learning and data analytics pipelines with optimized deep learning frameworks and high-performing Python* libraries.
Features
XGBoost Machine Learning Library
- Implement machine learning tasks such as classification, regression, and ranking using gradient boosting.
- Perform parallel tree boosting to solve a wide variety of machine learning problems efficiently and accurately.
- Run single-node or distributed training; a minimal single-node sketch follows this list.
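For example, here is a minimal single-node training sketch. It is illustrative only: the synthetic dataset, the parameter values, and the scikit-learn dependency are assumptions made for the example, not requirements of the library.

import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary classification data; substitute your own dataset.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# tree_method="hist" selects the histogram-based tree builder.
model = xgb.XGBClassifier(tree_method="hist", n_estimators=100)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))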
Intel® Optimizations
- Speed up XGBoost histogram tree-building with automatic memory prefetching.
- Parallelize the XGBoost split function by automatically partitioning observations to multiple processing threads.
- Reduce memory consumption when building histograms; see the sketch after this list.
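These optimizations apply automatically when the histogram tree method is used; no code changes are needed. As a sketch, the standard XGBoost parameters involved look like this (the data, thread count, and other values are illustrative assumptions):

import numpy as np
import xgboost as xgb

# Synthetic data; substitute your own training set.
rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 20)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(np.int32)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "tree_method": "hist",    # histogram tree building, the optimized code path
    "nthread": 8,             # threads available for parallel split evaluation
    "objective": "binary:logistic",
}
booster = xgb.train(params, dtrain, num_boost_round=100)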
Fast Tree Inference with daal4py
- Further accelerate XGBoost, LightGBM, and CatBoost model inference with daal4py, which uses the latest Intel® oneAPI Data Analytics Library (oneDAL) optimizations that are not yet ported to XGBoost.
- Reduce inference memory consumption and use L1 and L2 caches more efficiently.
- Get started with a couple of lines of code:
import daal4py as d4p
# xgb_model is a trained XGBoost model; test_data holds the features to score.
d4p_model = d4p.mb.convert_model(xgb_model)    # one-time conversion to daal4py
d4p_prediction = d4p_model.predict(test_data)  # accelerated inference on the CPU
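For context, a complete end-to-end sketch follows: train with XGBoost, convert the model, and run accelerated inference. The synthetic data and parameter choices are illustrative assumptions.

import daal4py as d4p
import numpy as np
import xgboost as xgb

# Synthetic binary classification data; substitute your own.
rng = np.random.default_rng(0)
X = rng.standard_normal((50_000, 30)).astype(np.float32)
y = (X[:, :3].sum(axis=1) > 0).astype(np.int32)
dtrain = xgb.DMatrix(X[:40_000], label=y[:40_000])
test_data = X[40_000:]

params = {"tree_method": "hist", "objective": "binary:logistic"}
xgb_model = xgb.train(params, dtrain, num_boost_round=100)

# Convert once, then reuse the daal4py model for repeated fast inference.
d4p_model = d4p.mb.convert_model(xgb_model)
d4p_prediction = d4p_model.predict(test_data)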
Benchmarks
Documentation & Code Samples
Demos
Faster XGBoost, LightGBM, and CatBoost* Inference on the CPU
Apply fast tree inference to accelerate predictions for popular gradient boosting frameworks by up to 40x.
Deploy Cloud-Native AI Workloads on AWS*
Use Kubernetes* to deploy and operationalize AI workloads on Amazon Web Services (AWS)* clusters, illustrated with containerized training and inference of an XGBoost classifier for a loan default risk model.
Optimize Utility Maintenance Prediction for Better Service
Using the Predictive Asset Maintenance Reference Kit as an example, learn how to optimize the training cycles, prediction throughput, and accuracy of your machine learning workflow.
Python* Data Science at Scale
XGBoost optimizations for Intel® architecture are part of an accelerated end-to-end machine learning pipeline, demonstrated using the New York City taxi dataset.
Enhanced Fraud Detection Using Graph Neural Networks and XGBoost
This demonstration of the Fraud Detection reference kit shows how to use graph neural networks to generate more expressive features for downstream fraud classification with XGBoost.
Accelerate XGBoost Gradient-Boosting Training and Inference
Learn how XGBoost optimizations for Intel architecture and the AI Tools help accelerate complex gradient boosting with large datasets.
Specifications
Processors:
- All CPUs with x86 architecture
- All integrated and discrete GPUs from Intel
Operating systems:
- Linux*
- Windows*
Language:
- Python*
Get Help
Your success is our success. Access these support resources when you need assistance.
Stay Up to Date on AI Workload Optimizations
Sign up to receive hand-curated technical articles, tutorials, developer tools, training opportunities, and more to help you accelerate and optimize your end-to-end AI and data science workflows. Subscribe today; you can unsubscribe at any time.