Intel® oneAPI DPC++/C++ Compiler Developer Guide and Reference

ID 767253
Date 3/22/2024
Public


fprofile-ml-use

Enables the use of a pre-trained machine learning model to predict the branch execution probabilities that drive profile-guided optimizations.

Syntax

Linux:

-fprofile-ml-use

Windows:

/fprofile-ml-use

Arguments

None

Default

OFF

The compiler uses its default static heuristics for profile-guided optimizations.

Description

This option enables the use of a pre-trained machine learning model to predict the branch execution probabilities that drive profile-guided optimizations.

It replaces the compiler's default static heuristics and serves as a single-pass approximation of the performance gains otherwise obtained from true two-pass profile-guided optimization using instrumentation or sampling.

NOTE:

This option only applies to host compilation. When offloading is enabled, it does not impact device-specific compilation.

IDE Equivalent

Visual Studio: DPC++ > Optimization > Use Pre-trained Machine Learning Model for Profile Guided Optimizations

C/C++ > Optimization [Intel C++] > Use Pre-trained Machine Learning Model for Profile Guided Optimizations

Eclipse: Intel® oneAPI DPC++ Compiler > Optimization > Use Pre-trained Machine Learning Model for Profile Guided Optimizations (-fprofile-ml-use)

Intel C++ Compiler > Optimization > Use Pre-trained Machine Learning Model for Profile Guided Optimizations

Alternate Options

None

Examples

The following shows examples of using this option:

Linux

icx  -c -fprofile-ml-use t.c
icpx -c -fprofile-ml-use t.cpp

Windows

icx  /c /fprofile-ml-use t.c
icpx /c /fprofile-ml-use t.cpp