Visible to Intel only — GUID: GUID-9D87D4DD-B83C-4619-A3A5-34887900CB13
Logistic Regression
This chapter describes the Logistic Regression algorithm implemented in oneDAL.
The Logistic Regression algorithm solves the classification problem: it predicts class labels and the probabilities that objects belong to each class.
Operation | Computational methods | Programming Interface
--------- | --------------------- | ----------------------
Training | dense_batch | Refer to API Reference: Logistic Regression
Inference | dense_batch | Refer to API Reference: Logistic Regression
Mathematical Formulation
Training
Given $n$ feature vectors $X = \{x_1, \ldots, x_n\}$ of size $p$ and $n$ responses $y_i \in \{0, 1\}$, the problem is to fit the model weights $w = \{w_0, \ldots, w_p\}$ to minimize the Logistic Loss

$$L(X, w, y) = \sum_{i=1}^{n} \left[ -y_i \log(p_i) - (1 - y_i) \log(1 - p_i) \right],$$

where $p_i = \sigma\!\left(w_0 + \sum_{j=1}^{p} w_j x_{ij}\right)$ are the predicted probabilities and $\sigma(z) = \frac{1}{1 + e^{-z}}$ is the sigmoid function. Note that the probabilities are clamped to the interval $[\varepsilon, 1 - \varepsilon]$ to avoid numerical problems when computing the log function ($\varepsilon = 10^{-7}$ if the float type is used and $\varepsilon = 10^{-15}$ otherwise).
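The loss above can be sketched in plain standard C++ as follows. This is a standalone illustration of the mathematics, not oneDAL's implementation; the function names and the default clamping constant `eps` are illustrative choices.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Sigmoid function: turns the linear predictor into a probability.
double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

// Logistic loss L(X, w, y) = sum_i [-y_i*log(p_i) - (1-y_i)*log(1-p_i)],
// with each probability clamped to [eps, 1-eps] to keep log() finite.
double logistic_loss(const std::vector<std::vector<double>>& X,
                     const std::vector<double>& w,  // w[0] is the bias term
                     const std::vector<int>& y,
                     double eps = 1e-15) {
    double loss = 0.0;
    for (std::size_t i = 0; i < X.size(); ++i) {
        double z = w[0];
        for (std::size_t j = 0; j < X[i].size(); ++j)
            z += w[j + 1] * X[i][j];
        double p = sigmoid(z);
        p = std::max(eps, std::min(1.0 - eps, p));  // clamp to [eps, 1-eps]
        loss += -y[i] * std::log(p) - (1 - y[i]) * std::log(1.0 - p);
    }
    return loss;
}
```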
Training Method: dense_batch
Since the Logistic Loss is a convex function, one of the iterative solvers designed for convex problems can be used for minimization. During training, the data is divided into batches, and the gradients computed on each batch are summed up.
Refer to Mathematical formulation: Newton-CG.
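The batch-wise gradient accumulation described above can be sketched in plain C++. This is an illustration of the idea, not the oneDAL solver itself; `batched_gradient` and `batch_size` are illustrative names, and the gradient formula follows from differentiating the logistic loss: the contribution of object $i$ is $(p_i - y_i) x_{ij}$.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Full-data gradient of the logistic loss assembled from per-batch
// partial gradients: grad_j = sum_i (p_i - y_i) * x_ij, with an
// implicit x_i0 = 1 for the bias term w[0].
std::vector<double> batched_gradient(const std::vector<std::vector<double>>& X,
                                     const std::vector<int>& y,
                                     const std::vector<double>& w,
                                     std::size_t batch_size) {
    std::vector<double> grad(w.size(), 0.0);
    for (std::size_t start = 0; start < X.size(); start += batch_size) {
        std::size_t end = std::min(start + batch_size, X.size());
        // Partial gradient over this batch is summed into the total.
        for (std::size_t i = start; i < end; ++i) {
            double z = w[0];
            for (std::size_t j = 0; j < X[i].size(); ++j)
                z += w[j + 1] * X[i][j];
            double r = 1.0 / (1.0 + std::exp(-z)) - y[i];  // p_i - y_i
            grad[0] += r;
            for (std::size_t j = 0; j < X[i].size(); ++j)
                grad[j + 1] += r * X[i][j];
        }
    }
    return grad;
}
```

Because the batch gradients are simply summed, the result is independent of the batch size, which is what makes the division into batches a valid implementation strategy.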
Inference
Given $r$ feature vectors of size $p$, the problem is to calculate the probabilities that the objects associated with these feature vectors belong to each class and to determine the most probable class label for each object.
The probabilities are calculated as $p_i = \sigma\!\left(w_0 + \sum_{j=1}^{p} w_j x_{ij}\right)$, where $\sigma$ is the sigmoid function. If the probability is greater than 0.5, the class label is set to 1; otherwise, it is set to 0.
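A minimal sketch of this prediction rule in plain C++ (illustrative only, not the oneDAL inference kernel; the `predict` name is an assumption):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Predict the class label for one feature vector x: compute
// p = sigmoid(w0 + sum_j w_j * x_j) and threshold at 0.5.
int predict(const std::vector<double>& x,
            const std::vector<double>& w) {  // w[0] is the bias term
    double z = w[0];
    for (std::size_t j = 0; j < x.size(); ++j)
        z += w[j + 1] * x[j];
    double p = 1.0 / (1.0 + std::exp(-z));
    return p > 0.5 ? 1 : 0;
}
```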
Programming Interface
Refer to API Reference: Logistic Regression.
Examples: Logistic Regression
oneAPI DPC++
- Batch Processing: