Developer Reference for Intel® oneAPI Math Kernel Library for C

LAPACK Least Squares and Eigenvalue Problem Routines

This section includes descriptions of LAPACK computational routines and driver routines for solving linear least squares problems, eigenvalue and singular value problems, and for performing a number of related computational tasks. For a full reference on LAPACK routines and related information see [LUG].

Least Squares Problems. A typical least squares problem is as follows: given a matrix A and a vector b, find the vector x that minimizes the sum of squares Σᵢ((Ax)ᵢ - bᵢ)² or, equivalently, find the vector x that minimizes the 2-norm ||Ax - b||₂.

In the most usual case, A is an m-by-n matrix with m ≥ n and rank(A) = n. This problem is also referred to as finding the least squares solution to an overdetermined system of linear equations (here we have more equations than unknowns). To solve this problem, you can use the QR factorization of the matrix A (see QR Factorization).

If m < n and rank(A) = m, there are infinitely many solutions x that exactly satisfy Ax = b and thus minimize the norm ||Ax - b||₂. In this case it is often useful to find the unique solution that minimizes ||x||₂. This problem is referred to as finding the minimum-norm solution to an underdetermined system of linear equations (here we have more unknowns than equations). To solve this problem, you can use the LQ factorization of the matrix A (see LQ Factorization).
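For illustration only (the matrix sizes and data below are assumptions, not an example from this manual), the full-rank underdetermined case can also be handled by the driver ?gels, which uses an LQ factorization internally when m < n and returns the minimum-norm solution. The sketch uses the double-precision C interface LAPACKE_dgels with row-major storage:

#include <stdio.h>
#include "mkl.h"   /* LAPACKE prototypes shipped with oneMKL */

int main(void) {
    /* Hypothetical underdetermined system: 2 equations, 3 unknowns, rank(A) = 2. */
    MKL_INT m = 2, n = 3, nrhs = 1;
    double a[2 * 3] = { 1.0, 1.0, 1.0,
                        1.0, 2.0, 4.0 };   /* row-major A */
    /* b must have max(m, n) = 3 rows; the first m rows hold the right-hand side. */
    double b[3] = { 6.0, 14.0, 0.0 };

    /* For m < n and full rank, ?gels returns the minimum-norm solution x
       in the first n rows of b. */
    MKL_INT info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N', m, n, nrhs, a, n, b, nrhs);
    if (info != 0) {
        printf("dgels failed, info = %lld\n", (long long)info);
        return 1;
    }
    printf("minimum-norm solution: (%f, %f, %f)\n", b[0], b[1], b[2]);
    return 0;
}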

In the general case you may have a rank-deficient least squares problem, with rank(A) < min(m, n): find the minimum-norm least squares solution that minimizes both ||x||₂ and ||Ax - b||₂. In this case (or when the rank of A is in doubt) you can use the QR factorization with pivoting or the singular value decomposition (see Singular Value Decomposition).
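A minimal sketch of the SVD-based approach, assuming double precision, row-major storage, and arbitrary sample data: the driver ?gelsd returns the minimum-norm least squares solution together with the effective rank of A.

#include <stdio.h>
#include "mkl.h"

int main(void) {
    /* Hypothetical 4-by-3 system; values are arbitrary. */
    MKL_INT m = 4, n = 3, nrhs = 1;
    double a[4 * 3] = { 1.0, 2.0,  3.0,
                        4.0, 5.0,  6.0,
                        7.0, 8.0, 10.0,
                        1.0, 1.0,  1.0 };          /* row-major A */
    double b[4]     = { 6.0, 15.0, 25.0, 3.0 };    /* right-hand side */
    double s[3];                                   /* singular values of A */
    MKL_INT rank;

    /* ?gelsd computes the minimum-norm least squares solution using the SVD;
       rcond = -1.0 lets machine precision decide the effective rank. */
    MKL_INT info = LAPACKE_dgelsd(LAPACK_ROW_MAJOR, m, n, nrhs,
                                  a, n, b, nrhs, s, -1.0, &rank);
    if (info != 0) {
        printf("dgelsd failed, info = %lld\n", (long long)info);
        return 1;
    }
    printf("effective rank = %lld, x = (%f, %f, %f)\n",
           (long long)rank, b[0], b[1], b[2]);
    return 0;
}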

Eigenvalue Problems. The eigenvalue problems (from German eigen "own") are stated as follows: given a matrix A, find the eigenvalues λ and the corresponding eigenvectors z that satisfy the equation

Az = λz (right eigenvectors z)

or the equation

zᴴA = λzᴴ (left eigenvectors z).

If A is a real symmetric or complex Hermitian matrix, the above two equations are equivalent, and the problem is called a symmetric eigenvalue problem. Routines for solving this type of problem are described in the topic Symmetric Eigenvalue Problems.
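As a minimal sketch, a standard symmetric eigenvalue problem can be solved with the driver ?syev through the C LAPACKE interface; the 3-by-3 matrix, double precision, and row-major layout below are illustrative assumptions.

#include <stdio.h>
#include "mkl.h"

int main(void) {
    /* Hypothetical 3-by-3 real symmetric matrix; values are arbitrary. */
    MKL_INT n = 3;
    double a[3 * 3] = { 4.0, 1.0, 2.0,
                        1.0, 5.0, 3.0,
                        2.0, 3.0, 6.0 };  /* row-major; 'U' means the upper triangle is used */
    double w[3];                          /* eigenvalues, returned in ascending order */

    /* 'V' requests eigenvectors: on exit the columns of a hold the
       orthonormal eigenvectors corresponding to w. */
    MKL_INT info = LAPACKE_dsyev(LAPACK_ROW_MAJOR, 'V', 'U', n, a, n, w);
    if (info != 0) {
        printf("dsyev failed, info = %lld\n", (long long)info);
        return 1;
    }
    printf("eigenvalues: %f %f %f\n", w[0], w[1], w[2]);
    return 0;
}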

Routines for solving eigenvalue problems with nonsymmetric or non-Hermitian matrices are described in the topic Nonsymmetric Eigenvalue Problems.
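Similarly, a hedged sketch for the nonsymmetric case: the driver ?geev returns the (possibly complex) eigenvalues as (wr, wi) pairs and, on request, the right eigenvectors. The matrix below is an arbitrary assumption.

#include <stdio.h>
#include "mkl.h"

int main(void) {
    /* Hypothetical 3-by-3 general (nonsymmetric) matrix; values are arbitrary. */
    MKL_INT n = 3;
    double a[3 * 3] = { 0.0,  1.0, 0.0,
                       -1.0,  0.0, 0.0,
                        0.0,  0.0, 2.0 };  /* row-major */
    double wr[3], wi[3];                   /* real and imaginary parts of eigenvalues */
    double vr[3 * 3];                      /* right eigenvectors */
    double vl[3 * 3];                      /* not referenced because jobvl = 'N' */

    /* 'N': skip left eigenvectors; 'V': compute right eigenvectors into vr. */
    MKL_INT info = LAPACKE_dgeev(LAPACK_ROW_MAJOR, 'N', 'V', n, a, n,
                                 wr, wi, vl, n, vr, n);
    if (info != 0) {
        printf("dgeev failed, info = %lld\n", (long long)info);
        return 1;
    }
    for (MKL_INT i = 0; i < n; ++i)
        printf("lambda_%lld = %f %+fi\n", (long long)i, wr[i], wi[i]);
    return 0;
}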

The library also includes routines that handle generalized symmetric-definite eigenvalue problems: find the eigenvalues λ and the corresponding eigenvectors z that satisfy one of the following equations:

Az = λBz, ABz = λz, or BAz = λz,

where A is symmetric or Hermitian, and B is symmetric positive-definite or Hermitian positive-definite. Routines for reducing these problems to standard symmetric eigenvalue problems are described in the topic Generalized Symmetric-Definite Eigenvalue Problems.
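A minimal sketch, assuming double precision and arbitrary sample matrices, of the first form Az = λBz solved with the driver ?sygv (itype = 1):

#include <stdio.h>
#include "mkl.h"

int main(void) {
    /* Hypothetical 2-by-2 pencil: A symmetric, B symmetric positive-definite. */
    MKL_INT n = 2;
    double a[2 * 2] = { 2.0, 1.0,
                        1.0, 3.0 };   /* row-major A */
    double b[2 * 2] = { 4.0, 1.0,
                        1.0, 2.0 };   /* row-major B (positive-definite) */
    double w[2];                      /* eigenvalues, returned in ascending order */

    /* itype = 1 selects A*z = lambda*B*z; 'V' returns the eigenvectors in a,
       and b is overwritten by its Cholesky factor. */
    MKL_INT info = LAPACKE_dsygv(LAPACK_ROW_MAJOR, 1, 'V', 'U', n, a, n, b, n, w);
    if (info != 0) {
        printf("dsygv failed, info = %lld\n", (long long)info);
        return 1;
    }
    printf("eigenvalues: %f %f\n", w[0], w[1]);
    return 0;
}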

To solve a particular problem, you usually call several computational routines. Sometimes you need to combine the routines of this chapter with other LAPACK routines described in "LAPACK Routines: Linear Equations" as well as with BLAS routines described in "BLAS and Sparse BLAS Routines".

For example, to solve a set of least squares problems minimizing ||Ax - b||₂ for all columns b of a given matrix B (where A and B are real matrices), you can call ?geqrf to form the factorization A = QR, then call ?ormqr to compute C = QᵀB, and finally call the BLAS routine ?trsm to solve for X the system of equations RX = C.
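Sketched in C with the LAPACKE and CBLAS interfaces, that sequence might look as follows; the matrix sizes, sample data, and row-major layout are assumptions made for illustration.

#include <stdio.h>
#include "mkl.h"

int main(void) {
    /* Hypothetical sizes: A is m-by-n with m >= n, B holds nrhs right-hand sides. */
    MKL_INT m = 4, n = 3, nrhs = 2;
    double a[4 * 3] = { 1.0, 2.0,  3.0,
                        4.0, 5.0,  6.0,
                        7.0, 8.0, 10.0,
                        1.0, 1.0,  1.0 };   /* row-major A */
    double b[4 * 2] = { 6.0, 1.0,
                       15.0, 2.0,
                       25.0, 3.0,
                        3.0, 4.0 };         /* row-major B */
    double tau[3];

    /* Step 1: A = Q*R (Q stored in compact form in a and tau). */
    MKL_INT info = LAPACKE_dgeqrf(LAPACK_ROW_MAJOR, m, n, a, n, tau);
    if (info != 0) return 1;

    /* Step 2: overwrite B with C = Q^T * B. */
    info = LAPACKE_dormqr(LAPACK_ROW_MAJOR, 'L', 'T', m, nrhs, n, a, n, tau, b, nrhs);
    if (info != 0) return 1;

    /* Step 3: solve R*X = C; R is the upper triangle of the factored a,
       and only the first n rows of b (i.e. C) are referenced. */
    cblas_dtrsm(CblasRowMajor, CblasLeft, CblasUpper, CblasNoTrans, CblasNonUnit,
                n, nrhs, 1.0, a, n, b, nrhs);

    printf("first solution column: (%f, %f, %f)\n",
           b[0 * nrhs + 0], b[1 * nrhs + 0], b[2 * nrhs + 0]);
    return 0;
}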

Another way is to call an appropriate driver routine that performs several tasks in one call. For example, to solve the least squares problem, you can use the driver routine ?gels.
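For comparison with the three-call sequence above, here is a sketch of the same (assumed) data solved by a single driver call; ?gels requires A to have full rank.

#include <stdio.h>
#include "mkl.h"

int main(void) {
    /* Same hypothetical overdetermined problem, solved with one driver call. */
    MKL_INT m = 4, n = 3, nrhs = 2;
    double a[4 * 3] = { 1.0, 2.0,  3.0,
                        4.0, 5.0,  6.0,
                        7.0, 8.0, 10.0,
                        1.0, 1.0,  1.0 };   /* row-major A */
    double b[4 * 2] = { 6.0, 1.0,
                       15.0, 2.0,
                       25.0, 3.0,
                        3.0, 4.0 };         /* row-major B */

    /* 'N' means minimize ||A*x - b||_2 with A not transposed; the driver
       performs the QR factorization and the triangular solve itself.
       On exit the first n rows of b contain the solution matrix X. */
    MKL_INT info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N', m, n, nrhs, a, n, b, nrhs);
    if (info != 0) {
        printf("dgels failed, info = %lld\n", (long long)info);
        return 1;
    }
    printf("first solution column: (%f, %f, %f)\n",
           b[0 * nrhs + 0], b[1 * nrhs + 0], b[2 * nrhs + 0]);
    return 0;
}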