Sparse BLAS Functionality
In the following tables, the Functionality column uses the abbreviations sm = sparse matrix, dm = dense matrix, sv = sparse vector, dv = dense vector, and sc = scalar.
In the Operations column, x and y denote dense vectors; w and v, sparse vectors; X and Y, dense matrices; A, B, and C, sparse matrices; and alpha, beta, and d, scalars.
Level 1
| Functionality | Operations | CPU | Intel GPU |
|---|---|---|---|
| Sparse Vector - Dense Vector addition (AXPY) | y <- alpha * w + y | No | No |
| Sparse Vector - Sparse Vector Dot product (SPDOT) (sv.sv -> sc) | d <- dot(w,v) | N/A | N/A |
| | dot(w,v) = sum(w_i * v_i) | No | No |
| | dot(w,v) = sum(conj(w_i) * v_i) | No | No |
| Sparse Vector - Dense Vector Dot product (SPDOT) (sv.dv -> sc) | d <- dot(w,x) | N/A | N/A |
| | dot(w,x) = sum(w_i * x_i) | No | No |
| | dot(w,x) = sum(conj(w_i) * x_i) | No | No |
| Dense Vector - Sparse Vector Conversion (sv <-> dv) | — | N/A | N/A |
| | x = scatter(w) | No | No |
| | w = gather(x, windx) | No | No |
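None of the Level 1 operations are currently available on either device, but their semantics follow directly from the table. The sketch below is illustrative only; the SparseVector layout and the function names are assumptions made for this sketch, not a library interface.

```cpp
#include <cstddef>
#include <vector>

// Illustrative semantics only: a compressed sparse vector w stores its nonzero
// values in `val` and their positions in `ind`.
struct SparseVector {
    std::vector<std::size_t> ind;  // positions of stored entries
    std::vector<double> val;       // values of stored entries
};

// AXPY: y <- alpha * w + y (only stored entries of w contribute).
void sparse_axpy(double alpha, const SparseVector& w, std::vector<double>& y) {
    for (std::size_t k = 0; k < w.ind.size(); ++k)
        y[w.ind[k]] += alpha * w.val[k];
}

// SPDOT (sv.dv -> sc): d <- dot(w, x) = sum(w_i * x_i).
double sparse_dot(const SparseVector& w, const std::vector<double>& x) {
    double d = 0.0;
    for (std::size_t k = 0; k < w.ind.size(); ++k)
        d += w.val[k] * x[w.ind[k]];
    return d;
}

// Conversion: x = scatter(w) writes the stored entries into a dense vector.
void scatter(const SparseVector& w, std::vector<double>& x) {
    for (std::size_t k = 0; k < w.ind.size(); ++k)
        x[w.ind[k]] = w.val[k];
}

// Conversion: w = gather(x, windx) collects the entries of x listed in windx.
SparseVector gather(const std::vector<double>& x,
                    const std::vector<std::size_t>& windx) {
    SparseVector w;
    w.ind = windx;
    w.val.reserve(windx.size());
    for (std::size_t i : windx) w.val.push_back(x[i]);
    return w;
}
```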
Level 2
| Functionality | Operations | CPU | Intel GPU |
|---|---|---|---|
| General Matrix-Vector multiplication (GEMV) (sm*dv->dv) | y <- beta*y + alpha * op(A)*x | N/A | N/A |
| | op(A) = A | Yes | Yes |
| | op(A) = A^T | Yes | Yes |
| | op(A) = A^H | No | No |
| Symmetric Matrix-Vector multiplication (SYMV) (sm*dv->dv) | y <- beta*y + alpha * op(A)*x | N/A | N/A |
| | op(A) = A | Yes | Yes |
| | op(A) = A^T | Yes | Yes |
| | op(A) = A^H | No | No |
| Triangular Matrix-Vector multiplication (TRMV) (sm*dv->dv) | y <- beta*y + alpha * op(A)*x | N/A | N/A |
| | op(A) = A | Yes | No |
| | op(A) = A^T | Yes | No |
| | op(A) = A^H | No | No |
| General Matrix-Vector multiplication with dot product (GEMVDOT) (sm*dv -> dv, dv.dv -> sc) | y <- beta*y + alpha * op(A)*x, d = dot(x,y) | N/A | N/A |
| | op(A) = A | Yes | Yes |
| | op(A) = A^T | Yes | Yes |
| | op(A) = A^H | No | No |
| Triangular Solve (TRSV) (inv(sm)*dv -> dv) | solve for y, op(A)*y = alpha*x | N/A | N/A |
| | op(A) = A | Yes | Yes |
| | op(A) = A^T | Yes | Yes |
| | op(A) = A^H | No | No |
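For reference, the two supported Level 2 operations, GEMV and TRSV, have the following semantics on a zero-based CSR matrix. This is a plain C++ sketch of the math only; the array names and the lower-triangular assumption in the solve are choices made for this sketch, not the library API.

```cpp
#include <cstddef>
#include <vector>

// GEMV with op(A) = A: y <- beta*y + alpha*A*x on CSR arrays (row_ptr, col_ind, val).
void csr_gemv(std::size_t n,
              const std::vector<std::size_t>& row_ptr,
              const std::vector<std::size_t>& col_ind,
              const std::vector<double>& val,
              double alpha, const std::vector<double>& x,
              double beta, std::vector<double>& y) {
    for (std::size_t i = 0; i < n; ++i) {
        double acc = 0.0;
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            acc += val[k] * x[col_ind[k]];
        y[i] = beta * y[i] + alpha * acc;
    }
}

// TRSV with op(A) = A: solve A*y = alpha*x by forward substitution, assuming
// A is lower triangular with a nonzero diagonal stored in the CSR arrays.
void csr_lower_trsv(std::size_t n,
                    const std::vector<std::size_t>& row_ptr,
                    const std::vector<std::size_t>& col_ind,
                    const std::vector<double>& val,
                    double alpha, const std::vector<double>& x,
                    std::vector<double>& y) {
    for (std::size_t i = 0; i < n; ++i) {
        double rhs = alpha * x[i];
        double diag = 0.0;
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
            std::size_t j = col_ind[k];
            if (j < i)       rhs -= val[k] * y[j];  // strictly lower part
            else if (j == i) diag = val[k];         // diagonal entry
        }
        y[i] = rhs / diag;
    }
}
```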
Level 3
| Functionality | Operations | CPU | Intel GPU |
|---|---|---|---|
| General Sparse Matrix - Dense Matrix Multiplication (GEMM) (sm*dm->dm) | Y <- alpha*op(A)*op(X) + beta*Y | N/A | N/A |
| | op(A) = A, op(X) = X | Yes | Yes |
| | op(A) = A^T, op(X) = X | Yes | Yes |
| | op(A) = A^H, op(X) = X | Yes | Yes |
| | op(A) = A, op(X) = X^T | No | No |
| | op(A) = A^T, op(X) = X^T | No | No |
| | op(A) = A^H, op(X) = X^T | No | No |
| | op(A) = A, op(X) = X^H | No | No |
| | op(A) = A^T, op(X) = X^H | No | No |
| | op(A) = A^H, op(X) = X^H | No | No |
| General Dense Matrix - Sparse Matrix Multiplication (GEMM) (dm*sm->dm) | Y <- alpha*op(X)*op(A) + beta*Y | N/A | N/A |
| | op(X) = X, op(A) = A | No | No |
| | op(X) = X^T, op(A) = A | No | No |
| | op(X) = X^H, op(A) = A | No | No |
| | op(X) = X, op(A) = A^T | No | No |
| | op(X) = X^T, op(A) = A^T | No | No |
| | op(X) = X^H, op(A) = A^T | No | No |
| | op(X) = X, op(A) = A^H | No | No |
| | op(X) = X^T, op(A) = A^H | No | No |
| | op(X) = X^H, op(A) = A^H | No | No |
| General Sparse Matrix - Sparse Matrix Multiplication (GEMM) (sm*sm->sm) | C <- alpha*op(A)*op(B) + beta*C | N/A | N/A |
| | op(A) = A, op(B) = B | No | No |
| | op(A) = A^T, op(B) = B | No | No |
| | op(A) = A^H, op(B) = B | No | No |
| | op(A) = A, op(B) = B^T | No | No |
| | op(A) = A^T, op(B) = B^T | No | No |
| | op(A) = A^H, op(B) = B^T | No | No |
| | op(A) = A, op(B) = B^H | No | No |
| | op(A) = A^T, op(B) = B^H | No | No |
| | op(A) = A^H, op(B) = B^H | No | No |
| General Sparse Matrix - Sparse Matrix Multiplication (GEMM) (sm*sm->dm) | Y <- alpha*op(A)*op(B) + beta*Y | N/A | N/A |
| | op(A) = A, op(B) = B | No | No |
| | op(A) = A^T, op(B) = B | No | No |
| | op(A) = A^H, op(B) = B | No | No |
| | op(A) = A, op(B) = B^T | No | No |
| | op(A) = A^T, op(B) = B^T | No | No |
| | op(A) = A^H, op(B) = B^T | No | No |
| | op(A) = A, op(B) = B^H | No | No |
| | op(A) = A^T, op(B) = B^H | No | No |
| | op(A) = A^H, op(B) = B^H | No | No |
| Symmetric Rank-K update (SYRK) (sm*sm->sm) | C <- op(A)*op(A)^H | N/A | N/A |
| | op(A) = A | No | No |
| | op(A) = A^T | No | No |
| | op(A) = A^H | No | No |
| Symmetric Rank-K update (SYRK) (sm*sm->dm) | Y <- op(A)*op(A)^H | N/A | N/A |
| | op(A) = A | No | No |
| | op(A) = A^T | No | No |
| | op(A) = A^H | No | No |
| Symmetric Triple Product (SYPR) (op(sm)*sm*sm -> sm) | C <- op(A)*B*op(A)^H | N/A | N/A |
| | op(A) = A | No | No |
| | op(A) = A^T | No | No |
| | op(A) = A^H | No | No |
| Triangular Solve (TRSM) (inv(sm)*dm -> dm) | solve for Y, op(A)*Y = alpha*X | N/A | N/A |
| | op(A) = A | No | No |
| | op(A) = A^T | No | No |
| | op(A) = A^H | No | No |
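The only Level 3 combinations currently supported are sparse-times-dense GEMM with op(X) = X. A minimal sketch of that case follows, assuming zero-based CSR storage for A (m rows) and row-major dense storage for X and Y with leading dimensions ldx and ldy; these storage choices and names are assumptions of this sketch, not the library API.

```cpp
#include <cstddef>
#include <vector>

// Y <- alpha*A*X + beta*Y with op(A) = A, op(X) = X.
void csr_gemm(std::size_t m, std::size_t ncols,
              const std::vector<std::size_t>& row_ptr,
              const std::vector<std::size_t>& col_ind,
              const std::vector<double>& val,
              double alpha, const std::vector<double>& X, std::size_t ldx,
              double beta, std::vector<double>& Y, std::size_t ldy) {
    for (std::size_t i = 0; i < m; ++i) {
        for (std::size_t c = 0; c < ncols; ++c)
            Y[i * ldy + c] *= beta;                    // scale the output row
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
            const double a = alpha * val[k];
            const std::size_t j = col_ind[k];
            for (std::size_t c = 0; c < ncols; ++c)
                Y[i * ldy + c] += a * X[j * ldx + c];  // accumulate A(i,j)*X(j,:)
        }
    }
}
```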
Other
| Functionality | Operations | CPU | Intel GPU |
|---|---|---|---|
| Symmetric Gauss-Seidel Preconditioner (SYMGS) (update A*x=b, A=L+D+U) | x0 <- x*alpha; (L+D)*x1 = b - U*x0; (U+D)*x = b - L*x1 | No | No |
| Symmetric Gauss-Seidel Preconditioner with Matrix-Vector product (SYMGS_MV) (update A*x=b, A=L+D+U) | x0 <- x*alpha; (L+D)*x1 = b - U*x0; (U+D)*x = b - L*x1; y = A*x | No | No |
| LU Smoother (LU_SMOOTHER) (update A*x=b, A=L+D+U, E ~ inv(D)) | r = b - A*x; (L+D)*E*(U+D)*dx = r; y = x + dx | No | No |
| Sparse Matrix Add (ADD) | C <- alpha*op(A) + B | No | No |
| | op(A) = A^T | No | No |
| | op(A) = A^H | No | No |
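The SYMGS operation above is a forward Gauss-Seidel sweep followed by a backward sweep. Below is a reference sketch of one sweep on a zero-based CSR matrix with a nonzero diagonal; the array names are assumptions of this sketch. SYMGS_MV additionally returns y = A*x computed with the updated x.

```cpp
#include <cstddef>
#include <vector>

// One SYMGS sweep on A = L + D + U stored in CSR:
//   x0 <- alpha*x;  (L+D)*x1 = b - U*x0;  (U+D)*x = b - L*x1
void symgs_sweep(std::size_t n,
                 const std::vector<std::size_t>& row_ptr,
                 const std::vector<std::size_t>& col_ind,
                 const std::vector<double>& val,
                 const std::vector<double>& b,
                 std::vector<double>& x,
                 double alpha) {
    std::vector<double> x0(n), x1(n);
    for (std::size_t i = 0; i < n; ++i) x0[i] = alpha * x[i];

    // Forward sweep: (L+D)*x1 = b - U*x0, rows processed top to bottom.
    for (std::size_t i = 0; i < n; ++i) {
        double rhs = b[i], diag = 0.0;
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
            std::size_t j = col_ind[k];
            if (j < i)      rhs -= val[k] * x1[j];  // L part, already updated
            else if (j > i) rhs -= val[k] * x0[j];  // U part, old values
            else            diag = val[k];
        }
        x1[i] = rhs / diag;
    }

    // Backward sweep: (U+D)*x = b - L*x1, rows processed bottom to top.
    for (std::size_t i = n; i-- > 0; ) {
        double rhs = b[i], diag = 0.0;
        for (std::size_t k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
            std::size_t j = col_ind[k];
            if (j > i)      rhs -= val[k] * x[j];   // U part, already updated
            else if (j < i) rhs -= val[k] * x1[j];  // L part from forward sweep
            else            diag = val[k];
        }
        x[i] = rhs / diag;
    }
}
```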
Helper Functions
| Functionality | Operations | CPU | Intel GPU |
|---|---|---|---|
| Sort Indices of Matrix (ORDER) | N/A | No | No |
| Transpose of Sparse Matrix (TRANSPOSE) | A <- op(A) with op = trans or conjtrans | N/A | N/A |
| | transpose CSR/CSC matrix | No | No |
| | transpose BSR matrix | No | No |
| Sparse Matrix Format Converter (CONVERT) | N/A | No | No |
| Dense to Sparse Matrix Format Converter (CONVERT) | N/A | No | No |
| Copy Matrix Handle (COPY) | N/A | No | No |
| Create CSR Matrix Handle | N/A | Yes | Yes |
| Create CSC Matrix Handle | N/A | No | No |
| Create COO Matrix Handle | N/A | No | No |
| Create BSR Matrix Handle | N/A | No | No |
| Export CSR Matrix | Allows access to internal data in the CSR matrix handle | No | No |
| Export CSC Matrix | Allows access to internal data in the CSC matrix handle | No | No |
| Export COO Matrix | Allows access to internal data in the COO matrix handle | No | No |
| Export BSR Matrix | Allows access to internal data in the BSR matrix handle | No | No |
| Set Value in Matrix | N/A | No | No |
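Creating a CSR matrix handle is the one helper listed as supported on both CPU and Intel GPU. The sketch below shows a typical create/attach/release flow in the oneapi::mkl::sparse DPC++ domain. The routines named here (init_matrix_handle, set_csr_data, release_matrix_handle) exist, but their exact argument lists have changed across oneMKL releases, so treat the signatures as assumptions and verify against the headers of the release you build with.

```cpp
// Hedged sketch: attach user-owned CSR arrays to a sparse matrix handle,
// then release the handle. Signatures are approximate.
#include <cstdint>
#include <sycl/sycl.hpp>
#include <oneapi/mkl.hpp>

void create_and_release_csr(sycl::queue& q,
                            std::int32_t nrows, std::int32_t ncols,
                            std::int32_t* row_ptr,   // size nrows + 1
                            std::int32_t* col_ind,   // size nnz
                            float* values) {         // size nnz
    oneapi::mkl::sparse::matrix_handle_t A = nullptr;
    oneapi::mkl::sparse::init_matrix_handle(&A);

    // The handle typically references the user arrays rather than copying
    // them, so they must stay alive until the handle is released.
    oneapi::mkl::sparse::set_csr_data(q, A, nrows, ncols,
                                      oneapi::mkl::index_base::zero,
                                      row_ptr, col_ind, values);

    // ... call Sparse BLAS routines with A here ...

    oneapi::mkl::sparse::release_matrix_handle(q, &A);
    q.wait();  // ensure any asynchronous cleanup has finished
}
```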
Optimize Stages
| Functionality | Operations | CPU | Intel GPU |
|---|---|---|---|
| Add MEMORY hint and optimize | Controls whether optimizations that require additional memory may be applied. | No | No |
| Add GEMV hint and optimize | N/A | Yes | No |
| Add SYMV hint and optimize | N/A | Yes | No |
| Add TRMV hint and optimize | N/A | Yes | No |
| Add TRSV hint and optimize | N/A | Yes | No |
| Add GEMM hint and optimize | N/A | Yes | No |
| Add TRSM hint and optimize | N/A | No | No |
| Add DOTMV hint and optimize | N/A | Yes | No |
| Add SYMGS hint and optimize | N/A | No | No |
| Add SYMGS_MV hint and optimize | N/A | No | No |
| Add LU_SMOOTHER hint and optimize | N/A | No | No |
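A hint-and-optimize call tells the library which operation will be applied to a handle so that the matrix can be pre-processed before the first execution call. Below is a hedged sketch of the GEMV case (listed as CPU-only in the table above, while GEMV execution itself is available on both CPU and Intel GPU), reusing a CSR handle set up as in the previous sketch. The entry points optimize_gemv and gemv exist in oneapi::mkl::sparse, but signatures vary by release; dependency arguments are left at their defaults here.

```cpp
// Hedged sketch: GEMV hint + optimize, then y <- beta*y + alpha*A*x.
#include <sycl/sycl.hpp>
#include <oneapi/mkl.hpp>

void optimized_gemv(sycl::queue& q,
                    oneapi::mkl::sparse::matrix_handle_t A,  // CSR handle, already set up
                    float* x, float* y, float alpha, float beta) {
    // Optimize stage: hint that non-transposed matrix-vector products will follow.
    oneapi::mkl::sparse::optimize_gemv(q, oneapi::mkl::transpose::nontrans, A);

    // Execute stage: typically called many times with the same optimized handle.
    oneapi::mkl::sparse::gemv(q, oneapi::mkl::transpose::nontrans,
                              alpha, A, x, beta, y);
    q.wait();  // synchronize before reading y on the host
}
```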