LAPACK

From Wikipedia, the free encyclopedia
LAPACK (Netlib reference implementation)
Initial release: 1992
Stable release: 3.12.0[1] / 24 November 2023
Written in: Fortran 90
Type: Software library
License: BSD-new
Website: netlib.org/lapack/

LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition.[2] LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008).[3] The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.[2]: "The BLAS as the Key to Portability"

LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to effectively exploit the caches on modern cache-based architectures and the instruction-level parallelism of modern superscalar processors,[2]: "Factors that Affect Performance" and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation.[2]: "The BLAS as the Key to Portability" LAPACK has also been extended to run on distributed-memory systems in later packages such as ScaLAPACK and PLAPACK.[4]

Netlib LAPACK is licensed under a three-clause BSD-style license, a permissive free software license with few restrictions.[5]

Naming scheme


Subroutines in LAPACK have a naming convention which makes the identifiers very compact. This was necessary as the first Fortran standards only supported identifiers up to six characters long, so the names had to be shortened to fit into this limit.[2]: "Naming Scheme"

A LAPACK subroutine name is in the form pmmaaa, where:

  • p is a one-letter code denoting the type of numerical constants used. S and D stand for real floating-point arithmetic in single and double precision, respectively, while C and Z stand for complex arithmetic in single and double precision, respectively. The newer version, LAPACK95, uses generic subroutines to overcome the need to explicitly specify the data type.
  • mm is a two-letter code denoting the kind of matrix expected by the algorithm. The codes for the different kinds of matrices are reported below; the actual data are stored in a different format depending on the specific kind; e.g., when the code DI is given, the subroutine expects a vector of length n containing the elements on the diagonal, while when the code GE is given, the subroutine expects an n×n array containing the entries of the matrix.
  • aaa is a one- to three-letter code describing the actual algorithm implemented in the subroutine; e.g., SV denotes a subroutine to solve a linear system, while R denotes a rank-1 update.

For example, the subroutine to solve a linear system with a general (non-structured) matrix using real double-precision arithmetic is called DGESV.[2]: "Linear Equations"
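In practice, DGESV can be called from higher-level environments that expose the routine under its original name. A minimal sketch, assuming SciPy's low-level LAPACK wrappers (which mirror the Fortran names):

```python
import numpy as np
from scipy.linalg import lapack

# D + GE + SV: double precision, general matrix, solve a linear system.
a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([[9.0],
              [8.0]])

# dgesv returns the LU factors, the pivot indices, the solution,
# and an info flag (0 means success).
lu, piv, x, info = lapack.dgesv(a, b)

print(info)       # 0 on success
print(x.ravel())  # solution of a @ x = b
```

The `info` return value follows the LAPACK convention: zero for success, negative for an illegal argument, positive for a singular factor.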

Matrix types in the LAPACK naming scheme
Name Description
BD bidiagonal matrix
DI diagonal matrix
GB general band matrix
GE general matrix (i.e., unsymmetric, in some cases rectangular)
GG general matrices, generalized problem (i.e., a pair of general matrices)
GT general tridiagonal matrix
HB (complex) Hermitian band matrix
HE (complex) Hermitian matrix
HG upper Hessenberg matrix, generalized problem (i.e., a Hessenberg and a triangular matrix)
HP (complex) Hermitian, packed storage matrix
HS upper Hessenberg matrix
OP (real) orthogonal matrix, packed storage matrix
OR (real) orthogonal matrix
PB symmetric or Hermitian positive definite band matrix
PO symmetric or Hermitian positive definite matrix
PP symmetric or Hermitian positive definite, packed storage matrix
PT symmetric or Hermitian positive definite tridiagonal matrix
SB (real) symmetric band matrix
SP symmetric, packed storage matrix
ST (real) symmetric tridiagonal matrix
SY symmetric matrix
TB triangular band matrix
TG triangular matrices, generalized problem (i.e., a pair of triangular matrices)
TP triangular, packed storage matrix
TR triangular matrix (or in some cases quasi-triangular)
TZ trapezoidal matrix
UN (complex) unitary matrix
UP (complex) unitary, packed storage matrix
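The packed-storage variants (HP, OP, PP, SP, TP, UP) store only the n(n+1)/2 entries of one triangle, column by column, instead of a full n×n array. A minimal sketch of the layout in pure NumPy (the helper name `pack_upper` is hypothetical, for illustration only):

```python
import numpy as np

def pack_upper(a):
    """Pack the upper triangle of a symmetric matrix column by column,
    as LAPACK packed-storage routines expect:
    ap[i + j*(j+1)//2] = a[i, j] for i <= j (0-based)."""
    n = a.shape[0]
    ap = np.empty(n * (n + 1) // 2)
    k = 0
    for j in range(n):          # columns of the upper triangle
        for i in range(j + 1):  # rows 0..j of column j
            ap[k] = a[i, j]
            k += 1
    return ap

a = np.array([[1.0, 2.0, 4.0],
              [2.0, 3.0, 5.0],
              [4.0, 5.0, 6.0]])
ap = pack_upper(a)
print(ap)  # 6 entries instead of 9
```

For a symmetric or Hermitian matrix this roughly halves the storage, at the cost of the less convenient indexing shown in the docstring.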

Use with other programming languages and libraries


Many programming environments today support the use of libraries with C binding, allowing LAPACK routines to be used directly so long as a few restrictions are observed. Additionally, many other software libraries and tools for scientific and numerical computing are built on top of LAPACK, such as R,[6] MATLAB,[7] and SciPy.[8]
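Such wrappers typically preserve the LAPACK naming scheme, so the name itself documents the call. As a sketch, assuming SciPy's low-level wrappers, DPOTRF (double precision, positive definite, triangular factorization, i.e. Cholesky) can be invoked as:

```python
import numpy as np
from scipy.linalg import lapack

# A symmetric positive definite matrix (code PO in the naming scheme).
a = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# dpotrf computes the Cholesky factor; lower=1 requests the lower
# triangular factor L such that a = L @ L.T.
c, info = lapack.dpotrf(a, lower=1)

L = np.tril(c)        # only the requested triangle of c is meaningful
print(info)           # 0 on success
print(L @ L.T)        # reconstructs a
```

The same decomposition is reachable through the high-level `scipy.linalg.cholesky`; the low-level wrapper is shown here only to make the correspondence with the Fortran name explicit.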

Several alternative language bindings are also available.

Implementations


As with BLAS, LAPACK is sometimes forked or rewritten to provide better performance on specific systems. Some of the implementations are:

Accelerate
Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK.[9][10]
Netlib LAPACK
The official reference LAPACK.
Netlib ScaLAPACK
Scalable (distributed-memory) LAPACK, built on top of PBLAS.
Intel MKL
Intel's math routines for their x86 CPUs.
OpenBLAS
Open-source reimplementation of BLAS and LAPACK.
Gonum LAPACK
A partial native Go implementation.

Since LAPACK typically calls underlying BLAS routines to perform the bulk of its computations, simply linking to a better-tuned BLAS implementation can be enough to significantly improve performance. As a result, LAPACK is not reimplemented as often as BLAS is.

Similar projects


These projects provide functionality similar to LAPACK, but with a main interface differing from that of LAPACK:

Libflame
A dense linear algebra library. Has a LAPACK-compatible wrapper. Can be used with any BLAS, although BLIS is the preferred implementation.[11]
Eigen
A header-only library for linear algebra. Has a BLAS and a partial LAPACK implementation for compatibility.
MAGMA
The Matrix Algebra on GPU and Multicore Architectures (MAGMA) project develops a dense linear algebra library similar to LAPACK but for heterogeneous and hybrid architectures, including multicore systems accelerated with GPGPUs.
PLASMA
The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project is a modern replacement for LAPACK on multi-core architectures. PLASMA is a software framework for the development of asynchronous operations and features out-of-order scheduling with a runtime scheduler called QUARK, which may be used for any code that expresses its dependencies as a directed acyclic graph.[12]

References

  1. ^ "Release 3.12.0". 24 November 2023. Retrieved 19 December 2023.
  2. ^ Anderson, E.; Bai, Z.; Bischof, C.; Blackford, S.; Demmel, J.; Dongarra, J.; Du Croz, J.; Greenbaum, A.; Hammarling, S.; McKenney, A.; Sorensen, D. (1999). LAPACK Users' Guide (Third ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. ISBN 0-89871-447-8. Retrieved 28 May 2022.
  3. ^ "LAPACK 3.2 Release Notes". 16 November 2008.
  4. ^ "PLAPACK: Parallel Linear Algebra Package". cs.utexas.edu. University of Texas at Austin. 12 June 2007. Retrieved 20 April 2017.
  5. ^ "LICENSE.txt". Netlib. Retrieved 28 May 2022.
  6. ^ "R: LAPACK Library". stat.ethz.ch. Retrieved 19 March 2022.
  7. ^ "LAPACK in MATLAB". Mathworks Help Center. Retrieved 28 May 2022.
  8. ^ "Low-level LAPACK functions". SciPy v1.8.1 Manual. Retrieved 28 May 2022.
  9. ^ "Guides and Sample Code". developer.apple. Retrieved 7 July 2017.
  10. ^ "Guides and Sample Code". developer.apple. Retrieved 7 July 2017.
  11. ^ "amd/libflame: High-performance object-based library for DLA computations". GitHub. AMD. 25 August 2020.
  12. ^ "ICL". icl.eecs.utk.edu. Retrieved 7 July 2017.