Basic Linear Algebra Subprograms
BLAS (Basic Linear Algebra Subprograms) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
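As a minimal illustration (not tied to any particular implementation), a C program using the Netlib CBLAS interface might call a Level 1 routine (ddot) and a Level 3 routine (dgemm) as sketched below; it would be linked against whichever BLAS library is installed, e.g. the reference BLAS, ATLAS, or OpenBLAS (typically -lcblas -lblas or -lopenblas, depending on the installation).

    /* Minimal sketch of calling BLAS through the Netlib CBLAS interface. */
    #include <stdio.h>
    #include <cblas.h>

    int main(void) {
        double x[3] = {1.0, 2.0, 3.0};
        double y[3] = {4.0, 5.0, 6.0};

        /* Level 1: dot product x . y */
        double d = cblas_ddot(3, x, 1, y, 1);

        /* Level 3: C := 1.0*A*B + 0.0*C for 2x2 row-major matrices */
        double A[4] = {1, 2, 3, 4};
        double B[4] = {5, 6, 7, 8};
        double C[4] = {0, 0, 0, 0};
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

        printf("dot = %g, C[0][0] = %g\n", d, C[0]);
        return 0;
    }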

It originated as a Fortran library in 1979[1] and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the Netlib website. This Fortran library is known as the reference implementation (sometimes confusingly referred to as the BLAS library) and is not optimized for speed.

Most libraries that offer linear algebra routines conform to the BLAS interface, allowing library users to develop programs that are agnostic of the BLAS library being used. Examples of such libraries include: AMD Core Math Library (ACML), ATLAS, Intel Math Kernel Library (MKL), and OpenBLAS. ACML is no longer supported.[2] MKL is a freeware[3] and proprietary[4] vendor library optimized for x86 and x86-64 with a performance emphasis on Intel processors.[5] OpenBLAS is an open-source library that is hand-optimized for many of the popular architectures. ATLAS is a portable library that automatically optimizes itself for an arbitrary architecture. The LINPACK benchmarks rely heavily on the BLAS routine gemm for their performance measurements.

Much numerical software uses BLAS-compatible libraries to do linear algebra computations, including Armadillo, LAPACK, LINPACK, GNU Octave, Mathematica,[6] MATLAB,[7] NumPy,[8] and R.

Background

With the advent of numerical programming, sophisticated subroutine libraries became useful. These libraries would contain subroutines for common high-level mathematical operations such as root finding, matrix inversion, and solving systems of equations. The language of choice was FORTRAN. The most prominent numerical programming library was IBM's Scientific Subroutine Package (SSP).[9] These subroutine libraries allowed programmers to concentrate on their specific problems and avoid re-implementing well-known algorithms. The library routines would also be better than average implementations; matrix algorithms, for example, might use full pivoting to get better numerical accuracy. The libraries would also contain more efficient routines; for example, a library might include a routine for solving a system of equations whose coefficient matrix is upper triangular. The libraries would include single-precision and double-precision versions of some algorithms.

Initially, these subroutines used hard-coded loops for their low-level operations. For example, if a subroutine needed to perform a matrix multiplication, it would contain three nested loops. Linear algebra programs have many common low-level operations (the so-called "kernel" operations, not related to operating systems).[10] Between 1973 and 1977, several of these kernel operations were identified.[11] These kernel operations became defined subroutines that math libraries could call. The kernel calls had advantages over hard-coded loops: the library routine would be more readable, there were fewer chances for bugs, and the kernel implementation could be optimized for speed. A specification for these kernel operations using scalars and vectors, the level-1 Basic Linear Algebra Subroutines (BLAS), was published in 1979.[12] BLAS was used to implement the linear algebra subroutine library LINPACK.
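To illustrate the contrast, a hand-coded triple loop and the equivalent single call to the dgemm kernel (here through the CBLAS interface) might look as follows; this is a sketch, not the code of any particular library.

    /* Sketch: the same matrix product written as a hard-coded triple loop
       and as a single call to the dgemm kernel (via the CBLAS interface). */
    #include <cblas.h>

    /* hand-coded: C := A*B for n-by-n row-major matrices */
    void matmul_loops(int n, const double *A, const double *B, double *C) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double s = 0.0;
                for (int k = 0; k < n; k++)
                    s += A[i*n + k] * B[k*n + j];
                C[i*n + j] = s;
            }
    }

    /* delegated to the BLAS kernel: same result, but the implementation
       is free to use blocking, SIMD, and other machine-specific tuning */
    void matmul_blas(int n, const double *A, const double *B, double *C) {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    }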

The BLAS abstraction allows customization for high performance. For example, LINPACK is a general purpose library that can be used on many different machines without modification. LINPACK could use a generic version of BLAS. To gain performance, different machines might use tailored versions of BLAS. As computer architectures became more sophisticated, vector machines appeared. BLAS for a vector machine could use the machine's fast vector operations. (While vector processors eventually fell out of favor, vector instructions in modern CPUs are essential for optimal performance in BLAS routines.)

Other machine features became available and could also be exploited. Consequently, BLAS was augmented from 1984 to 1986 with level-2 kernel operations that concerned vector-matrix operations. Memory hierarchy was also recognized as something to exploit. Many computers have cache memory that is much faster than main memory; keeping matrix manipulations localized allows better usage of the cache. In 1987 and 1988, the level 3 BLAS were identified to do matrix-matrix operations. The level 3 BLAS encouraged block-partitioned algorithms. The LAPACK library uses level 3 BLAS.[13]

The original BLAS concerned only densely stored vectors and matrices. Further extensions to BLAS, such as for sparse matrices, have been addressed.[14]

ATLAS

Automatically Tuned Linear Algebra Software (ATLAS) attempts to make a BLAS implementation with higher performance. ATLAS defines many BLAS operations in terms of some core routines and then tries to automatically tailor the core routines to have good performance. A search is performed to choose good block sizes. The block sizes may depend on the computer's cache size and architecture. Tests are also made to see if copying arrays and vectors improves performance. For example, it may be advantageous to copy arguments so that they are cache-line aligned so user-supplied routines can use SIMD instructions.
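The following C sketch is only an illustration of this kind of empirical search, not ATLAS's actual code: it times a simple blocked matrix multiply for a few invented candidate block sizes and keeps the fastest one on the machine at hand.

    #include <time.h>

    /* one blocked pass of C += A*B for n-by-n row-major matrices, block size nb */
    static void blocked_gemm(int n, int nb, const double *A, const double *B, double *C) {
        for (int ii = 0; ii < n; ii += nb)
            for (int kk = 0; kk < n; kk += nb)
                for (int jj = 0; jj < n; jj += nb)
                    for (int i = ii; i < ii + nb && i < n; i++)
                        for (int k = kk; k < kk + nb && k < n; k++) {
                            double a = A[i*n + k];
                            for (int j = jj; j < jj + nb && j < n; j++)
                                C[i*n + j] += a * B[k*n + j];
                        }
    }

    /* time a few candidate block sizes and keep the fastest, as an autotuner would;
       the contents of C are not meaningful here, only the timing matters */
    int pick_block_size(int n, const double *A, const double *B, double *C) {
        int candidates[] = {16, 32, 64, 128, 256};
        int best_nb = candidates[0];
        double best_t = 1e300;
        for (int i = 0; i < 5; i++) {
            clock_t t0 = clock();
            blocked_gemm(n, candidates[i], A, B, C);
            double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
            if (t < best_t) { best_t = t; best_nb = candidates[i]; }
        }
        return best_nb;
    }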

Functionality

BLAS functionality is categorized into three sets of routines called "levels", which correspond both to the chronological order of definition and publication and to the degree of the polynomial in the complexities of the algorithms: Level 1 BLAS operations typically take linear time, O(n), Level 2 operations quadratic time, O(n²), and Level 3 operations cubic time, O(n³).[15] Modern BLAS implementations typically provide all three levels.

Level 1

This level consists of all the routines described in the original presentation of BLAS (1979),[1] which defined only vector operations on strided arrays: dot products, vector norms, a generalized vector addition of the form

    y ← αx + y

(called "axpy") and several other operations.

Level 2

This level contains matrix-vector operations including a generalized matrix-vector multiplication (gemv):

    y ← αAx + βy

as well as a solver for x in the linear equation

    Tx = y

with T being triangular, among other things. Design of the Level 2 BLAS started in 1984, with results published in 1988.[16] The Level 2 subroutines are especially intended to improve performance of programs using BLAS on vector processors, where Level 1 BLAS are suboptimal "because they hide the matrix-vector nature of the operations from the compiler."[16]
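A minimal sketch of the corresponding double-precision CBLAS calls, gemv for the matrix-vector product and trsv for the triangular solve; the numeric values are only illustrative.

    #include <cblas.h>

    int main(void) {
        /* Level 2 gemv: y := alpha*A*x + beta*y for a 2x3 row-major A */
        double A[6] = {1, 2, 3,
                       4, 5, 6};
        double x[3] = {1, 1, 1};
        double y[2] = {0, 0};
        cblas_dgemv(CblasRowMajor, CblasNoTrans, 2, 3,
                    1.0, A, 3, x, 1, 0.0, y, 1);   /* y = {6, 15} */

        /* Level 2 trsv: solve T*b = rhs in place, T lower triangular */
        double T[4] = {2, 0,
                       1, 4};
        double b[2] = {2, 9};                      /* overwritten with the solution {1, 2} */
        cblas_dtrsv(CblasRowMajor, CblasLower, CblasNoTrans, CblasNonUnit,
                    2, T, 2, b, 1);
        return 0;
    }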

Level 3

This level, formally published in 1990,[15] contains matrix-matrix operations, including a "general matrix multiplication" (gemm), of the form

    C ← αAB + βC

where A and B can optionally be transposed or Hermitian-conjugated inside the routine, and all three matrices may be strided. The ordinary matrix multiplication AB can be performed by setting α to one and C to an all-zeros matrix of the appropriate size.

Also included in Level 3 are routines for solving

    B ← αT⁻¹B,

where T is a triangular matrix, among other functionality.
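A minimal sketch of the corresponding double-precision CBLAS calls, gemm for the general matrix product and trsm for the triangular solve with multiple right-hand sides; the values are only illustrative.

    #include <cblas.h>

    int main(void) {
        /* Level 3 gemm: C := alpha*A*B + beta*C for 2x2 row-major matrices */
        double A[4] = {1, 2, 3, 4};
        double B[4] = {5, 6, 7, 8};
        double C[4] = {1, 1, 1, 1};
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 1.0, C, 2);

        /* Level 3 trsm: X := alpha*inv(T)*X, T lower triangular, solved in place */
        double T[4] = {2, 0,
                       1, 4};
        double X[4] = {2, 4,
                       9, 10};                     /* overwritten with the solution */
        cblas_dtrsm(CblasRowMajor, CblasLeft, CblasLower, CblasNoTrans, CblasNonUnit,
                    2, 2, 1.0, T, 2, X, 2);
        return 0;
    }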

Due to the ubiquity of matrix multiplications in many scientific applications, including for the implementation of the rest of Level 3 BLAS,[17] and because faster algorithms exist beyond the obvious repetition of matrix-vector multiplication, gemm is a prime target of optimization for BLAS implementers. For example, by decomposing one or both of A, B into block matrices, gemm can be implemented recursively. This is one of the motivations for including the β parameter, so the results of previous blocks can be accumulated. Note that this decomposition requires the special case β = 1, which many implementations optimize for, thereby eliminating one multiplication for each value of C. This decomposition allows for better locality of reference both in space and time of the data used in the product. This, in turn, takes advantage of the cache on the system.[18] For systems with more than one level of cache, the blocking can be applied a second time to the order in which the blocks are used in the computation. Both of these levels of optimization are used in implementations such as ATLAS. More recently, implementations by Kazushige Goto have shown that blocking only for the L2 cache, combined with careful amortizing of copying to contiguous memory to reduce TLB misses, is superior to ATLAS. A highly tuned implementation based on these ideas is part of the GotoBLAS and OpenBLAS.
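As a minimal sketch of the block decomposition just described (not the code of any particular BLAS implementation), the following C fragment computes C := A·B by splitting the inner dimension into panels and folding each panel product into C through the β parameter: β = 0 clears C on the first step and β = 1 accumulates on every later step.

    #include <cblas.h>

    /* Sketch: C := A*B by splitting the k-dimension into panels of width nb.
       Each panel product is folded into C through the beta parameter. */
    void gemm_panels(int m, int n, int k, int nb,
                     const double *A, const double *B, double *C) {
        for (int p = 0; p < k; p += nb) {
            int w = (p + nb <= k) ? nb : k - p;       /* width of this panel */
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        m, n, w,
                        1.0, A + p, k,                /* m x w column panel of A (lda = k) */
                        B + p * n, n,                 /* w x n row panel of B (ldb = n) */
                        (p == 0) ? 0.0 : 1.0,         /* accumulate previous panels */
                        C, n);
        }
    }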

Implementations

Accelerate
Apple's framework for Mac OS X and iOS, which includes tuned versions of BLAS and LAPACK.[19][20]
ACML
The AMD Core Math Library, supporting the AMD Athlon and Opteron CPUs under Linux and Windows.[21]
C++ AMP BLAS
The C++ AMP BLAS Library is an open source implementation of BLAS for Microsoft's AMP language extension for Visual C++.[22]
ATLAS
Automatically Tuned Linear Algebra Software, an open source implementation of BLAS APIs for C and Fortran 77.[23]
BLIS
BLAS-like Library Instantiation Software framework for rapid instantiation.[24]
cuBLAS
Optimized BLAS for NVIDIA based GPU cards.[25]
clBLAS
An OpenCL implementation of BLAS.[26]
Eigen BLAS
A Fortran 77 and C BLAS library implemented on top of the open source Eigen library, supporting x86, x86-64, ARM (NEON), and PowerPC architectures.[1] (Note: as of Eigen 3.0.3, the BLAS interface is not built by default and the documentation refers to it as "a work in progress which is far to be ready for use".)
ESSL
IBM's Engineering and Scientific Subroutine Library, supporting the PowerPC architecture under AIX and Linux.[27]
GotoBLAS
Kazushige Goto's BSD-licensed implementation of BLAS, tuned in particular for Intel Nehalem/Atom, VIA Nano, and AMD Opteron processors.[28]
HP MLIB
HP's Math library supporting IA-64, PA-RISC, x86 and Opteron architectures under HP-UX and Linux.
Intel MKL
The Intel Math Kernel Library, supporting 32-bit and 64-bit x86, available free from Intel.[3] Includes optimizations for Intel Pentium, Core and Intel Xeon CPUs and Intel Xeon Phi; support for Linux, Windows and Mac OS X.[29]
MathKeisan
NEC's math library, supporting NEC SX architecture under SUPER-UX, and Itanium under Linux.[30]
Netlib BLAS
The official reference implementation on Netlib, written in Fortran 77.[31]
Netlib CBLAS
Reference C interface to the BLAS. It is also possible (and popular) to call the Fortran BLAS from C.[32]
OpenBLAS
Optimized BLAS based on GotoBLAS hosted at GitHub,[33] supporting x86, x86-64, MIPS, ARM, and ARM64 processors.[34]
PDLIB/SX
NEC's Public Domain Mathematical Library for the NEC SX-4 system.[35]
SCSL
SGI's Scientific Computing Software Library contains BLAS and LAPACK implementations for SGI's Irix workstations.[36]
Sun Performance Library
Optimized BLAS and LAPACK for SPARC, Core and AMD64 architectures under Solaris 8, 9, and 10 as well as Linux.[37]

Similar libraries but not compatible with BLAS

Armadillo
Armadillo is a C++ linear algebra library aiming towards a good balance between speed and ease of use. It employs template classes, and has optional links to BLAS/ATLAS and LAPACK. It is sponsored by NICTA (in Australia) and is licensed under a free license.[38]
clMath
clMath, formerly AMD Accelerated Parallel Processing Math Libraries (APPML), is an open-source project that contains FFT and BLAS functions (all three levels) written in OpenCL. It is designed to run on AMD GPUs supporting OpenCL, and also works on CPUs to facilitate multicore programming and debugging.[39]
CUDA SDK
The NVIDIA CUDA SDK includes BLAS functionality for writing C programs that run on GeForce 8 Series or newer graphics cards.
Eigen
The Eigen template library provides an easy-to-use, highly generic C++ template interface to matrix/vector operations and related algorithms such as solvers and decompositions. It uses the CPU's vector capabilities and is optimized for fixed-size, dynamic-size, and sparse matrices.[40]
Elemental
Elemental is open-source software for distributed-memory dense and sparse-direct linear algebra and optimization.[41]
GSL
The GNU Scientific Library contains a multi-platform implementation in C which is distributed under the GNU General Public License.
HASEM
A C++ template library that can solve linear equations and compute eigenvalues. It is licensed under the BSD License.[42]
LAMA
The Library for Accelerated Math Applications (LAMA) is a C++ template library for writing numerical solvers targeting various hardware (e.g. GPUs through CUDA or OpenCL) on distributed-memory systems, hiding the hardware-specific programming from the program developer.
Libflame
The FLAME project's implementation of a dense linear algebra library.[43]
MAGMA
Matrix Algebra on GPU and Multicore Architectures (MAGMA) project develops a dense linear algebra library similar to LAPACK but for heterogeneous and hybrid architectures including multicore systems accelerated with GPGPU graphics cards.[44]
MTL4
The Matrix Template Library version 4 is a generic C++ template library providing sparse and dense BLAS functionality. MTL4 establishes an intuitive interface (similar to MATLAB) and broad applicability thanks to generic programming.
PLASMA
The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project is a modern replacement for LAPACK on multi-core architectures. PLASMA is a software framework for the development of asynchronous operations and features out-of-order scheduling with a runtime scheduler called QUARK that may be used for any code that expresses its dependencies with a directed acyclic graph.[45]
uBLAS
A generic C++ template class library providing BLAS functionality. Part of the Boost library. It provides bindings to many hardware-accelerated libraries in a unifying notation. Moreover, uBLAS focuses on correctness of the algorithms using advanced C++ features.[46]

Sparse BLAS

Several extensions to BLAS for handling sparse matrices have been suggested over the course of the library's history; a small set of sparse matrix kernel routines were finally standardized in 2002.[47]

See also

References

  1. ^ a b Lawson, C. L.; Hanson, R. J.; Kincaid, D.; Krogh, F. T. (1979). "Basic Linear Algebra Subprograms for FORTRAN usage". ACM Trans. Math. Software. 5: 308–323. doi:10.1145/355841.355847. Algorithm 539.
  2. ^ "ACML – AMD Core Math Library". AMD. 2013. Retrieved 26 August 2015.
  3. ^ a b "No Cost Options for Intel Math Kernel Library (MKL), Support yourself, Royalty-Free". Intel. 2015. Retrieved 31 August 2015.
  4. ^ "Intel® Math Kernel Library (Intel® MKL)". Intel. 2015. Retrieved 25 August 2015.
  5. ^ "Optimization Notice". Intel. 2012. Retrieved 10 April 2013.
  6. ^ Douglas Quinney (2003). "So what's new in Mathematica 5.0?" (PDF). MSOR Connections. 3 (4). The Higher Education Academy.
  7. ^ Cleve Moler (2000). "MATLAB Incorporates LAPACK". MathWorks. Retrieved 26 October 2013.
  8. ^ "The NumPy array: a structure for efficient numerical computation". Computing in Science and Engineering. IEEE. 2011. {{cite journal}}: Unknown parameter |authors= ignored (help)
  9. ^ Boisvert, Ronald F. (2000). "Mathematical software: past, present, and future". Mathematics and Computers in Simulation. 54 (4–5): 227–241. arXiv:cs/0004004. doi:10.1016/S0378-4754(00)00185-3.
  10. ^ Even the SSP (which appeared around 1966) had some basic routines such as RADD (add rows), CADD (add columns), SRMA (scale row and add to another row), and RINT (row interchange). These routines apparently were not used as kernel operations to implement other routines such as matrix inversion. See IBM (1970), System/360 Scientific Subroutine Package, Version III, Programmer's Manual (5th ed.), International Business Machines, GH20-0205-4.
  11. ^ BLAST Forum 2001, p. 1.
  12. ^ Lawson et al. 1979.
  13. ^ BLAST Forum 2001, pp. 1–2.
  14. ^ BLAST Forum 2001, p. 2.
  15. ^ a b Dongarra, Jack J.; Du Croz, Jeremy; Hammarling, Sven; Duff, Iain S. (1990). "A set of level 3 basic linear algebra subprograms". ACM Transactions on Mathematical Software. 16 (1): 1–17. doi:10.1145/77626.79170. ISSN 0098-3500.
  16. ^ a b Dongarra, Jack J.; Du Croz, Jeremy; Hammarling, Sven; Hanson, Richard J. (1988). "An extended set of FORTRAN Basic Linear Algebra Subprograms". ACM Trans. Math. Soft. 14: 1–17. doi:10.1145/42288.42291.
  17. ^ Goto, Kazushige; van de Geijn, Robert (2008). "High-performance implementation of the level-3 BLAS" (PDF). ACM Transactions on Mathematical Software. 35 (1).
  18. ^ Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9
  19. ^ http://developer.apple.com/library/mac/#releasenotes/Performance/RN-vecLib/
  20. ^ http://developer.apple.com/library/ios/#documentation/Accelerate/Reference/AccelerateFWRef/
  21. ^ http://developer.amd.com/acml.aspx
  22. ^ http://ampblas.codeplex.com/
  23. ^ http://math-atlas.sourceforge.net/
  24. ^ http://code.google.com/p/blis/
  25. ^ http://developer.nvidia.com/cublas
  26. ^ https://github.com/clMathLibraries/clBLAS
  27. ^ http://publib.boulder.ibm.com/infocenter/clresctr/index.jsp?topic=/com.ibm.cluster.essl.doc/esslbooks.html
  28. ^ http://www.tacc.utexas.edu/tacc-projects/gotoblas2/
  29. ^ http://software.intel.com/en-us/intel-mkl/
  30. ^ http://www.mathkeisan.com/
  31. ^ http://www.netlib.org/blas/
  32. ^ http://www.netlib.org/blas
  33. ^ xianyi/OpenBLAS - GitHub
  34. ^ OpenBLAS : An optimized BLAS library
  35. ^ http://www.nec.co.jp/hpc/mediator/sxm_e/software/61.html
  36. ^ http://www.sgi.com/products/software/scsl.html
  37. ^ http://www.oracle.com/technetwork/server-storage/solarisstudio/overview/index.html
  38. ^ http://arma.sourceforge.net/
  39. ^ http://developer.amd.com/tools/heterogeneous-computing/amd-accelerated-parallel-processing-math-libraries/
  40. ^ http://eigen.tuxfamily.org
  41. ^ Elemental: distributed-memory dense and sparse-direct linear algebra and optimization — Elemental
  42. ^ http://sourceforge.net/projects/hasem/
  43. ^ http://z.cs.utexas.edu/wiki/flame.wiki/FrontPage
  44. ^ http://icl.eecs.utk.edu/magma/
  45. ^ http://icl.eecs.utk.edu/
  46. ^ http://www.boost.org/doc/libs/release/libs/numeric/ublas/doc/index.htm
  47. ^ Duff, Iain S.; Heroux, Michael A.; Pozo, Roldan (2002). "An Overview of the Sparse Basic Linear Algebra Subprograms: The New Standard from the BLAS Technical Forum". TOMS. 28 (2): 239–267.
  • BLAST Forum (21 August 2001), Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard, Knoxville, TN: University of Tennessee
  • Dodson, D. S.; Grimes, R. G. (1982), "Remark on algorithm 539: Basic Linear Algebra Subprograms for Fortran usage", ACM Trans. Math. Software, 8: 403–404, doi:10.1145/356012.356020
  • Dodson, D. S. (1983), "Corrigendum: Remark on "Algorithm 539: Basic Linear Algebra Subroutines for FORTRAN usage"", ACM Trans. Math. Software, 9: 140, doi:10.1145/356022.356032
  • J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, Algorithm 656: An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 14 (1988), pp. 18–32.
  • J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 16 (1990), pp. 1–17.
  • J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, Algorithm 679: A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Soft., 16 (1990), pp. 18–28.
New BLAS
  • L. S. Blackford, J. Demmel, J. Dongarra, I. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, R. C. Whaley, An Updated Set of Basic Linear Algebra Subprograms (BLAS), ACM Trans. Math. Soft., 28-2 (2002), pp. 135–151.
  • J. Dongarra, Basic Linear Algebra Subprograms Technical Forum Standard, International Journal of High Performance Applications and Supercomputing, 16(1) (2002), pp. 1–111, and International Journal of High Performance Applications and Supercomputing, 16(2) (2002), pp. 115–199.
  • BLAS homepage on Netlib.org
  • BLAS FAQ
  • BLAS Quick Reference Guide from LAPACK Users' Guide
  • Lawson Oral History One of the original authors of the BLAS discusses its creation in an oral history interview. Charles L. Lawson Oral history interview by Thomas Haigh, 6 and 7 November 2004, San Clemente, California. Society for Industrial and Applied Mathematics, Philadelphia, PA.
  • Dongarra Oral History In an oral history interview, Jack Dongarra explores the early relationship of BLAS to LINPACK, the creation of higher level BLAS versions for new architectures, and his later work on the ATLAS system to automatically optimize BLAS for particular machines. Jack Dongarra, Oral history interview by Thomas Haigh, 26 April 2005, University of Tennessee, Knoxville TN. Society for Industrial and Applied Mathematics, Philadelphia, PA
  • How does BLAS get such extreme performance? Ten naive 1000×1000 matrix multiplications (10^10 floating point multiply-adds) take 15.77 seconds on a 2.6 GHz processor; a BLAS implementation takes 1.32 seconds.
  • An Overview of the Sparse Basic Linear Algebra Subprograms: The New Standard from the BLAS Technical Forum [2]