Hierarchical matrix

In numerical mathematics, hierarchical matrices (H-matrices)[1][2][3] are used as data-sparse approximations of non-sparse matrices. While a sparse matrix of dimension <math>n</math> can be represented efficiently in <math>O(n)</math> units of storage by storing only its non-zero entries, a non-sparse matrix would require <math>O(n^2)</math> units of storage, and using this type of matrix for large problems would therefore be prohibitively expensive in terms of storage and computing time. Hierarchical matrices provide an approximation requiring only <math>O(n k\,\log(n))</math> units of storage, where <math>k</math> is a parameter controlling the accuracy of the approximation. In typical applications, e.g., when discretizing integral equations,[4][5][6][7][8][9] preconditioning the resulting systems of linear equations,[10] or solving elliptic partial differential equations,[11][12][13][14] a rank proportional to <math>\log(1/\epsilon)^\gamma</math> with a small constant <math>\gamma</math> is sufficient to ensure an accuracy of <math>\epsilon</math>. Compared to many other data-sparse representations of non-sparse matrices, hierarchical matrices offer a major advantage: the results of matrix arithmetic operations like matrix multiplication, factorization or inversion can be approximated in <math>O(n k^\alpha\,\log(n)^\beta)</math> operations, where <math>\alpha,\beta\in\{1,2,3\}</math>.[2]

Basic idea


Hierarchical matrices rely on local low-rank approximations: let <math>I,J</math> be index sets, and let <math>G\in{\mathbb R}^{I\times J}</math> denote the matrix we have to approximate. In many applications (see above), we can find subsets <math>t\subseteq I,s\subseteq J</math> such that <math>G|_{t\times s}</math> can be approximated by a rank-<math>k</math> matrix. This approximation can be represented in factorized form <math>G|_{t\times s}\approx A B^*</math> with factors <math>A\in{\mathbb R}^{t\times k},B\in{\mathbb R}^{s\times k}</math>. While the standard representation of the matrix <math>G|_{t\times s}</math> requires <math>O((\#t)(\#s))</math> units of storage, the factorized representation requires only <math>O(k(\#t+\#s))</math> units. If <math>k</math> is not too large, the storage requirements are reduced significantly.
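
To make the storage comparison concrete, the following sketch compresses a single block with a truncated singular value decomposition; the kernel, point sets and rank are illustrative choices, and in practice the factors are usually obtained by the interpolation or cross approximation techniques described below.

<syntaxhighlight lang="python">
# Minimal sketch: a block generated by a smooth kernel on well-separated
# point sets is compressed to rank k, so G|_{t x s} ~ A @ B.T is stored
# with k*(#t + #s) numbers instead of (#t)(#s).
import numpy as np

nt, ns, k = 200, 150, 8
x = np.linspace(2.0, 3.0, nt)                  # target points, well separated
y = np.linspace(0.0, 1.0, ns)                  # from the source points
G = np.log(np.abs(x[:, None] - y[None, :]))    # illustrative smooth kernel block

U, sigma, Vt = np.linalg.svd(G, full_matrices=False)
A = U[:, :k] * sigma[:k]                       # A in R^{t x k}
B = Vt[:k, :].T                                # B in R^{s x k}, so G ~ A @ B.T

print(f"relative error: {np.linalg.norm(G - A @ B.T) / np.linalg.norm(G):.2e}")
print(f"standard storage: {nt * ns}, factorized: {k * (nt + ns)}")
</syntaxhighlight>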

In order to approximate the entire matrix <math>G</math>, it is split into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation in order to improve efficiency.
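
A minimal sketch of how such a family of submatrices can be constructed for points on an interval, assuming the standard admissibility condition <math>\min(\operatorname{diam}(t),\operatorname{diam}(s))\le\eta\,\operatorname{dist}(t,s)</math> from the H-matrix literature; the function and its data layout are illustrative, and real implementations use cluster trees for general geometries.

<syntaxhighlight lang="python">
import numpy as np

def split_blocks(t, s, x, eta=1.0, leaf=16):
    """Partition the block t x s (index ranges into the sorted points x)
    into admissible (low-rank) and small (dense) blocks."""
    (t0, t1), (s0, s1) = t, s
    diam_t = x[t1 - 1] - x[t0]
    diam_s = x[s1 - 1] - x[s0]
    dist = max(x[s0] - x[t1 - 1], x[t0] - x[s1 - 1], 0.0)
    if dist > 0.0 and min(diam_t, diam_s) <= eta * dist:
        return [(t, s, "low-rank")]            # admissible: store factorized
    if t1 - t0 <= leaf or s1 - s0 <= leaf:
        return [(t, s, "dense")]               # small: standard representation
    tm, sm = (t0 + t1) // 2, (s0 + s1) // 2    # bisect both index sets
    return [blk
            for tt in ((t0, tm), (tm, t1))
            for ss in ((s0, sm), (sm, s1))
            for blk in split_blocks(tt, ss, x, eta, leaf)]

x = np.sort(np.random.rand(256))               # points on the interval [0, 1]
blocks = split_blocks((0, 256), (0, 256), x)
print(len(blocks), "blocks,",
      sum(b[2] == "low-rank" for b in blocks), "of them low-rank")
</syntaxhighlight>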

Low-rank matrices are closely related to degenerate expansions used in panel clustering and the fast multipole method to approximate integral operators. In this sense, hierarchical matrices can be considered the algebraic counterparts of these techniques.

Application to integral operators


Hierarchical matrices are successfully used to treat integral equations, e.g., the single and double layer potential operators appearing in the boundary element method. A typical operator has the form

: <math>{\mathcal G}[u](x) = \int_\Omega \kappa(x,y) u(y) \,dy.</math>

The Galerkin method leads to matrix entries of the form

: <math>G_{ij} = \int_\Omega \int_\Omega \kappa(x,y) \varphi_i(x) \psi_j(y) \,dy\,dx,</math>

where <math>(\varphi_i)_{i\in I}</math> and <math>(\psi_j)_{j\in J}</math> are families of finite element basis functions. If the kernel function <math>\kappa</math> is sufficiently smooth, we can approximate it by polynomial interpolation to obtain

: <math>\tilde\kappa(x,y) = \sum_{\nu=1}^k \kappa(x,\xi_\nu) \ell_\nu(y),</math>

where <math>(\xi_\nu)_{\nu=1}^k</math> is the family of interpolation points and <math>(\ell_\nu)_{\nu=1}^k</math> is the corresponding family of Lagrange polynomials. Replacing <math>\kappa</math> by <math>\tilde\kappa</math> yields an approximation

: <math>\tilde G_{ij} = \int_\Omega \int_\Omega \tilde\kappa(x,y) \varphi_i(x) \psi_j(y) \,dy\,dx = \sum_{\nu=1}^k \left( \int_\Omega \kappa(x,\xi_\nu) \varphi_i(x) \,dx \right) \left( \int_\Omega \ell_\nu(y) \psi_j(y) \,dy \right) = \sum_{\nu=1}^k a_{i\nu} b_{j\nu}</math>

with the coefficients

: <math>a_{i\nu} = \int_\Omega \kappa(x,\xi_\nu) \varphi_i(x) \,dx, \qquad b_{j\nu} = \int_\Omega \ell_\nu(y) \psi_j(y) \,dy.</math>

If we choose <math>t\subseteq I,s\subseteq J</math> and use the same interpolation points for all <math>i\in t, j\in s</math>, we obtain <math>G|_{t\times s} \approx A B^*</math>.
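
The following sketch carries out this construction for a collocation-type matrix <math>G_{ij}=\kappa(x_i,y_j)</math>, i.e., with point evaluations in place of the Galerkin integrals to keep the code short; the kernel, point sets and interpolation order are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

def lagrange_basis(xi, y):
    """Evaluate all Lagrange polynomials for the nodes xi at the points y."""
    L = np.ones((len(y), len(xi)))
    for v in range(len(xi)):
        for mu in range(len(xi)):
            if mu != v:
                L[:, v] *= (y - xi[mu]) / (xi[v] - xi[mu])
    return L                                   # L[j, v] = l_v(y_j)

kappa = lambda x, y: 1.0 / np.abs(x - y)       # illustrative smooth kernel
x = np.linspace(2.0, 3.0, 100)                 # targets, well separated ...
y = np.linspace(0.0, 1.0, 80)                  # ... from the sources
k = 6
xi = 0.5 + 0.5 * np.cos((2 * np.arange(k) + 1) * np.pi / (2 * k))  # Chebyshev nodes in [0, 1]

A = kappa(x[:, None], xi[None, :])             # a_{iv} = kappa(x_i, xi_v)
B = lagrange_basis(xi, y)                      # b_{jv} = l_v(y_j)
G = kappa(x[:, None], y[None, :])
print(np.linalg.norm(G - A @ B.T) / np.linalg.norm(G))  # decays rapidly in k
</syntaxhighlight>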

Obviously, any other approximation separating the variables <math>x</math> and <math>y</math>, e.g., the multipole expansion, would also allow us to split the double integral into two single integrals and thus arrive at a similar factorized low-rank matrix.

Of particular interest are cross approximation techniques[6][7][15] that use only the entries of the original matrix <math>G</math> to construct a low-rank approximation.
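
A rough sketch of the idea, using full pivoting for clarity even though it touches every entry; the adaptive (partially pivoted) variants in the cited papers select pivots from individual rows and columns, so only a few entries of <math>G</math> ever have to be evaluated.

<syntaxhighlight lang="python">
import numpy as np

def aca_full_pivot(G, tol=1e-8, max_rank=50):
    """Build A, B with G ~ A @ B.T from crosses (rows and columns) of G."""
    R = G.copy()                            # explicit residual, for clarity only
    cols, rows = [], []
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)  # pivot entry
        if abs(R[i, j]) <= tol:
            break
        cols.append(R[:, j] / R[i, j])      # scaled column of the residual
        rows.append(R[i, :].copy())         # row of the residual
        R -= np.outer(cols[-1], rows[-1])   # rank-one correction
    if not cols:                            # G is (numerically) zero
        return np.zeros((G.shape[0], 0)), np.zeros((G.shape[1], 0))
    return np.array(cols).T, np.array(rows).T

x, y = np.linspace(2.0, 3.0, 100), np.linspace(0.0, 1.0, 80)
G = np.log(np.abs(x[:, None] - y[None, :]))
A, B = aca_full_pivot(G)
print(A.shape[1], np.linalg.norm(G - A @ B.T))
</syntaxhighlight>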

Application to elliptic partial differential equations


Since the solution operator of an elliptic partial differential equation can be expressed as an integral operator involving Green's function, it is not surprising that the inverse of the stiffness matrix arising from the finite element method and the spectral method can be approximated by a hierarchical matrix.

Green's function depends on the shape of the computational domain and is therefore usually not known. Nevertheless, approximate arithmetic operations can be employed to compute an approximate inverse without knowing the function explicitly.

Surprisingly, it is possible to prove[11][12][13][14] that the inverse can be approximated even if the differential operator involves non-smooth coefficients and Green's function is therefore not smooth.

Arithmetic operations


The most important innovation of the hierarchical matrix method is the development of efficient algorithms for performing (approximate) matrix arithmetic operations on non-sparse matrices, e.g., to compute approximate inverses, LU decompositions and solutions to matrix equations.

The central algorithm is the efficient matrix-matrix multiplication, i.e., the computation of <math>Z = Z + \alpha X Y</math> for hierarchical matrices <math>X,Y,Z</math> and a scalar factor <math>\alpha</math>. The algorithm requires the submatrices of the hierarchical matrices to be organized in a block tree structure and takes advantage of the properties of factorized low-rank matrices to compute the updated <math>Z</math> in <math>O(n k^2\,\log(n)^2)</math> operations.

Taking advantage of the block structure, the inverse can be computed by using recursion to compute inverses and Schur complements of diagonal blocks and combining both using the matrix-matrix multiplication. In a similar way, the LU decomposition[16][17] can be constructed using only recursion and multiplication. Both operations also require <math>O(n k^2\,\log(n)^2)</math> operations.
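
The recursion can be illustrated with dense blocks standing in for the hierarchical format; in the actual H-matrix algorithm, every product below is a truncated block multiplication, which is what yields the complexity stated above. The block sizes and test matrix are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def block_inverse(M, leaf=64):
    """Invert M via recursive 2 x 2 block elimination with Schur complements."""
    n = M.shape[0]
    if n <= leaf:
        return np.linalg.inv(M)                    # small blocks: standard inversion
    m = n // 2
    A, B = M[:m, :m], M[:m, m:]
    C, D = M[m:, :m], M[m:, m:]
    Ai = block_inverse(A, leaf)                    # recurse on the diagonal block
    Si = block_inverse(D - C @ Ai @ B, leaf)       # ... and on the Schur complement
    AiB, CAi = Ai @ B, C @ Ai
    return np.block([[Ai + AiB @ Si @ CAi, -AiB @ Si],
                     [-Si @ CAi,            Si]])

M = np.eye(512) + 0.01 * np.random.rand(512, 512)  # well-conditioned test matrix
print(np.linalg.norm(block_inverse(M) @ M - np.eye(512)))
</syntaxhighlight>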

H<sup>2</sup>-matrices


In order to treat very large problems, the structure of hierarchical matrices can be improved: H<sup>2</sup>-matrices[18][19] replace the general low-rank structure of the blocks by a hierarchical representation closely related to the fast multipole method in order to reduce the storage complexity to <math>O(n k)</math>.
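
The key ingredient is that all blocks connected to a cluster <math>t</math> share a single cluster basis <math>V_t</math>, which is nested across levels via small transfer matrices, so an admissible block only has to store a small coupling matrix. The sketch below shows the shared-basis representation <math>G|_{t\times s}\approx V_t S_b V_s^*</math> for one block, with an interpolation-based basis as an illustrative construction; the notation follows the H<sup>2</sup> literature, while the kernel and clusters are assumptions.

<syntaxhighlight lang="python">
import numpy as np

def cluster_basis(pts, k):
    """Interpolation-based cluster basis: Lagrange polynomials on Chebyshev nodes."""
    a, b = pts.min(), pts.max()
    xi = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * np.arange(k) + 1) * np.pi / (2 * k))
    V = np.ones((len(pts), k))
    for v in range(k):
        for mu in range(k):
            if mu != v:
                V[:, v] *= (pts - xi[mu]) / (xi[v] - xi[mu])
    return V, xi                               # V[i, v] = l_v(pts_i)

kappa = lambda x, y: 1.0 / np.abs(x - y)       # illustrative kernel
xt, xs = np.linspace(2.0, 3.0, 120), np.linspace(0.0, 1.0, 100)
Vt, xit = cluster_basis(xt, 6)                 # basis of the row cluster t
Vs, xis = cluster_basis(xs, 6)                 # basis of the column cluster s
Sb = kappa(xit[:, None], xis[None, :])         # small k x k coupling matrix
G = kappa(xt[:, None], xs[None, :])
print(np.linalg.norm(G - Vt @ Sb @ Vs.T) / np.linalg.norm(G))
</syntaxhighlight>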

In the context of boundary integral operators, replacing the fixed rank <math>k</math> by block-dependent ranks leads to approximations that preserve the rate of convergence of the underlying boundary element method at a complexity of <math>O(n)</math>.[20][21]

Arithmetic operations like multiplication, inversion, and Cholesky or LR factorization of H<sup>2</sup>-matrices can be implemented based on two fundamental operations: the matrix-vector multiplication with submatrices and the low-rank update of submatrices. While the matrix-vector multiplication is straightforward, implementing efficient low-rank updates with adaptively optimized cluster bases poses a significant challenge.[22]
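
In the simpler H-matrix format, the low-rank update reduces to the rank-truncated addition of two factorized matrices, sketched below; the helper name and the fixed target rank are illustrative, and adaptive variants choose the rank from the computed singular values instead.

<syntaxhighlight lang="python">
import numpy as np

def truncated_sum(A1, B1, A2, B2, k):
    """Return A, B with A @ B.T ~ A1 @ B1.T + A2 @ B2.T, rank at most k."""
    A = np.hstack([A1, A2])                # joint factors, rank k1 + k2
    B = np.hstack([B1, B2])
    QA, RA = np.linalg.qr(A)               # orthogonalize both factors
    QB, RB = np.linalg.qr(B)
    U, s, Vt = np.linalg.svd(RA @ RB.T)    # small (k1+k2) x (k1+k2) SVD
    return QA @ (U[:, :k] * s[:k]), QB @ Vt[:k, :].T

rng = np.random.default_rng(0)
A1, B1 = rng.standard_normal((200, 5)), rng.standard_normal((150, 5))
A2, B2 = rng.standard_normal((200, 5)), rng.standard_normal((150, 5))
A, B = truncated_sum(A1, B1, A2, B2, 8)    # best rank-8 approximation of the sum
print(np.linalg.norm(A1 @ B1.T + A2 @ B2.T - A @ B.T))
</syntaxhighlight>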

Literature

1. Hackbusch, Wolfgang (1999). "A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices". Computing. 62 (2): 89–108. doi:10.1007/s006070050015.
2. Grasedyck, Lars; Hackbusch, Wolfgang (2003). "Construction and arithmetics of H-matrices". Computing. 70 (4): 295–334. doi:10.1007/s00607-003-0019-1.
3. Hackbusch, Wolfgang (2015). Hierarchical Matrices: Algorithms and Analysis. Springer Series in Computational Mathematics. Vol. 49. Springer. doi:10.1007/978-3-662-47324-5. ISBN 978-3-662-47323-8.
4. Bebendorf, Mario (2008). Hierarchical Matrices: A Means to Efficiently Solve Elliptic Boundary Value Problems. Springer.
5. Hackbusch, Wolfgang; Khoromskij, Boris N. (2000). "A sparse H-matrix arithmetic. Part II: Application to multi-dimensional problems". Computing. 64: 21–47. doi:10.1007/PL00021408.
6. Bebendorf, Mario (2000). "Approximation of boundary element matrices". Numer. Math. 86 (4): 565–589. doi:10.1007/pl00005410.
7. Bebendorf, Mario; Rjasanow, Sergej (2003). "Adaptive low-rank approximation of collocation matrices". Computing. 70: 1–24. doi:10.1007/s00607-002-1469-6.
8. Börm, Steffen; Grasedyck, Lars (2005). "Hybrid cross approximation of integral operators". Numer. Math. 101 (2): 221–249. doi:10.1007/s00211-005-0618-1.
9. Börm, Steffen; Christophersen, Sven (2016). "Approximation of integral operators by Green quadrature and nested cross approximation". Numer. Math. 133 (3): 409–442. arXiv:1404.2234. doi:10.1007/s00211-015-0757-y.
10. Faustmann, Markus; Melenk, J. Markus; Praetorius, Dirk (2016). "Existence of H-matrix approximants to the inverses of BEM matrices: The simple-layer operator". Math. Comp. 85 (297): 119–152. arXiv:1311.5028. doi:10.1090/mcom/2990.
11. Bebendorf, Mario; Hackbusch, Wolfgang (2003). "Existence of H-matrix approximants to the inverse FE-matrix of elliptic operators with L<sup>∞</sup>-coefficients". Numer. Math. 95: 1–28. doi:10.1007/s00211-002-0445-6.
12. Börm, Steffen (2010). "Approximation of solution operators of elliptic partial differential equations by H- and H<sup>2</sup>-matrices". Numer. Math. 115 (2): 165–193. doi:10.1007/s00211-009-0278-7.
13. Faustmann, Markus; Melenk, J. Markus; Praetorius, Dirk (2015). "H-matrix approximability of the inverses of FEM matrices". Numer. Math. 131 (4): 615–642. arXiv:1308.0499. doi:10.1007/s00211-015-0706-9.
14. Shen, Jie; Wang, Yingwei; Xia, Jianlin (2016). "Fast structured direct spectral methods for differential equations with variable coefficients". SIAM Journal on Scientific Computing. 38 (1): A28–A54. doi:10.1137/140986815.
15. Tyrtyshnikov, Eugene (2000). "Incomplete cross approximation in the mosaic-skeleton method". Computing. 64 (4): 367–380. doi:10.1007/s006070070031.
16. Bebendorf, Mario (2007). "Why finite element discretizations can be factored by triangular hierarchical matrices". SIAM J. Numer. Anal. 45 (4): 1472–1494. doi:10.1137/060669747.
17. Grasedyck, Lars; Kriemann, Ronald; Le Borne, Sabine (2009). "Domain decomposition based H-LU preconditioning". Numer. Math. 112 (4): 565–600. doi:10.1007/s00211-009-0218-6.
18. Hackbusch, Wolfgang; Khoromskij, Boris N.; Sauter, Stefan (2002). "On H<sup>2</sup>-matrices". Lectures on Applied Mathematics. pp. 9–29. doi:10.1007/978-3-642-59709-1_2. ISBN 978-3-642-64094-0.
19. Börm, Steffen (2010). Efficient Numerical Methods for Non-local Operators: H<sup>2</sup>-Matrix Compression, Algorithms and Analysis. EMS Tracts in Mathematics. ISBN 9783037190913.
20. Sauter, Stefan (2000). "Variable order panel clustering". Computing. 64 (3): 223–261. doi:10.1007/s006070050045.
21. Börm, Steffen; Sauter, Stefan (2005). "BEM with linear complexity for the classical boundary integral operators". Math. Comp. 74 (251): 1139–1177. doi:10.1090/s0025-5718-04-01733-8.
22. Börm, Steffen; Reimer, Knut (2015). "Efficient arithmetic operations for rank-structured matrices based on hierarchical low-rank updates". Computing and Visualization in Science. 16 (6): 247–258. arXiv:1402.5056. doi:10.1007/s00791-015-0233-3.

Software


HLib (http://www.hlib.org) is a C software library implementing the most important algorithms for hierarchical and <math>{\mathcal H}^2</math>-matrices.

AHMED (https://www.wr.uni-bayreuth.de/en/software/ahmed) is a C++ software library that can be downloaded for educational purposes.

HLIBpro (http://www.hlibpro.com) is an implementation of the core hierarchical matrix algorithms for commercial applications.

H2Lib is an open source implementation of hierarchical matrix algorithms intended for research and teaching.

awesome-hierarchical-matrices is a repository containing a list of other H-matrix implementations.

HierarchicalMatrices.jl is a Julia package implementing hierarchical matrices.