{{Short description|Matrix-valued random variable}}
In [[probability theory]] and [[mathematical physics]], a '''random matrix''' is a [[matrix (mathematics)|matrix]]-valued [[random variable]]&mdash;that is, a matrix in which some or all of its entries are [[Sampling (statistics)|sampled]] randomly from a [[probability distribution]]. '''Random matrix theory (RMT)''' is the study of properties of random matrices, often as they become large. RMT provides techniques like [[mean-field theory]], diagrammatic methods, the [[cavity method]], or the [[replica method]] to compute quantities like [[Trace (linear algebra)|traces]], [[Eigenvalues and eigenvectors|spectral densities]], or scalar products between eigenvectors. Many physical phenomena, such as the [[Stationary state|spectrum]] of [[Atomic nucleus|nuclei]] of heavy atoms,<ref name=":0">{{Cite journal |last=Wigner |first=Eugene P. |date=1955 |title=Characteristic Vectors of Bordered Matrices With Infinite Dimensions |url=https://www.jstor.org/stable/1970079 |journal=Annals of Mathematics |volume=62 |issue=3 |pages=548–564 |doi=10.2307/1970079 |jstor=1970079 |issn=0003-486X}}</ref><ref name=":1">{{Cite report |url=https://www.osti.gov/biblio/4319287 |title=Conference on Neutron Physics by Time-Of-Flight Held at Gatlinburg, Tennessee, November 1 and 2, 1956 |editor-last=Block |editor-first=R. C. |editor-last2=Good |editor-first2=W. M. |date=1957-07-01 |publisher=Oak Ridge National Lab.|location=Oak Ridge, Tennessee |type=Report ORNL-2309 |doi=10.2172/4319287 |osti=4319287 |language=English |editor-last3=Harvey |editor-first3=J. A. |editor-last4=Schmitt |editor-first4=H. W. |editor-last5=Trammell |editor-first5=G. T.}}</ref> the [[Thermal conductivity and resistivity|thermal conductivity]] of a [[Solid-state physics|lattice]], or the emergence of [[quantum chaos]],<ref name=":2" /> can be modeled mathematically as problems concerning large, random matrices.


==Applications==
===Physics===


In [[nuclear physics]], random matrices were introduced by [[Eugene Wigner]] to model the nuclei of heavy atoms.<ref name=":0" /><ref name=":1" /> Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the [[eigenvalues]] of a random matrix, and should depend only on the symmetry class of the underlying evolution.<ref name="mehta">{{harvnb|Mehta|2004}}</ref> In [[solid-state physics]], random matrices model the behaviour of large disordered [[Hamiltonian (quantum mechanics)|Hamiltonians]] in the [[Mean field approximation|mean-field approximation]].


In [[quantum chaos]], the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.<ref name=":2">{{cite journal| last1=Bohigas|first1=O.| last2=Giannoni|first2=M.J.| last3=Schmit|first3=Schmit| title=Characterization of Chaotic Quantum Spectra and Universality of Level Fluctuation Laws|journal=Phys. Rev. Lett.| year=1984|volume=52|issue=1| pages=1–4| doi=10.1103/PhysRevLett.52.1| bibcode=1984PhRvL..52....1B}}</ref>


In [[quantum optics]], transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the [[boson sampling]] model).<ref>{{cite journal| last1=Aaronson|first1=Scott| last2=Arkhipov|first2=Alex| title=The computational complexity of linear optics |journal=Theory of Computing |date=2013 |volume=9 |pages=143–252 |doi=10.4086/toc.2013.v009a004 |doi-access=free}}</ref> Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is [[beam splitter]]s and phase shifters).<ref>{{cite journal |last1=Russell|first1=Nicholas |last2=Chakhmakhchyan|first2=Levon | last3=O'Brien|first3=Jeremy |last4=Laing|first4=Anthony |journal=New J. Phys. | title=Direct dialling of Haar random unitary matrices| date=2017| volume=19|issue=3| pages=033007| doi=10.1088/1367-2630/aa60ed |bibcode=2017NJPh...19c3007R |arxiv=1506.06220 | s2cid=46915633}}</ref>


Random matrix theory has also found applications to the chiral Dirac operator in [[quantum chromodynamics]],<ref>{{cite journal |vauthors=Verbaarschot JJ, Wettig T |title=Random Matrix Theory and Chiral Symmetry in QCD|journal=Annu. Rev. Nucl. Part. Sci. |volume=50|pages=343–410 |year=2000 |doi=10.1146/annurev.nucl.50.1.343 |arxiv = hep-ph/0003017 |bibcode = 2000ARNPS..50..343V |s2cid=119470008}}</ref> [[quantum gravity]] in two dimensions,<ref>{{cite journal |vauthors=Franchini F, Kravtsov VE |title=Horizon in random matrix theory, the Hawking radiation, and flow of cold atoms |journal=Phys. Rev. Lett. |volume=103 |issue=16 |pages=166401|date=October 2009 |pmid=19905710|doi=10.1103/PhysRevLett.103.166401 |bibcode = 2009PhRvL.103p6401F |arxiv = 0905.3533 |s2cid=11122957 }}</ref> [[mesoscopic|mesoscopic physics]],<ref>{{cite journal |vauthors=Sánchez D, Büttiker M |title=Magnetic-field asymmetry of nonlinear mesoscopic transport |journal=Phys. Rev. Lett. |volume=93 |issue=10 |pages=106802 |date=September 2004 |pmid=15447435 |doi=10.1103/PhysRevLett.93.106802 |bibcode=2004PhRvL..93j6802S | arxiv = cond-mat/0404387 |s2cid=11686506 }}</ref> [[spin-transfer torque]],<ref>{{cite journal |vauthors=Rychkov VS, Borlenghi S, Jaffres H, Fert A, Waintal X |title=Spin torque and waviness in magnetic multilayers: a bridge between Valet-Fert theory and quantum approaches |journal=Phys. Rev. Lett. |volume=103 |issue=6 |pages=066602 |date=August 2009 |pmid=19792592|doi=10.1103/PhysRevLett.103.066602|bibcode=2009PhRvL.103f6602R|arxiv = 0902.4360 |s2cid=209013 }}</ref> the [[fractional quantum Hall effect]],<ref>{{cite journal |author=Callaway DJE |title=Random matrices, fractional statistics, and the quantum Hall effect |journal=Phys. Rev. B |volume=43 |issue=10 |pages=8641–8643 |date=April 1991 |pmid=9996505 |doi=10.1103/PhysRevB.43.8641 |bibcode = 1991PhRvB..43.8641C |author-link=David J E Callaway }}</ref> [[Anderson localization]],<ref>{{cite journal |vauthors=Janssen M, Pracz K |title=Correlated random band matrices: localization-delocalization transitions |journal=Phys. Rev. E |volume=61 |issue=6 Pt A |pages=6278–86 |date=June 2000 |pmid=11088301 |doi=10.1103/PhysRevE.61.6278 |arxiv = cond-mat/9911467 |bibcode = 2000PhRvE..61.6278J |s2cid=34140447 }}</ref> [[quantum dots]],<ref>{{cite journal |vauthors=Zumbühl DM, Miller JB, Marcus CM, Campman K, Gossard AC |title=Spin-orbit coupling, antilocalization, and parallel magnetic fields in quantum dots |journal=Phys. Rev. Lett. |volume=89 |issue=27 |pages=276803 |date=December 2002 |pmid=12513231 | doi=10.1103/PhysRevLett.89.276803 |bibcode=2002PhRvL..89A6803Z |arxiv = cond-mat/0208436 |s2cid=9344722 }}</ref> and [[superconductors]]<ref>{{cite journal |author=Bahcall SR |title=Random Matrix Model for Superconductors in a Magnetic Field |journal=Phys. Rev. Lett. |volume=77 |issue=26 |pages=5276–5279 |date=December 1996|pmid=10062760|doi=10.1103/PhysRevLett.77.5276|bibcode=1996PhRvL..77.5276B|arxiv = cond-mat/9611136 |s2cid=206326136 }}</ref>


===Mathematical statistics and numerical analysis===


In [[multivariate statistics]], random matrices were introduced by [[John Wishart (statistician)|John Wishart]], who sought to [[estimation of covariance matrices|estimate covariance matrices]] of large samples.<ref name="wishart">{{harvnb|Wishart|1928}}</ref> [[Chernoff bound|Chernoff]]-, [[Bernstein inequalities (probability theory)|Bernstein]]-, and [[Hoeffding's inequality|Hoeffding]]-type inequalities can typically be strengthened when applied to the maximal eigenvalue (i.e. the eigenvalue of largest magnitude) of a finite sum of random [[Hermitian matrix|Hermitian matrices]].<ref>{{cite journal | last=Tropp | first=J.|title=User-Friendly Tail Bounds for Sums of Random Matrices |journal=Foundations of Computational Mathematics |year=2011 |doi=10.1007/s10208-011-9099-z|volume=12 | issue=4| pages=389–434 |arxiv=1004.4389| s2cid=17735965}}</ref> Random matrix theory is used to study the spectral properties of random matrices—such as sample covariance matrices—which is of particular interest in [[high-dimensional statistics]]. Random matrix theory also saw applications in [[Neural network|neuronal networks]]<ref>{{cite journal |last1=Pennington |first1=Jeffrey |last2=Bahri |first2=Yasaman |date=2017 |title=Geometry of Neural Network Loss Surfaces via Random Matrix Theory |journal=ICML'17: Proceedings of the 34th International Conference on Machine Learning |volume=70|s2cid=39515197 }}</ref> and [[deep learning]], with recent work utilizing random matrices to show that hyper-parameter tunings can be cheaply transferred between large neural networks without the need for re-training.<ref>{{cite arXiv |last=Yang |first=Greg |date=2022 |title=Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer |class=cs.LG |eprint=2203.03466v2}}</ref>


In [[numerical analysis]], random matrices have been used since the work of [[John von Neumann]] and [[Herman Goldstine]]<ref name="vng">{{harvnb|von Neumann|Goldstine|1947}}</ref> to describe computation errors in operations such as [[matrix multiplication]]. Although random entries are traditional "generic" inputs to an algorithm, the [[concentration of measure]] associated with random matrix distributions implies that random matrices will not test large portions of an algorithm's input space.<ref name="er">{{harvnb|Edelman|Rao|2005}}</ref>


===Number theory===


In [[number theory]], the distribution of zeros of the [[Riemann zeta function]] (and other [[L-function]]s) is modeled by the distribution of eigenvalues of certain random matrices.<ref>{{cite journal| last=Keating|first=Jon|title=The Riemann zeta-function and quantum chaology| journal=Proc. Internat. School of Phys. Enrico Fermi |year=1993 | volume=CXIX| pages=145–185| doi=10.1016/b978-0-444-81588-0.50008-0| isbn=9780444815880}}</ref> The connection was first discovered by [[Hugh Montgomery (mathematician)|Hugh Montgomery]] and [[Freeman Dyson]]. It is connected to the [[Hilbert–Pólya conjecture]].


===Free probability===


The relation of [[free probability]] with random matrices<ref>Mingo, James A.; Speicher, Roland (2017): [http://rolandspeicher.com/literature/mingo-speicher/ Free Probability and Random Matrices]. Fields Institute Monographs, Vol. 35, Springer, New York</ref> is a key reason for the wide use of free probability in other subjects. Voiculescu introduced the concept of freeness around 1983 in an operator-algebraic context; at the beginning there was no relation at all with random matrices. This connection was only revealed in 1991 by Voiculescu,<ref>Voiculescu, Dan (1991): "Limit laws for random matrices and free products". Inventiones mathematicae 104.1: 201–220</ref> who was motivated by the fact that the limit distribution which he found in his free central limit theorem had appeared before in Wigner's semi-circle law in the random matrix context.


===Computational neuroscience===


In the field of computational neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with random connectivity matrix were shown to exhibit a phase transition to chaos<ref>{{cite journal| last=Sompolinsky|first=H. |author2=Crisanti, A. |author3=Sommers, H. |title=Chaos in Random Neural Networks |journal=Physical Review Letters|date=July 1988|volume=61|issue=3|pages=259–262|doi=10.1103/PhysRevLett.61.259|pmid=10039285|bibcode = 1988PhRvL..61..259S |s2cid=16967637 }}</ref> when the variance of the synaptic weights crosses a critical value, at the limit of infinite system size. Results on random matrices have also shown that the dynamics of random-matrix models are insensitive to mean connection strength. Instead, the stability of fluctuations depends on connection strength variation<ref>{{cite journal|last=Rajan|first=Kanaka|author2=Abbott, L. |title=Eigenvalue Spectra of Random Matrices for Neural Networks| journal=Physical Review Letters| date=November 2006|volume=97|issue=18| pages=188104 |doi=10.1103/PhysRevLett.97.188104 | pmid=17155583 |bibcode = 2006PhRvL..97r8104R }}</ref><ref>{{cite journal|last=Wainrib|first=Gilles|author2=Touboul, Jonathan |title=Topological and Dynamical Complexity of Random Neural Networks|journal=Physical Review Letters |date=March 2013|volume=110|issue=11 |doi=10.1103/PhysRevLett.110.118101 | arxiv = 1210.5082 |bibcode = 2013PhRvL.110k8101W |pmid=25166580|page=118101|s2cid=1188555}}</ref> and time to synchrony depends on network topology.<ref>{{cite journal |last=Timme|first=Marc |author2=Wolf, Fred |author3=Geisel, Theo |title=Topological Speed Limits to Network Synchronization | journal=Physical Review Letters| date=February 2004|volume=92 | issue=7| doi=10.1103/PhysRevLett.92.074101|arxiv = cond-mat/0306512 |bibcode = 2004PhRvL..92g4101T |pmid=14995853 |page=074101|s2cid=5765956}}</ref><ref>{{cite journal |last1=Muir|first1=Dylan | last2=Mrsic-Flogel|first2=Thomas |title=Eigenspectrum bounds for semirandom matrices with modular and spatial structure for neural networks |journal=Phys. Rev. E|date=2015|volume=91 |issue=4 |page=042808 |doi=10.1103/PhysRevE.91.042808|pmid=25974548|bibcode = 2015PhRvE..91d2808M |url=http://edoc.unibas.ch/41441/1/20160120100936_569f4ed0ddeee.pdf}}</ref>


In the analysis of massive data such as [[fMRI]], random matrix theory has been applied in order to perform dimension reduction. When applying an algorithm such as [[Principal component analysis|PCA]], it is important to be able to select the number of significant components. The criteria for selecting components can be multiple (based on explained variance, Kaiser's method, eigenvalue, etc.). Random matrix theory in this context is represented by the [[Marchenko-Pastur distribution]], which guarantees the theoretical high and low limits of the eigenvalues of the covariance matrix of purely random data. The matrix calculated in this way becomes the null hypothesis that allows one to find the eigenvalues (and their eigenvectors) that deviate from the theoretical random range. The components thus selected form the reduced dimensional space (see examples in fMRI<ref>{{Cite journal |last1=Vergani |first1=Alberto A. |last2=Martinelli |first2=Samuele |last3=Binaghi |first3=Elisabetta |date=July 2019 |title=Resting state fMRI analysis using unsupervised learning algorithms |journal=Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization |volume=8 |issue=3 |url=https://www.tandfonline.com/doi/abs/10.1080/21681163.2019.1636413 |publisher=Taylor&Francis |pages=2168–1171 |doi=10.1080/21681163.2019.1636413}}</ref><ref>{{Cite journal|last1=Burda|first1=Z|last2=Kornelsen|first2=J|last3=Nowak|first3=MA|last4=Porebski|first4=B|last5=Sboto-Frankenstein|first5=U|last6=Tomanek|first6=B|last7=Tyburczyk|first7=J|title=Collective Correlations of Brodmann Areas fMRI Study with RMT-Denoising|journal=Acta Physica Polonica B|date=2013|volume=44|issue=6|page=1243|doi=10.5506/APhysPolB.44.1243|arxiv=1306.3825|bibcode=2013AcPPB..44.1243B}}</ref>).


===Optimal control===


In [[optimal control]] theory, the evolution of ''n'' state variables through time depends at any time on their own values and on the values of ''k'' control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of [[stochastic control]].<ref>{{cite book |last=Chow |first=Gregory P. |year=1976 |title=Analysis and Control of Dynamic Economic Systems |location=New York |publisher=Wiley |isbn=0-471-15616-7 }}</ref>{{rp|ch. 13}}<ref>{{cite journal |last=Turnovsky |first=Stephen |year=1974 |title=The stability properties of optimal economic policies |journal=American Economic Review |volume=64 |issue=1 |pages=136–148 |jstor=1814888 }}</ref> A key result in the case of [[linear-quadratic control]] with stochastic matrices is that the [[certainty equivalence principle]] does not apply: while in the absence of [[multiplier uncertainty]] (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, the optimal policy may differ if the state equation contains random coefficients.


===Computational mechanics===


In [[computational mechanics]], epistemic uncertainties underlying the lack of knowledge about the physics of the modeled system give rise to mathematical operators associated with the computational model, which are deficient in a certain sense. Such operators lack certain properties linked to unmodeled physics. When such operators are discretized to perform computational simulations, their accuracy is limited by the missing physics. To compensate for this deficiency of the mathematical operator, it is not enough to make the model parameters random; it is necessary to consider a mathematical operator that is random and can thus generate families of computational models, in the hope that one of these captures the missing physics. Random matrices have been used in this sense,<ref>{{Cite journal|last=Soize|first=C.|date=2005-04-08|title=Random matrix theory for modeling uncertainties in computational mechanics|url=https://hal-upec-upem.archives-ouvertes.fr/hal-00686187/file/publi-2005-CMAME-194_12-16_1333-1366-soize-preprint.pdf|journal=Computer Methods in Applied Mechanics and Engineering|language=en|volume=194|issue=12–16|pages=1333–1366|doi=10.1016/j.cma.2004.06.038|bibcode=2005CMAME.194.1333S |s2cid=58929758 |issn=1879-2138}}</ref> with applications in vibroacoustics, wave propagation, materials science, fluid mechanics, heat transfer, etc.


===Engineering===


Random matrix theory can be applied to electrical and communications engineering research to study, model and develop massive multiple-input multiple-output ([[MIMO]]) radio systems.{{Citation needed|date=May 2024}}


== History ==
{{Expand section|date=April 2024}}
Random matrix theory first gained attention beyond mathematics literature in the context of nuclear physics. Experiments by [[Enrico Fermi]] and others provided evidence that individual [[Nucleon|nucleons]] cannot be accurately modeled as moving independently, leading [[Niels Bohr]] to formulate the idea of a [[compound nucleus]]. Because there was no knowledge of direct nucleon-nucleon interactions, [[Eugene Wigner]] and [[Leonard Eisenbud]] proposed that the nuclear [[Hamiltonian (quantum mechanics)|Hamiltonian]] could be modeled as a random matrix. For larger atoms, the distribution of the [[energy eigenvalues]] of the Hamiltonian could be computed in order to approximate [[Scattering cross section|scattering cross sections]] by invoking the [[Wishart distribution]].<ref>{{Cite web |url=https://academic.oup.com/edited-volume/43656/chapter/365878918 |access-date=2024-04-22 |website=academic.oup.com |doi=10.1093/oxfordhb/9780198744191.013.2 |title=History – an overview |date=2015 |editor-last1=Akemann |editor-last2=Baik |editor-last3=Di Francesco |editor-first1=Gernot |editor-first2=Jinho |editor-first3=Philippe |last1=Bohigas |first1=Oriol |last2=Weidenmuller |first2=Hans |pages=15–40 |isbn=978-0-19-874419-1 }}</ref>


==Gaussian ensembles==
[[File:Random matrix eigenvalues.gif|500px|thumb|right|Distribution on the complex plane of the eigenvalues of a large number of 2&nbsp;×&nbsp;2 random matrices drawn from four different Gaussian ensembles.]]
The most-commonly studied random matrix [[probability distribution|distributions]] are the Gaussian ensembles: GOE, GUE and GSE. They are often denoted by their [[Freeman Dyson|Dyson]] index, ''β''&nbsp;=&nbsp;1 for GOE, ''β''&nbsp;=&nbsp;2 for GUE, and ''β''&nbsp;=&nbsp;4 for GSE. This index counts the number of real components per matrix element.



=== Definitions ===
The '''Gaussian unitary ensemble''' <math>\text{GUE}(n)</math> is described by the [[Gaussian measure]] with density
<math display="block"> \frac{1}{Z_{\text{GUE}(n)}} e^{- \frac{n}{2} \mathrm{tr} H^2} </math>
on the space of <math>n \times n</math> [[Hermitian matrices]] <math>H = (H_{ij})_{i,j=1}^n</math>. Here
<math display="block">Z_{\text{GUE}(n)} = 2^{n/2} \left(\frac{\pi}{n}\right)^{\frac{1}{2}n^2} </math>
is a normalization constant, chosen so that the integral of the density is equal to one. The term ''unitary'' refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models [[Hamiltonian (quantum mechanics)|Hamiltonians]] lacking time-reversal symmetry.


The '''Gaussian orthogonal ensemble''' <math>\text{GOE}(n)</math> is described by the Gaussian measure with density
<math display="block"> \frac{1}{Z_{\text{GOE}(n)}} e^{- \frac{n}{4} \mathrm{tr} H^2} </math>
on the space of ''n''&nbsp;×&nbsp;''n'' real symmetric matrices ''H''&nbsp;=&nbsp;(''H''<sub>''ij''</sub>){{su|b=''i'',''j''=1|p=''n''}}. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry. Equivalently, it is generated by <math>H = (G+G^T)/\sqrt{2n}</math>, where <math>G</math> is an <math>n\times n</math> matrix with IID samples from the standard normal distribution.


The '''Gaussian symplectic ensemble''' <math>\text{GSE}(n)</math> is described by the Gaussian measure with density
<math display="block"> \frac{1}{Z_{\text{GSE}(n)}} e^{- n \mathrm{tr} H^2} </math>
on the space of ''n''&nbsp;×&nbsp;''n'' Hermitian [[Quaternionic matrix|quaternionic matrices]], i.e., self-adjoint square matrices composed of [[quaternion]]s, {{math|1=''H'' = (''H''<sub>''ij''</sub>){{su|b=''i'',''j''=1|p=''n''}}}}. Its distribution is invariant under conjugation by the [[symplectic group]], and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.
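These definitions lend themselves to direct numerical simulation. The following is a minimal illustrative sketch (not taken from the cited sources), using the [[NumPy]] library; the helper names <code>goe</code> and <code>gue</code> are ours, and the <math>1/\sqrt{2n}</math> normalization matches the densities above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def goe(n):
    """Sample GOE(n): real symmetric, density proportional to exp(-n/4 tr H^2)."""
    g = rng.standard_normal((n, n))
    return (g + g.T) / np.sqrt(2 * n)

def gue(n):
    """Sample GUE(n): complex Hermitian, density proportional to exp(-n/2 tr H^2)."""
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (g + g.conj().T) / np.sqrt(2 * n)

H = gue(500)
eigenvalues = np.linalg.eigvalsh(H)  # real, since H is Hermitian
</syntaxhighlight>

With this normalization the off-diagonal entries have variance <math>1/n</math>, which matches the two-point correlations given below.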


=== Point correlation functions ===
The ensembles as defined here have Gaussian distributed matrix elements with mean ⟨''H''<sub>''ij''</sub>⟩ = 0, and two-point correlations given by
<math display="block"> \langle H_{ij} H_{mn}^* \rangle = \langle H_{ij} H_{nm} \rangle = \frac{1}{n} \delta_{im} \delta_{jn} + \frac{2 - \beta}{n \beta}\delta_{in}\delta_{jm} ,</math>
from which all higher correlations follow by [[Isserlis' theorem]].
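These covariances can be checked by simulation. A rough Monte Carlo sketch for the GOE case (<math>\beta = 1</math>), assuming NumPy and the <math>1/\sqrt{2n}</math> normalization used above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, trials = 8, 100_000

acc_offdiag = acc_diag = 0.0
for _ in range(trials):
    g = rng.standard_normal((n, n))
    h = (g + g.T) / np.sqrt(2 * n)    # one GOE(n) sample
    acc_offdiag += h[0, 1] * h[1, 0]  # formula predicts 1/n
    acc_diag += h[0, 0] ** 2          # formula predicts 2/n

print(acc_offdiag / trials, 1 / n)    # both approximately 0.125
print(acc_diag / trials, 2 / n)       # both approximately 0.25
</syntaxhighlight>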


=== Moment generating functions ===
The [[Moment-generating function|moment generating function]] for the GOE is<math display="block">\operatorname{E}\left[e^{\mathrm{tr}(VH)}\right] = e^{\frac{1}{4n}\|V + V^T\|_F^2},</math>where <math>\|\cdot \|_F</math> is the [[Matrix norm|Frobenius norm]].
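As a sanity check, this identity can be tested by Monte Carlo integration. An illustrative sketch under the same conventions (the small test matrix <math>V</math> is arbitrary):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n, trials = 6, 100_000
V = 0.3 * rng.standard_normal((n, n))  # arbitrary fixed test matrix

total = 0.0
for _ in range(trials):
    g = rng.standard_normal((n, n))
    h = (g + g.T) / np.sqrt(2 * n)     # GOE(n) sample
    total += np.exp(np.trace(V @ h))

lhs = total / trials                   # Monte Carlo estimate of E[exp(tr(V H))]
rhs = np.exp(np.linalg.norm(V + V.T, "fro") ** 2 / (4 * n))
print(lhs, rhs)                        # agree up to Monte Carlo error
</syntaxhighlight>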


=== Spectral density ===
[[File:Spectral density of gaussian ensembels, N = 1 to 32.png|thumb|361x361px|Spectral density of GOE/GUE/GSE for <math>N = 2^0, 2^1, \ldots, 2^5</math>, normalized so that the distributions converge to the [[Wigner semicircle distribution|semicircle distribution]]. The number of "humps" is equal to <math>N</math>.]]
The joint [[probability density function|probability density]] for the [[Eigenvalue, eigenvector and eigenspace|eigenvalues]] {{math|''λ''<sub>1</sub>, ''λ''<sub>2</sub>, ..., ''λ''<sub>''n''</sub>}} of GUE/GOE/GSE is given by
{{NumBlk||<math display="block">\frac{1}{Z_{\beta, n}} \prod_{k=1}^n e^{-\frac{\beta n}{4}\lambda_k^2}\prod_{i<j}\left|\lambda_j-\lambda_i\right|^\beta~,</math>|{{EquationRef|1}}}}
where ''Z''<sub>''β'',''n''</sub> is a normalization constant which can be explicitly computed, see [[Selberg integral]]. In the case of GUE (''β''&nbsp;=&nbsp;2), the formula (1) describes a [[determinantal point process]]. Eigenvalues repel as the joint probability density has a zero (of <math>\beta</math>th order) for coinciding eigenvalues <math>\lambda_j = \lambda_i</math>.


The distributions of the largest eigenvalue for GOE and GUE are explicitly solvable.<ref>{{cite journal |author=Chiani M| title=Distribution of the largest eigenvalue for real Wishart and Gaussian random matrices and a simple approximation for the Tracy-Widom distribution |journal=Journal of Multivariate Analysis |volume=129|pages=69–81|year=2014|doi=10.1016/j.jmva.2014.04.002|arxiv = 1209.3394 |s2cid=15889291}}</ref> They converge to the [[Tracy–Widom distribution]] after shifting and scaling appropriately.


=== Convergence to Wigner semicircular distribution ===
The spectrum, divided by <math>\sqrt{N\sigma^2}</math>, converges in distribution to the [[Wigner semicircle distribution|semicircular distribution]] on the interval <math>[-2, +2]</math>: <math>\rho(x) = \frac{1}{2 \pi}\sqrt{4-x^2}</math>. Here <math>\sigma^2</math> is the variance of the off-diagonal entries; the variance of the diagonal entries does not matter.
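A minimal sketch of this convergence for the GOE, assuming NumPy and Matplotlib. With the normalization above, <math>\sigma^2 = 1/n</math>, so <math>\sqrt{N\sigma^2} = 1</math> and no further rescaling is needed:

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n = 2000

g = rng.standard_normal((n, n))
H = (g + g.T) / np.sqrt(2 * n)                # GOE(n); off-diagonal variance 1/n
eigenvalues = np.linalg.eigvalsh(H)

x = np.linspace(-2, 2, 400)
plt.hist(eigenvalues, bins=80, density=True, alpha=0.5)
plt.plot(x, np.sqrt(4 - x**2) / (2 * np.pi))  # semicircle density
plt.show()
</syntaxhighlight>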


===Distribution of level spacings===


From the ordered sequence of eigenvalues <math>\lambda_1 < \ldots < \lambda_n < \lambda_{n+1} < \ldots</math>, one defines the normalized [[Level-spacing distribution|spacings]] <math>s = (\lambda_{n+1} - \lambda_n)/\langle s \rangle</math>, where <math>\langle s \rangle =\langle \lambda_{n+1} - \lambda_n \rangle</math> is the mean spacing. The probability distribution of spacings is approximately given by,
<math display="block"> p_1(s) = \frac{\pi}{2}s\, e^{-\frac{\pi}{4} s^2} </math>

for the orthogonal ensemble GOE <math>\beta=1</math>,
<math display="block"> p_2(s) = \frac{32}{\pi^2}s^2 \mathrm{e}^{-\frac{4}{\pi} s^2} </math>
for the unitary ensemble GUE <math>\beta=2</math>, and
<math display="block"> p_4(s) = \frac{2^{18}}{3^6\pi^3}s^4 e^{-\frac{64}{9\pi} s^2} </math>
for the symplectic ensemble GSE <math>\beta = 4</math>.


The numerical constants are such that <math> p_\beta(s) </math> is normalized:
<math display="block"> \int_0^\infty ds\,p_\beta(s) = 1 </math>


and the mean spacing is,
<math display="block"> \int_0^\infty ds\, s\, p_\beta(s) = 1, </math>


for <math> \beta = 1,2,4 </math>.
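The <math>\beta = 2</math> formula above is exact for the normalized spacing of a single 2&nbsp;×&nbsp;2 GUE matrix, which suggests a simple numerical illustration (a sketch assuming NumPy and Matplotlib):

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)

def gue2_spacing():
    """Eigenvalue spacing of one 2 x 2 GUE matrix."""
    g = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    h = (g + g.conj().T) / 2
    lo, hi = np.linalg.eigvalsh(h)     # sorted ascending
    return hi - lo

s = np.array([gue2_spacing() for _ in range(100_000)])
s /= s.mean()                          # normalize to unit mean spacing

x = np.linspace(0, 4, 400)
plt.hist(s, bins=100, density=True, alpha=0.5)
plt.plot(x, (32 / np.pi**2) * x**2 * np.exp(-4 * x**2 / np.pi))  # p_2(s)
plt.show()
</syntaxhighlight>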


==Generalizations==


''Wigner matrices'' are random Hermitian matrices <math display="inline"> H_n = (H_n(i,j))_{i,j=1}^n </math> such that the entries
<math display="block"> \left\{ H_n(i, j)~, \, 1 \leq i \leq j \leq n \right\} </math>
above the main diagonal are independent random variables with zero mean and have identical second moments.


''Invariant matrix ensembles'' are random Hermitian matrices with density on the space of real symmetric/Hermitian/quaternionic Hermitian matrices, which is of the form <math display="inline"> \frac{1}{Z_n} e^{- n\, \mathrm{tr}\, V(H)}~, </math> where the function {{math|''V''}} is called the potential.


The Gaussian ensembles are the only common special cases of these two classes of random matrices. This is a consequence of a theorem by Porter and Rosenzweig.<ref>{{Cite journal |last1=Porter |first1=C. E. |last2=Rosenzweig |first2=N. |date=1960-01-01 |title=STATISTICAL PROPERTIES OF ATOMIC AND NUCLEAR SPECTRA |url=https://www.osti.gov/biblio/4147616 |journal=Ann. Acad. Sci. Fennicae. Ser. A VI |language=English |volume=44|osti=4147616 }}</ref><ref>{{Citation |last1=Livan |first1=Giacomo |title=Classified Material |date=2018 |url=https://doi.org/10.1007/978-3-319-70885-0_3 |work=Introduction to Random Matrices: Theory and Practice |pages=15–21 |editor-last=Livan |editor-first=Giacomo |access-date=2023-05-17 |series=SpringerBriefs in Mathematical Physics |place=Cham |publisher=Springer International Publishing |language=en |doi=10.1007/978-3-319-70885-0_3 |isbn=978-3-319-70885-0 |last2=Novaes |first2=Marcel |last3=Vivo |first3=Pierpaolo |volume=26 |editor2-last=Novaes |editor2-first=Marcel |editor3-last=Vivo |editor3-first=Pierpaolo}}</ref>


==Spectral theory of random matrices==


The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.<ref>{{cite arXiv |last=Meckes |first=Elizabeth |title=The Eigenvalues of Random Matrices |date=2021-01-08 |class=math.PR |eprint=2101.02928}}</ref>


=== Empirical spectral measure ===
The ''empirical spectral measure'' {{math|''μ<sub>H</sub>''}} of {{math|''H''}} is defined by<math display="block"> \mu_{H}(A) = \frac{1}{n} \, \# \left\{ \text{eigenvalues of }H\text{ in }A \right\} = N_{1_A, H}, \quad A \subset \mathbb{R}. </math>


Usually, the limit of <math> \mu_{H} </math> is a deterministic measure; this is a particular case of [[self-averaging]]. The [[cumulative distribution function]] of the limiting measure is called the [[density of states|integrated density of states]] and is denoted ''N''(''λ''). If the integrated density of states is differentiable, its derivative is called the [[density of states]] and is denoted&nbsp;''ρ''(''λ'').


==== Alternative expressions ====
<math display="block"> \mu_{H} = \frac{1}{n} \sum_i \delta_{\lambda_i}, </math>
where <math>\delta_{\lambda_i}</math> denotes the [[Dirac measure]] at the eigenvalue <math>\lambda_i</math>.


=== Types of convergence ===
Given a matrix ensemble, we say that its spectral measures converge '''weakly''' to <math>\rho</math> if and only if for any measurable set <math>A</math> the ensemble average converges:<math display="block">\lim_{n \to \infty} \mathbb E_H[\mu_H(A)] = \rho(A).</math>The spectral measures converge '''weakly almost surely''' if, when we sample <math>H_1, H_2, H_3, \dots</math> independently from the ensemble, with probability 1,<math display="block">\lim_{n \to \infty} \mu_{H_n}(A) = \rho(A)</math>for any measurable set <math>A</math>.


'''In another sense''', weak almost sure convergence means that we sample <math>H_1, H_2, H_3, \dots</math>, not independently, but by "growing" the matrix (a [[stochastic process]]); then with probability 1, <math>\lim_{n \to \infty} \mu_{H_n}(A) = \rho(A)</math> for any measurable set <math>A</math>.


For example, we can "grow" a sequence of matrices from the Gaussian ensemble as follows:

* Sample a doubly infinite array of independent standard normal random variables <math>\{G_{i, j}\}_{i, j = 1, 2, 3, \dots}</math>.
* Define each <math>H_n = (G_n+G_n^T)/\sqrt{2n}</math>, where <math>G_n</math> is the matrix made of the entries <math>\{G_{i, j}\}_{i, j = 1, 2,\dots, n}</math>.

Note that not every matrix ensemble admits such a growing construction, but most common ones, including the three Gaussian ensembles, do.
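A sketch of this coupling for the GOE, assuming NumPy (the doubly infinite array is truncated at a finite size for practicality):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
n_max = 512
G = rng.standard_normal((n_max, n_max))  # one fixed array of IID entries

def grown_goe(n):
    """n-th matrix of the grown sequence: top-left n x n block of the same G."""
    g = G[:n, :n]
    return (g + g.T) / np.sqrt(2 * n)

# H_64 and H_128 are not independent: they share the entries
# of their common top-left 64 x 64 block.
H64, H128 = grown_goe(64), grown_goe(128)
</syntaxhighlight>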

===Global regime===
In the ''global regime'', one is interested in the distribution of linear statistics of the form <math>N_{f, H} = n^{-1} \text{tr} f(H)</math>.

The limit of the empirical spectral measure for Wigner matrices was described by [[Eugene Wigner]]; see [[Wigner semicircle distribution]] and [[Wigner surmise]]. As far as sample covariance matrices are concerned, a [[Marchenko–Pastur distribution|theory was developed by Marčenko and Pastur]].<ref name=MP>{{cite journal |last1=Marčenko |first1=V A |last2=Pastur |first2=L A |title=Distribution of eigenvalues for some sets of random matrices |journal=Mathematics of the USSR-Sbornik |volume=1 |issue=4 |year=1967 |doi=10.1070/SM1967v001n04ABEH001994 |pages=457–483 |bibcode = 1967SbMat...1..457M }}</ref><ref name=pastur72>{{harvnb|Pastur|1973}}</ref>
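A minimal numerical sketch of the Marčenko–Pastur law for sample covariance matrices, assuming NumPy and Matplotlib (IID standard normal data, aspect ratio <math>c = p/n < 1</math>):

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
p, n = 500, 2000                        # dimension p, sample size n
c = p / n

X = rng.standard_normal((p, n))
S = X @ X.T / n                         # sample covariance matrix
eigenvalues = np.linalg.eigvalsh(S)

lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
x = np.linspace(lo, hi, 400)
density = np.sqrt(np.maximum((hi - x) * (x - lo), 0)) / (2 * np.pi * c * x)

plt.hist(eigenvalues, bins=60, density=True, alpha=0.5)
plt.plot(x, density)                    # Marchenko–Pastur density
plt.show()
</syntaxhighlight>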

The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from [[potential theory]].<ref>{{cite journal|last1=Pastur|first1=L.|last2=Shcherbina|first2 = M.|author2-link= Mariya Shcherbina |title=On the Statistical Mechanics Approach in the Random Matrix Theory: Integrated Density of States |journal=J. Stat. Phys. |year=1995 |volume=79|issue=3–4 |pages=585–611 |doi=10.1007/BF02184872|bibcode = 1995JSP....79..585D |s2cid=120731790}}</ref>


====Fluctuations====


For the linear statistics {{math|1=''N''<sub>''f'',''H''</sub> = ''n''<sup>−1</sup> Σ ''f''(''λ''<sub>''j''</sub>)}}, one is also interested in the fluctuations about ∫&nbsp;''f''(''λ'')&nbsp;''dN''(''λ''). For many classes of random matrices, a central limit theorem of the form
<math display="block"> \frac{N_{f,H} - \int f(\lambda) \, dN(\lambda)}{\sigma_{f, n}} \overset{D}{\longrightarrow} N(0, 1) </math>
is known.<ref>{{cite journal| last=Johansson|first=K.| title=On fluctuations of eigenvalues of random Hermitian matrices|journal=Duke Math. J. | year=1998| volume=91| issue=1| pages=151–204 |doi=10.1215/S0012-7094-98-09108-6}}</ref><ref>{{cite journal |last=Pastur|first=L.A. |title=A simple approach to the global regime of Gaussian ensembles of random matrices|journal=Ukrainian Math. J.| year=2005|volume=57|issue=6|pages=936–966 |doi=10.1007/s11253-005-0241-4 | s2cid=121531907| url=http://dspace.nbuv.gov.ua/handle/123456789/165749}}</ref>

==== The variational problem for the unitary ensembles ====
Consider the measure
:<math>\mathrm{d}\mu_N(\lambda)=\frac{1}{\widetilde{Z}_N}e^{-H_N(\lambda)}\mathrm{d}\lambda,\qquad H_N(\lambda)=-\sum\limits_{j\neq k}\ln|\lambda_j-\lambda_k|+N\sum\limits_{j=1}^N Q(\lambda_j),</math>
where <math>Q</math> is the potential of the ensemble, and <math>\nu</math> denotes the empirical spectral measure.

We can rewrite <math>H_N(\lambda)</math> with <math>\nu</math> as
:<math>H_N(\lambda)=N^2\left[-\int\int_{x\neq y}\ln |x-y|\mathrm{d}\nu(x)\mathrm{d}\nu(y)+\int Q(x)\mathrm{d}\nu(x)\right],</math>

the probability measure is now of the form
:<math>\mathrm{d}\mu_N(\lambda)=\frac{1}{\widetilde{Z}_N}e^{-N^2 I_Q(\nu)}\mathrm{d}\lambda,</math>
where <math>I_Q(\nu)</math> is the functional inside the square brackets above.

Let now
:<math>M_1(\mathbb{R})=\left\{\nu:\nu\geq 0,\ \int_{\mathbb{R}}\mathrm{d}\nu = 1\right\}</math>
be the space of one-dimensional probability measures and consider the minimizer
:<math>E_Q=\inf\limits_{\nu \in M_1(\mathbb{R})}-\int\int_{x\neq y} \ln |x-y|\mathrm{d}\nu(x)\mathrm{d}\nu(y)+\int Q(x)\mathrm{d}\nu(x).</math>

The infimum <math>E_Q</math> is attained by a unique equilibrium measure <math>\nu_{Q}</math>, characterized by the [[Calculus of variations|Euler–Lagrange variational conditions]] for some real constant <math>l</math>:
:<math>2\int_\mathbb{R}\log |x-y|\mathrm{d}\nu(y)-Q(x)=l,\quad x\in J</math>
:<math>2\int_\mathbb{R}\log |x-y|\mathrm{d}\nu(y)-Q(x)\leq l,\quad x\in \mathbb{R}\setminus J</math>
where <math>J=\bigcup\limits_{j=1}^q[a_j,b_j]</math> is the support of the measure. Define
:<math>q(x)=-\left(\frac{Q'(x)}{2}\right)^2+\int \frac{Q'(x)-Q'(y)}{x-y}\mathrm{d}\nu_{Q}(y)</math>.
The equilibrium measure <math>\nu_{Q}</math> has the following Radon–Nikodym density
:<math>\frac{\mathrm{d}\nu_{Q}(x)}{\mathrm{d}x}=\frac{1}{\pi}\sqrt{q(x)}.</math><ref name="Harnad1">{{cite book|title=Random Matrices, Random Processes and Integrable Systems|first1=John|last1=Harnad|date=15 July 2013 |publisher=Springer|pages=263–266|isbn=978-1461428770}}</ref>
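For example, for the Gaussian potential <math>Q(x)=x^2/2</math> one has <math>Q'(x)=x</math>, so the difference quotient in <math>q(x)</math> is identically 1 and
:<math>q(x)=-\frac{x^2}{4}+\int \mathrm{d}\nu_{Q}(y)=1-\frac{x^2}{4},</math>
which recovers the semicircle density <math>\frac{\mathrm{d}\nu_{Q}(x)}{\mathrm{d}x}=\frac{1}{2\pi}\sqrt{4-x^2}</math> supported on <math>J=[-2,2]</math>, in agreement with the [[Wigner semicircle distribution|Wigner semicircle law]].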

=== Mesoscopic regime ===
The typical statement of the Wigner semicircular law is equivalent to the following: for each ''fixed'' interval <math>[\lambda_0 - \Delta\lambda, \lambda_0 + \Delta\lambda]</math> centered at a point <math>\lambda_0</math>, as <math>N</math>, the number of dimensions of the Gaussian ensemble, increases, the proportion of the eigenvalues falling within the interval converges to <math>\int_{\lambda_0 - \Delta\lambda}^{\lambda_0 + \Delta\lambda} \rho(t) \, dt</math>, where <math>\rho(t)</math> is the density of the semicircular distribution.<ref>{{Cite journal |last1=Erdős |first1=László |last2=Schlein |first2=Benjamin |last3=Yau |first3=Horng-Tzer |date=April 2009 |title=Local Semicircle Law and Complete Delocalization for Wigner Random Matrices |url=http://link.springer.com/10.1007/s00220-008-0636-9 |journal=Communications in Mathematical Physics |language=en |volume=287 |issue=2 |pages=641–655 |arxiv=0803.0542 |bibcode=2009CMaPh.287..641E |doi=10.1007/s00220-008-0636-9 |issn=0010-3616}}</ref><ref name=":3" />

If <math>\Delta \lambda</math> is allowed to decrease as <math>N</math> increases, one obtains strictly stronger theorems, called "local laws" or laws of the "mesoscopic regime".

The mesoscopic regime is intermediate between the local and the global regimes. In the ''mesoscopic regime'', one is interested in the limit distribution of eigenvalues in a set that shrinks to zero, but slowly enough that the number of eigenvalues inside it tends to infinity.

For example, the Ginibre ensemble satisfies a mesoscopic law: for any sequence of shrinking disks inside the unit disk with areas <math>A_n = O(n^{-1+\epsilon })</math>, the conditional distribution of the spectrum inside the disks also converges to a uniform distribution. That is, if we cut out the shrinking disks along with the spectrum falling inside them, then scale the disks up to unit area, the spectra converge to a flat distribution in the disks.<ref name=":3">{{Cite journal |last1=Bourgade |first1=Paul |last2=Yau |first2=Horng-Tzer |last3=Yin |first3=Jun |date=2014-08-01 |title=Local circular law for random matrices |url=https://doi.org/10.1007/s00440-013-0514-z |journal=Probability Theory and Related Fields |language=en |volume=159 |issue=3 |pages=545–595 |doi=10.1007/s00440-013-0514-z |issn=1432-2064|arxiv=1206.1449 }}</ref>
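
A minimal numerical sketch of this mesoscopic statement (using Python with NumPy; the parameters are illustrative assumptions, not from the cited work) counts eigenvalues of a rescaled complex Ginibre matrix in a shrinking disk and compares the count and the radial spread with the uniform prediction:

<syntaxhighlight lang="python">
import numpy as np

# Mesoscopic law for the complex Ginibre ensemble: in a disk of area
# A_n = n^(-1+ε) well inside the unit disk, the number of eigenvalues of
# G_n / sqrt(n) is about n·A_n/π, and the points look uniform in the disk.

rng = np.random.default_rng(1)
n, eps = 2000, 0.5
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
spec = np.linalg.eigvals(G) / np.sqrt(n)

center, area = 0.3 + 0.2j, n ** (-1 + eps)
radius = np.sqrt(area / np.pi)
inside = spec[np.abs(spec - center) < radius]

print("observed count:", inside.size, " expected:", n * area / np.pi)
# For points uniform in a disk of radius R, E[(r/R)^2] = 1/2.
print("mean squared relative radius:", np.mean(np.abs(inside - center)**2) / radius**2)
</syntaxhighlight>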


===Local regime===
In the ''local regime'', one is interested in the limit distribution of eigenvalues in a set that shrinks so fast that the number of eigenvalues inside it remains <math>O(1)</math>.
Typically this means studying the spacings between eigenvalues and, more generally, the joint distribution of eigenvalues in an interval of length of order 1/''n''. One distinguishes between ''bulk statistics'', pertaining to intervals inside the support of the limiting spectral measure, and ''edge statistics'', pertaining to intervals near the boundary of the support.


====Bulk statistics====
Formally, fix <math>\lambda_0</math> in the [[Interior (topology)|interior]] of the [[Support (measure theory)|support]] of <math>N(\lambda)</math>. Then consider the [[point process]]
<math display="block"> \Xi(\lambda_0) = \sum_j \delta\Big({\cdot} - n \rho(\lambda_0) (\lambda_j - \lambda_0) \Big)~,</math>
where <math>\lambda_j</math> are the eigenvalues of the random matrix.

The point process <math>\Xi(\lambda_0)</math> captures the statistical properties of eigenvalues in the vicinity of <math>\lambda_0</math>. For the [[#Gaussian ensembles|Gaussian ensembles]], the limit of <math>\Xi(\lambda_0)</math> is known;<ref name=mehta /> thus, for GUE it is a [[determinantal point process]] with the kernel
<math display="block"> K(x, y) = \frac{\sin \pi(x-y)}{\pi(x-y)} </math>
(the ''sine kernel'').
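
For the GUE, a practical consequence of the sine-kernel limit is that the unfolded nearest-neighbour spacing density is well approximated by the Wigner surmise <math>p(s)=\tfrac{32}{\pi^2}s^2 e^{-4s^2/\pi}</math>. The sketch below (an illustration only, using Python with NumPy; the spacings are crudely unfolded by dividing by their mean in a small window around <math>\lambda_0=0</math>) compares empirical moments with the surmise:

<syntaxhighlight lang="python">
import numpy as np

# Bulk spacing statistics for the GUE: unfolded spacings should follow,
# approximately, the Wigner surmise p(s) = (32/π²) s² exp(-4s²/π),
# which has E[s] = 1 and E[s²] = 3π/8 ≈ 1.178.

rng = np.random.default_rng(2)
n = 2000
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
eigs = np.sort(np.linalg.eigvalsh((X + X.conj().T) / (2 * np.sqrt(n))))

bulk = eigs[int(0.45 * n): int(0.55 * n)]   # window around λ0 = 0, inside the bulk
s = np.diff(bulk)
s /= s.mean()                               # crude unfolding

print("E[s]  =", s.mean(), " (surmise: 1)")
print("E[s²] =", np.mean(s**2), " (surmise:", 3 * np.pi / 8, ")")
</syntaxhighlight>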


The ''universality'' principle postulates that the limit of <math>\Xi(\lambda_0)</math> as <math>n \to \infty</math> should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on <math>\lambda_0</math>). Rigorous proofs of universality are known for invariant matrix ensembles<ref>{{cite journal
| last1 = Pastur | first1 = L.
| last2 = Shcherbina | first2 = M.|author2-link= Mariya Shcherbina
| title = Universality of the local eigenvalue statistics for a class of unitary invariant random matrix ensembles
| journal = Journal of Statistical Physics
| year = 1997
| volume = 86
| issue = 1–2
| pages = 109–147
| doi = 10.1007/BF02180200
| bibcode = 1997JSP....86..109P
| s2cid = 15117770
}}</ref><ref>{{cite journal
| last1 = Deift | first1 = P.
| last2 = Kriecherbauer | first2 = T.
| last3 = McLaughlin | first3 = K.T.-R.
| last4 = Venakides | first4 = S.
| last5 = Zhou | first5 = X.
| title = Asymptotics for polynomials orthogonal with respect to varying exponential weights
| journal = International Mathematics Research Notices
| year = 1997
| issue = 16
| pages = 759–782
| doi = 10.1155/S1073792897000500 | volume=1997
| doi-access = free
}}</ref> and Wigner matrices.<ref>{{cite journal
| last1 = Erdős | first1 = L.
| last2 = Péché | first2 = S. | author2-link = Sandrine Péché
| last3 = Ramírez | first3 = J.A.
| last4 = Schlein | first4 = B.
| last5 = Yau | first5 = H.T.
| title = Bulk universality for Wigner matrices
| journal = Communications on Pure and Applied Mathematics
| year = 2010
| volume = 63
| issue = 7
| pages = 895–925
| doi = 10.1002/cpa.20317
| arxiv= 0905.4176
}}</ref><ref>{{cite journal
| last1 = Tao | first1 = Terence | authorlink1 = Terence Tao
| last2 = Vu | first2 = Van H. | authorlink2 = Van H. Vu
| title = Random matrices: universality of local eigenvalue statistics up to the edge
| journal = Communications in Mathematical Physics
| year = 2010
| volume = 298
| issue = 2
| pages = 549–572
| doi = 10.1007/s00220-010-1044-5
| bibcode = 2010CMaPh.298..549T
| arxiv = 0908.1982
| s2cid = 16594369 }}</ref>


====Edge statistics====
{{Main|Tracy–Widom distribution}}
One example of edge statistics is the [[Tracy–Widom distribution]].

As another example, consider the Ginibre ensemble. It can be real or complex. The real Ginibre ensemble has i.i.d. standard Gaussian entries <math>\mathcal N(0, 1)</math>, and the complex Ginibre ensemble has i.i.d. standard complex Gaussian entries <math>\mathcal N(0, 1/2) + i\mathcal N(0, 1/2)</math>.

Now let <math>G_n</math> be sampled from the real or complex ensemble, and let <math>\rho(G_n)</math> be the largest absolute value of its eigenvalues (the spectral radius):<math display="block">\rho(G_n) := \max_j |\lambda_j|</math>We have the following theorem for the edge statistics:<ref>{{Cite journal |last=Rider |first=B |date=2003-03-28 |title=A limit theorem at the edge of a non-Hermitian random matrix ensemble |url=https://iopscience.iop.org/article/10.1088/0305-4470/36/12/331 |journal=Journal of Physics A: Mathematical and General |volume=36 |issue=12 |pages=3401–3409 |doi=10.1088/0305-4470/36/12/331 |bibcode=2003JPhA...36.3401R |issn=0305-4470}}</ref>

{{Math theorem
| name = Edge statistics of the Ginibre ensemble
| note =
| math_statement = For <math>G_n</math> and <math>\rho\left(G_n\right)</math> as above, with probability one,
<math display=block>\lim _{n \rightarrow \infty} \frac{1}{\sqrt{n}} \rho\left(G_n\right)=1</math>

Moreover, if <math>\gamma_n=\log \left(\frac{n}{2 \pi}\right)-2 \log (\log (n))</math> and
<math display=block>Y_n:=\sqrt{4 n \gamma_n}\left(\frac{1}{\sqrt{n}} \rho\left(G_n\right)-1-\sqrt{\frac{\gamma_n}{4 n}}\right),</math>
then <math>Y_n</math> converges in distribution to the [[Gumbel law]], i.e., the probability measure on <math>\mathbb{R}</math> with cumulative distribution function <math>F_{\mathrm{Gum}}(x)=e^{-e^{-x}}</math>.
}}

This theorem refines the [[Circular law|circular law of the Ginibre ensemble]]. In words, the circular law says that the spectrum of <math>\frac{1}{\sqrt{n}} G_n</math> almost surely falls uniformly on the unit disc, while the edge statistics theorem states that the radius of the almost-unit-disk is about <math>1-\sqrt{\frac{\gamma_n}{4 n}}</math> and fluctuates on a scale of <math>\frac{1}{\sqrt{4 n \gamma_n}}</math>, according to the Gumbel law.
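
A direct simulation of the theorem (a sketch only, using Python with NumPy; convergence in <math>n</math> is slow, so the agreement at moderate sizes is rough) compares the empirical distribution of <math>Y_n</math> for the complex Ginibre ensemble with the Gumbel cumulative distribution function:

<syntaxhighlight lang="python">
import numpy as np

# Edge statistics of the complex Ginibre ensemble: the rescaled spectral
# radius Y_n should be approximately Gumbel, P(Y_n ≤ x) ≈ exp(-exp(-x)).

rng = np.random.default_rng(3)
n, trials = 500, 200
gamma = np.log(n / (2 * np.pi)) - 2 * np.log(np.log(n))

Y = np.empty(trials)
for t in range(trials):
    G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    rho = np.max(np.abs(np.linalg.eigvals(G)))        # spectral radius
    Y[t] = np.sqrt(4 * n * gamma) * (rho / np.sqrt(n) - 1 - np.sqrt(gamma / (4 * n)))

for x in (-1.0, 0.0, 1.0, 2.0):
    print(f"x = {x:4}: empirical {np.mean(Y <= x):.3f}  Gumbel {np.exp(-np.exp(-x)):.3f}")
</syntaxhighlight>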

== Correlation functions ==

The joint probability density of the eigenvalues of <math>n\times n</math> random Hermitian matrices <math> M \in \mathbf{H}^{n \times n} </math>, with partition functions of the form
<math display="block">
Z_{n,V} = \int_{M \in \mathbf{H}^{n \times n}} d\mu_0(M)e^{-\text{tr}(V(M))},
</math>
where
<math display="block">
V(x):=\sum_{j=1}^\infty v_j x^j
</math>
and <math> d\mu_0(M)</math> is the standard Lebesgue measure on the space <math> \mathbf{H}^{n \times n}</math> of Hermitian <math> n \times n </math> matrices, is given by
<math display="block">
p_{n,V}(x_1,\dots, x_n) = \frac{1}{Z_{n,V}}\prod_{i<j} (x_i-x_j)^2 e^{-\sum_i V(x_i)}.
</math>
The <math>k</math>-point correlation functions (or ''marginal distributions'')
are defined as
<math display="block">
R^{(k)}_{n,V}(x_1,\dots,x_k) = \frac{n!}{(n-k)!} \int_{\mathbf{R}}dx_{k+1} \cdots \int_{\mathbf{R}} dx_{n} \, p_{n,V}(x_1,x_2,\dots,x_n),
</math>
which are symmetric functions of their variables.
In particular, the one-point correlation function, or ''density of states'', is
<math display="block">
R^{(1)}_{n,V}(x_1) = n\int_{\mathbf{R}}dx_{2} \cdots \int_{\mathbf{R}} dx_{n} \, p_{n,V}(x_1,x_2,\dots,x_n).
</math>
Its integral over a Borel set <math>B \subset \mathbf{R}</math> gives the expected number of eigenvalues contained in <math>B</math>:
<math display="block">
\int_{B} R^{(1)}_{n,V}(x)dx = \mathbf{E}\left(\#\{\text{eigenvalues in }B\}\right).
</math>

The following result expresses these correlation functions as determinants of the matrices formed from evaluating the appropriate integral kernel at the pairs <math>(x_i, x_j)</math> of points appearing within the correlator.

'''Theorem''' [Dyson–Mehta]
For any <math>k</math>, <math>1\leq k \leq n</math>, the <math>k</math>-point correlation function <math>R^{(k)}_{n,V}</math> can be written as a determinant
<math display="block">
R^{(k)}_{n,V}(x_1,x_2,\dots,x_k) = \det_{1\leq i,j \leq k}\left(K_{n,V}(x_i,x_j)\right),
</math>
where <math>K_{n,V}(x,y)</math> is the <math>n</math>th Christoffel–Darboux kernel
<math display="block">
K_{n,V}(x,y) := \sum_{k=0}^{n-1}\psi_k(x)\psi_k(y),
</math>
associated to <math>V</math>, written in terms of the quasipolynomials
<math display="block">
\psi_k(x) = \frac{1}{\sqrt{h_k}}\, p_k(x)\, e^{- V(x) / 2} ,
</math>
where <math> \{p_k(x)\}_{k\in \mathbf{N}} </math> is the sequence of monic polynomials of degree <math>k</math> orthogonal with respect to the weight <math>e^{-V(x)}</math>, <math>h_k</math> is the squared norm of <math>p_k</math> in this weight, and the <math>\psi_k</math> accordingly satisfy the orthonormality conditions
<math display="block">
\int_{\mathbf{R}} \psi_j(x) \psi_k(x) dx = \delta_{jk}.
</math>


==Other classes of random matrices==
===Wishart matrices===


{{Main|Wishart distribution}}


''Wishart matrices'' are ''n''&nbsp;×&nbsp;''n'' random matrices of the form {{math|1=''H'' = ''X'' ''X''<sup>*</sup>}}, where ''X'' is an ''n''&nbsp;×&nbsp;''m'' random matrix (''m''&nbsp;≥&nbsp;''n'') with independent entries, and ''X''<sup>*</sup> is its [[conjugate transpose]]. In the important special case considered by Wishart, the entries of ''X'' are identically distributed Gaussian random variables (either real or complex).


The [[Marchenko–Pastur distribution|limit of the empirical spectral measure of Wishart matrices]] was found<ref name=MP /> by [[Vladimir Marchenko]] and [[Leonid Pastur]].
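
A short numerical sketch (an illustration only, using Python with NumPy; the normalization <math>H = XX^{\mathsf T}/m</math> is one common convention) compares the spectrum of a real Wishart matrix with the Marchenko–Pastur predictions:

<syntaxhighlight lang="python">
import numpy as np

# Wishart matrix H = X Xᵀ / m versus the Marchenko–Pastur law with ratio
# λ = n/m: support [(1-√λ)², (1+√λ)²], moments E[x] = 1 and E[x²] = 1 + λ.

rng = np.random.default_rng(4)
n, m = 1000, 4000
lam = n / m

X = rng.standard_normal((n, m))
eigs = np.linalg.eigvalsh(X @ X.T / m)

print("spectrum range:", eigs.min(), eigs.max())
print("MP edges:      ", (1 - np.sqrt(lam))**2, (1 + np.sqrt(lam))**2)
print("E[x] =", eigs.mean(), " E[x²] =", np.mean(eigs**2), " MP: 1 and", 1 + lam)
</syntaxhighlight>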


===Random unitary matrices===
{{Main|Circular ensembles}}

===Non-Hermitian random matrices===
{{Main|Circular law}}

==Selected bibliography==

=== Books ===
* {{cite book|last=Mehta|first=M.L.| title=Random Matrices|year=2004| publisher=Elsevier/Academic Press |location=Amsterdam|isbn=0-12-088409-7}}
* {{cite book|last1=Anderson|first1=G.W.| last2=Guionnet|first2=A.|last3=Zeitouni|first3=O.|title=An introduction to random matrices.|year=2010|publisher=Cambridge University Press|location=Cambridge| isbn=978-0-521-19452-5}}
* {{cite book|last1=Akemann|first1=G.| last2=Baik|first2=J. |last3=Di Francesco|first3=P. | title=The Oxford Handbook of Random Matrix Theory |year=2011|publisher=Oxford University Press |location=Oxford| isbn=978-0-19-957400-1}}
* {{cite book |last1=Potters |first1=Marc |title=A First Course in Random Matrix Theory: for Physicists, Engineers and Data Scientists |last2=Bouchaud |first2=Jean-Philippe |date=2020-11-30 |publisher=Cambridge University Press |isbn=978-1-108-76890-0 |doi=10.1017/9781108768900}}

=== Survey articles ===
* {{cite journal|last1=Edelman|first1=A. |last2=Rao|first2=N.R|title=Random matrix theory| journal=Acta Numerica|year=2005| volume=14|pages=233–297 |doi=10.1017/S0962492904000236 |bibcode = 2005AcNum..14..233E |s2cid=16038147 }}
* {{cite journal|last=Pastur|first=L.A. | title=Spectra of random self-adjoint operators|journal=Russ. Math. Surv.| year=1973|volume=28|issue=1| pages=1–67 |doi=10.1070/RM1973v028n01ABEH001396 |bibcode = 1973RuMaS..28....1P |s2cid=250796916 }}
* {{cite journal | last1=Diaconis | first1=Persi | authorlink1=Persi Diaconis | title=Patterns in eigenvalues: the 70th Josiah Willard Gibbs lecture | mr=1962294 | year=2003 | journal=Bulletin of the American Mathematical Society |series=New Series | volume=40 | issue=2 | pages=155–178 | doi=10.1090/S0273-0979-03-00975-3| doi-access=free }}
* {{cite journal | last1=Diaconis | first1=Persi | authorlink1=Persi Diaconis | title=What is ... a random matrix? | url=https://www.ams.org/notices/200511/ | mr=2183871 | year=2005 | journal=[[Notices of the American Mathematical Society]] | issn=0002-9920 | volume=52 | issue=11 | pages=1348–1349}}
* {{Cite arXiv| last1=Eynard | first1=Bertrand | last2=Kimura | first2=Taro | last3=Ribault | first3=Sylvain | title=Random matrices | date=2015-10-15 | class=math-ph | eprint=1510.04430v2 }}


=== Historic works ===
* {{cite journal|last=Wigner|first=E.| title=Characteristic vectors of bordered matrices with infinite dimensions |journal=Annals of Mathematics |year=1955 |volume=62 |pages=548–564 | doi=10.2307/1970079| issue=3|jstor=1970079}}
* {{cite journal|last=Wishart|first=J.| title=Generalized product moment distribution in samples|journal=Biometrika| year=1928|volume=20A|issue=1–2|pages=32–52|doi=10.1093/biomet/20a.1-2.32}}
* {{cite journal|last1=von Neumann|first1=J. | last2=Goldstine|first2=H.H. |title=Numerical inverting of matrices of high order |journal=Bull. Amer. Math. Soc. |year=1947|volume=53|pages=1021–1099 | doi=10.1090/S0002-9904-1947-08909-6| issue=11|doi-access=free}}


== References ==
{{Reflist|30em}}




{{Matrix classes}}
{{Authority control}}


{{DEFAULTSORT:Random Matrix}}
[[Category:Random matrices| ]]
[[Category:Mathematical physics]]
[[Category:Probability theory]]
