The '''Swendsen–Wang algorithm''' is the first non-local or cluster [[algorithm]] for [[Monte Carlo simulation]] of large systems near [[Critical point (thermodynamics)|criticality]]. It was introduced by [[Robert Swendsen]] and [[Jian-Sheng Wang]] in 1987 at [[Carnegie Mellon University|Carnegie Mellon]].

The original algorithm was designed for the [[Ising model|Ising]] and Potts models, and it was later generalized to other systems as well, such as the XY model (by the [[Wolff algorithm]]) and particles of fluids. The key ingredient was the [[random cluster model]], a representation of the Ising or [[Potts model|Potts]] model through percolation models of connecting bonds, due to Fortuin and Kasteleyn. It was generalized by Barbu and Zhu<ref>{{Cite journal|last1=Barbu|first1=Adrian|last2=Zhu|first2=Song-Chun|date=August 2005|title=Generalizing Swendsen-Wang to sampling arbitrary posterior probabilities|url=https://pubmed.ncbi.nlm.nih.gov/16119263/|journal=IEEE Transactions on Pattern Analysis and Machine Intelligence|volume=27|issue=8|pages=1239–1253|doi=10.1109/TPAMI.2005.161|issn=0162-8828|pmid=16119263|s2cid=410716}}</ref> to arbitrary sampling probabilities by viewing it as a [[Metropolis–Hastings algorithm]] and computing the acceptance probability of the proposed Monte Carlo move.


== Motivation ==
The problem of critical slowing down affecting local processes is of fundamental importance in the study of second-order [[phase transition]]s (like the ferromagnetic transition in the [[Ising model]]), as increasing the size of the system in order to reduce finite-size effects has the disadvantage of requiring a far larger number of moves to reach thermal equilibrium. Indeed, the correlation time <math>\tau</math> usually increases as <math>L^z</math> with <math>z\simeq 2</math> or greater; since, to be accurate, the simulation time must be <math>t\gg\tau</math>, this is a major limitation in the size of the systems that can be studied through [[Local algorithm|local algorithms]]. The SW algorithm was the first to produce unusually small values for the dynamical critical exponents: <math>z=0.35</math> for the 2D Ising model (<math>z=2.125</math> for standard simulations) and <math>z=0.75</math> for the 3D Ising model, as opposed to <math>z=2.0</math> for standard simulations.


== Description ==
{{Main|Random cluster model}}
The algorithm is non-local in the sense that a single sweep updates a collection of spin variables based on the [[Random cluster model|Fortuin–Kasteleyn representation]]. The update is done on a "cluster" of spin variables connected by open bond variables that are generated through a [[Percolation theory|percolation]] process, based on the interaction states of the spins.

Consider a typical ferromagnetic Ising model with only nearest-neighbor interaction.

* Starting from a given configuration of spins, we associate to each pair of nearest neighbours on sites <math>n,m</math> a random variable <math>b_{n,m}\in\{0,1\}</math>, which is interpreted in the following way: if <math>b_{n,m}=0</math> then there is no link between the sites <math>n</math> and <math>m</math> (the bond is closed); if <math>b_{n,m}=1</math> then there is a link connecting the spins (the bond is open). These values are assigned according to the following (conditional) probability distribution:

:<math>P\left[b_{n,m}=0\mid\sigma_n\neq\sigma_m\right]=1</math>;
:<math>P\left[b_{n,m}=1\mid\sigma_n\neq\sigma_m\right]=0</math>;
:<math>P\left[b_{n,m}=0\mid\sigma_n=\sigma_m\right]=e^{-2\beta J}</math>;
:<math>P\left[b_{n,m}=1\mid\sigma_n=\sigma_m\right]=1-e^{-2\beta J}</math>;

where <math>J>0</math> is the ferromagnetic coupling strength.

This probability distribution has been derived in the following way: the Hamiltonian of the Ising model is

:<math>H[\sigma]=-J\sum_{\langle i,j\rangle}\sigma_i\sigma_j</math>,

and the partition function is

:<math>Z=\sum_{\{\sigma\}}e^{-\beta H[\sigma]}</math>.

Consider the interaction between a pair of selected sites <math>n</math> and <math>m</math> and eliminate it from the total Hamiltonian, defining

:<math>H_{n,m}[\sigma]=H[\sigma]+J\sigma_n\sigma_m.</math>

Define also the restricted sums:

:<math>Z_{n,m}^{\text{same}}=\sum_{\{\sigma\}}\delta_{\sigma_n,\sigma_m}\,e^{-\beta H_{n,m}[\sigma]}</math>;
:<math>Z_{n,m}^{\text{diff}}=\sum_{\{\sigma\}}\left(1-\delta_{\sigma_n,\sigma_m}\right)e^{-\beta H_{n,m}[\sigma]}.</math>

Introduce the quantity

:<math>Z_{n,m}^{\text{ind}}=Z_{n,m}^{\text{same}}+Z_{n,m}^{\text{diff}}</math>;

the partition function can be rewritten as

:<math>Z=\left(e^{\beta J}-e^{-\beta J}\right)Z_{n,m}^{\text{same}}+e^{-\beta J}Z_{n,m}^{\text{ind}}.</math>

Since the first term contains a restriction on the spin values whereas there is no restriction in the second term, the weighting factors (properly normalized) can be interpreted as probabilities of forming/not forming a link between the sites: a link is formed between <math>n</math> and <math>m</math> with probability <math>1-e^{-2\beta J}</math> if <math>\sigma_n=\sigma_m</math>, and never otherwise. The process can be easily adapted to antiferromagnetic spin systems, as it is sufficient to eliminate <math>Z_{n,m}^{\text{same}}</math> in favor of <math>Z_{n,m}^{\text{diff}}</math> (as suggested by the change of sign in the interaction constant).

* After assigning the bond variables, we identify the same-spin clusters formed by connected sites and invert all the spin variables in each cluster with probability 1/2. At the following time step we have a new starting Ising configuration, which will produce a new clustering and a new collective spin-flip (a schematic implementation of one such sweep is sketched below).
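
The following is a minimal, illustrative sketch of one such sweep in Python; it is not taken from the original paper. The lattice size, the inverse temperature, the use of free boundary conditions, and names such as <code>swendsen_wang_sweep</code> are choices made for this example.

<syntaxhighlight lang="python">
# Illustrative sketch: one Swendsen-Wang sweep for the 2D nearest-neighbour
# ferromagnetic Ising model with free boundary conditions.
import numpy as np

rng = np.random.default_rng(0)

def find(parent, i):
    """Union-find: return the root of site i, with path compression."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, i, j):
    """Merge the clusters containing sites i and j."""
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[rj] = ri

def swendsen_wang_sweep(spins, beta, J=1.0):
    """One cluster update: open a bond between equal nearest neighbours with
    probability 1 - exp(-2*beta*J), then flip every cluster with probability 1/2."""
    L = spins.shape[0]
    p_open = 1.0 - np.exp(-2.0 * beta * J)
    parent = list(range(L * L))

    def idx(x, y):
        return x * L + y

    # Bond assignment: only pairs of equal spins may be linked.
    for x in range(L):
        for y in range(L):
            for dx, dy in ((1, 0), (0, 1)):            # right and down neighbours
                nx, ny = x + dx, y + dy
                if nx < L and ny < L and spins[x, y] == spins[nx, ny]:
                    if rng.random() < p_open:
                        union(parent, idx(x, y), idx(nx, ny))

    # Cluster flip: every connected component is inverted with probability 1/2.
    flip = {}
    for x in range(L):
        for y in range(L):
            root = find(parent, idx(x, y))
            if root not in flip:
                flip[root] = rng.random() < 0.5
            if flip[root]:
                spins[x, y] *= -1
    return spins

# Usage: a few sweeps on a small lattice near the 2D critical point (J = 1),
# where beta_c = ln(1 + sqrt(2)) / 2 is approximately 0.4407.
L, beta = 16, 0.4407
spins = rng.choice(np.array([-1, 1]), size=(L, L))
for _ in range(100):
    spins = swendsen_wang_sweep(spins, beta)
print("magnetization per spin:", spins.mean())
</syntaxhighlight>

A union–find structure is used here to identify the same-spin clusters; any connected-component labelling method (for example breadth-first search or the Hoshen–Kopelman algorithm) could be used instead.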


== Correctness ==
It can be shown that this algorithm leads to equilibrium configurations. To show this, we interpret the algorithm as a [[Markov chain]], and show that the chain is [[Ergodicity|ergodic]] (when used together with other algorithms) and satisfies [[detailed balance]], such that the equilibrium [[Boltzmann distribution]] is equal to the [[stationary distribution]] of the chain.


Ergodicity means that it is possible to move from any initial state to any final state with a finite number of updates. It has been shown that the SW algorithm is not ergodic in general (in the thermodynamic limit).<ref>{{Cite journal|last1=Gore|first1=Vivek K.|last2=Jerrum|first2=Mark R.|date=1999-10-01|title=The Swendsen–Wang Process Does Not Always Mix Rapidly|url=https://doi.org/10.1023/A:1004610900745|journal=Journal of Statistical Physics|language=en|volume=97|issue=1|pages=67–86|doi=10.1023/A:1004610900745|bibcode=1999JSP....97...67G|s2cid=189821827|issn=1572-9613}}</ref> Thus in practice, the SW algorithm is usually used in conjunction with single spin-flip algorithms such as the Metropolis–Hastings algorithm to achieve ergodicity.


The SW algorithm does, however, satisfy detailed balance. To show this, we note that every transition between two Ising spin states must pass through some bond configuration in the percolation representation. Let us fix a particular bond configuration: what matters in comparing the probabilities related to it is the number of factors <math>q=e^{-2\beta J}</math> for each missing bond between neighboring spins with the same value; the probability of going to a certain Ising configuration compatible with a given bond configuration is uniform (say <math>p</math>). So the ratio of the transition probabilities of going from one state to another is
:<math>\frac{P(\sigma\rightarrow\sigma')}{P(\sigma'\rightarrow\sigma)}=\frac{q^{n_c(\sigma)}}{q^{n_c(\sigma')}}=e^{-\beta\left[E(\sigma')-E(\sigma)\right]},</math>

since <math>E(\sigma')-E(\sigma)=2J\left[n_c(\sigma)-n_c(\sigma')\right]</math>, where <math>n_c(\sigma)</math> denotes the number of closed bonds between neighboring spins with the same value in the configuration <math>\sigma</math>.

This is valid for every bond configuration the system can pass through during its evolution, so detailed balance is satisfied for the total transition probability. This proves that the algorithm is correct.
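
As a sanity check of the detailed-balance argument, the transition matrix of a single Swendsen–Wang update can be built exactly for a very small system and compared against the Boltzmann distribution. The sketch below does this for a 2×2 lattice with free boundaries; the parameters and the brute-force enumeration are illustrative choices, not part of the original algorithm description.

<syntaxhighlight lang="python">
# Exact construction of the Swendsen-Wang transition matrix on a 2x2
# free-boundary Ising lattice, followed by numerical checks of detailed
# balance and stationarity with respect to the Boltzmann distribution.
import itertools
import numpy as np

J, beta = 1.0, 0.6                          # illustrative parameters
q = np.exp(-2.0 * beta * J)                 # weight of a closed bond between equal spins
bonds = [(0, 1), (2, 3), (0, 2), (1, 3)]    # nearest-neighbour pairs of the 2x2 lattice

states = list(itertools.product([-1, 1], repeat=4))

def energy(s):
    return -J * sum(s[i] * s[j] for i, j in bonds)

weights = np.array([np.exp(-beta * energy(s)) for s in states])
pi = weights / weights.sum()                # Boltzmann distribution

def clusters(open_bonds):
    """Label the connected components of sites 0..3 under the open bonds."""
    parent = list(range(4))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in open_bonds:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
    roots = [find(i) for i in range(4)]
    labels = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [labels[r] for r in roots], len(labels)

T = np.zeros((16, 16))
for a, s in enumerate(states):
    eligible = [b for b in bonds if s[b[0]] == s[b[1]]]     # only equal spins may bond
    for choice in itertools.product([0, 1], repeat=len(eligible)):
        open_bonds = [b for b, c in zip(eligible, choice) if c]
        p_bonds = float(np.prod([(1.0 - q) if c else q for c in choice])) if choice else 1.0
        label, n = clusters(open_bonds)
        # Each compatible spin configuration (one sign per cluster) is equally likely.
        for signs in itertools.product([-1, 1], repeat=n):
            s_new = tuple(signs[label[i]] for i in range(4))
            T[a, states.index(s_new)] += p_bonds * 0.5 ** n

assert np.allclose(T.sum(axis=1), 1.0)                      # rows are probability vectors
assert np.allclose(pi[:, None] * T, (pi[:, None] * T).T)    # detailed balance
assert np.allclose(pi @ T, pi)                              # Boltzmann distribution is stationary
print("detailed balance and stationarity verified on the 2x2 lattice")
</syntaxhighlight>

In this check the rows of <code>T</code> sum to one, the matrix of flows <code>pi[:, None] * T</code> is symmetric (detailed balance), and <code>pi</code> is left unchanged by <code>T</code> (stationarity), in agreement with the argument above.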


== Efficiency ==
Although not analytically clear from the original paper, the reason why all the values of <math>z</math> obtained with the SW algorithm are much lower than the exact lower bound for single-spin-flip algorithms (<math>z\geq\gamma/\nu</math>) is that the correlation length divergence is strictly related to the formation of percolation clusters, which are flipped together. In this way the relaxation time is significantly reduced. Another way to view this is through the correspondence between the spin statistics and cluster statistics in the [[Random cluster model|Edwards–Sokal representation]].<ref>{{Cite journal|last1=Edwards|first1=Robert G.|last2=Sokal|first2=Alan D.|date=1988-09-15|title=Generalization of the Fortuin-Kasteleyn-Swendsen-Wang representation and Monte Carlo algorithm|url=https://link.aps.org/doi/10.1103/PhysRevD.38.2009|journal=Physical Review D|volume=38|issue=6|pages=2009–2012|doi=10.1103/PhysRevD.38.2009|pmid=9959355|bibcode=1988PhRvD..38.2009E}}</ref> Some mathematically rigorous results on the mixing time of this process have been obtained by Guo and Jerrum [https://projecteuclid.org/journals/annals-of-applied-probability/volume-28/issue-2/Random-cluster-dynamics-for-the-Ising-model-is-rapidly-mixing/10.1214/17-AAP1335.full].
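
In practice, dynamical exponents such as <math>z</math> are estimated by measuring the integrated autocorrelation time of an observable (for example the magnetization or the energy) at several lattice sizes and fitting <math>\tau\sim L^z</math>. A minimal sketch of such an estimator is given below; the truncation window and the synthetic test series are illustrative assumptions, not part of the referenced results.

<syntaxhighlight lang="python">
# Sketch of an integrated-autocorrelation-time estimator, the quantity whose
# growth with lattice size L (tau ~ L^z) defines the dynamical exponent z.
import numpy as np

def integrated_autocorrelation_time(series):
    """Estimate tau_int = 1/2 + sum_{t>=1} rho(t) from a 1-D time series."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    var = np.mean(x * x)
    tau = 0.5
    for t in range(1, len(x)):
        rho = np.mean(x[:-t] * x[t:]) / var
        if rho <= 0.0:                  # stop at the first non-positive estimate
            break
        tau += rho
    return tau

# Usage with synthetic AR(1) data whose exact tau_int = (1 + phi) / (2 (1 - phi)) = 9.5.
rng = np.random.default_rng(1)
phi, n = 0.9, 100_000
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
print("estimated tau_int:", integrated_autocorrelation_time(x))
</syntaxhighlight>

In an actual measurement of <math>z</math>, this estimate would be repeated for a sequence of lattice sizes and the results fitted to <math>\tau\sim L^z</math>.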


== Generalizations ==
The algorithm is not efficient in simulating [[Geometrical frustration|frustrated systems]], because the [[Percolation critical exponents|correlation length of the clusters]] is larger than the [[Correlation function (statistical mechanics)|correlation length of the spin model]] in the presence of frustrated interactions.<ref>{{Cite journal|last1=Cataudella|first1=V.|last2=Franzese|first2=G.|last3=Nicodemi|first3=M.|last4=Scala|first4=A.|last5=Coniglio|first5=A.|date=1994-03-07|title=Critical clusters and efficient dynamics for frustrated spin models|url=https://link.aps.org/doi/10.1103/PhysRevLett.72.1541|journal=Physical Review Letters|volume=72|issue=10|pages=1541–1544|doi=10.1103/PhysRevLett.72.1541|pmid=10055635|bibcode=1994PhRvL..72.1541C|hdl=2445/13250|hdl-access=free}}</ref> There are currently two main approaches to addressing this problem, so that the efficiency of cluster algorithms can be extended to frustrated systems.

The first approach is to extend the bond-formation rules to more non-local cells, and the second is to generate clusters based on more relevant order parameters. In the first case, we have the [[KBD algorithm]] for the [[Domino tiling|fully frustrated Ising model]], where the decision to open bonds is made on each plaquette, arranged in a checkerboard pattern on the square lattice.<ref>{{Cite journal|last1=Kandel|first1=Daniel|last2=Ben-Av|first2=Radel|last3=Domany|first3=Eytan|date=1990-08-20|title=Cluster dynamics for fully frustrated systems|url=https://link.aps.org/doi/10.1103/PhysRevLett.65.941|journal=Physical Review Letters|volume=65|issue=8|pages=941–944|doi=10.1103/PhysRevLett.65.941|pmid=10043065|bibcode=1990PhRvL..65..941K}}</ref> In the second case, we have the [[replica cluster move]] for low-dimensional [[Spin glass|spin glasses]], where the clusters are generated based on spin overlaps, which is believed to be the relevant order parameter.


== See also ==
* [[Random cluster model]]
* [[Monte Carlo method]]
* [[Wolff algorithm]]
* http://www.hpjava.org/theses/shko/thesis_paper/node69.html
* http://www-fcs.acs.i.kyoto-u.ac.jp/~harada/monte-en.html


==References==
{{Reflist}}
*{{cite journal | last1=Swendsen | first1=Robert H. | last2=Wang | first2=Jian-Sheng | title=Nonuniversal critical dynamics in Monte Carlo simulations | journal=Physical Review Letters | publisher=American Physical Society (APS) | volume=58 | issue=2 | date=1987-01-12 | issn=0031-9007 | doi=10.1103/physrevlett.58.86 | pages=86–88 | pmid=10034599 | bibcode=1987PhRvL..58...86S }}
*Kasteleyn, P. W. and Fortuin, C. M. (1969) ''J. Phys. Soc. Jpn.'' Suppl. 26s:11
*{{cite journal | last1=Fortuin | first1=C.M. | last2=Kasteleyn | first2=P.W. | title=On the random-cluster model | journal=Physica | publisher=Elsevier BV | volume=57 | issue=4 | year=1972 | issn=0031-8914 | doi=10.1016/0031-8914(72)90045-6 | pages=536–564 | bibcode=1972Phy....57..536F }}
*{{cite journal | last1=Wang | first1=Jian-Sheng | last2=Swendsen | first2=Robert H. | title=Cluster Monte Carlo algorithms | journal=Physica A: Statistical Mechanics and Its Applications | publisher=Elsevier BV | volume=167 | issue=3 | year=1990 | issn=0378-4371 | doi=10.1016/0378-4371(90)90275-w | pages=565–579 | bibcode=1990PhyA..167..565W }}
*{{cite journal | last=Barbu | first=A. | title=Generalizing Swendsen-Wang to sampling arbitrary posterior probabilities | journal=IEEE Transactions on Pattern Analysis and Machine Intelligence | publisher=Institute of Electrical and Electronics Engineers (IEEE) | volume=27 | issue=8 | year=2005 | issn=0162-8828 | doi=10.1109/tpami.2005.161 | pages=1239–1253 | pmid=16119263 | s2cid=410716 }}
